Reference Frames for Expected Value

post by ozziegooen · 2014-03-16T19:22:39.976Z · LW · GW · Legacy · 25 comments

Contents

  Optimizing Future Decisions: Actual vs. Expected Value
  Judging Previous Decisions: Actual vs. Expected Value
  Judging
  Free Will Bounded Expected Value
  Conclusion: Should we Even Judge People or Decisions Anyway?

Puzzle 1: George mortgages his house to invest in lottery tickets. He wins and becomes a millionaire. Did he make a good choice?

Puzzle 2: The U.S. president must decide whether to bluff a nuclear war or concede to the USSR. He bluffs, and it just barely works. Although there were several close calls for nuclear catastrophe, everything works out okay. Was this ethical?

One interpretation of consequentialism is that decisions that produce good outcomes are good decisions, rather than decisions that produce good expected outcomes.[1][2] On this view, one is ethical if one’s actions end up producing positive outcomes, regardless of the intentions behind those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plot would have performed a very ‘morally good’ action.[3] This general view seems to be surprisingly common.[4]

This seems intuitively strange to many people; it certainly does to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others. However, while ‘expected value’ can be useful for individual decision making, I make the case that it is very difficult to use to judge other people’s decisions in a meaningful way.[5] This is because ‘expected value’ is typically defined in reference to a specific set of information and intelligence rather than an objective truth about the world.

Two questions to help guide this:

  1. Should we judge previous actions based on ‘expected’ or ‘actual’ value?
  2. Should we make future decisions to optimize ‘expected’ or ‘actual’ value?

I believe these are in a sense quite simple, but they require some attention to definitions.[6]

Optimizing Future Decisions: Actual vs. Expected Value

The second question is the easier of the two, so I’ll begin with it. The simple answer is that this is a question of defining ‘expected value’; once we do so, the question largely dissolves.

There is nothing fundamentally different between expected value and actual value. A fairer comparison might be between ‘expected value from the perspective of the decision maker’ and ‘expected value from a later, more accurate perspective’.

Expected value converges on actual value as information accumulates. Said differently, actual value is expected value with complete information.
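To illustrate with a minimal sketch (the numbers here are made up: a $1 ticket, a $500 prize, 1-in-1,000 odds), ‘actual value’ is just the expected value computed once the outcome is fully known:

```python
import random

TICKET_COST = 1.0   # illustrative numbers, not a real lottery
PRIZE = 500.0
ODDS = 1 / 1000

def expected_value(p_win):
    """Expected profit of buying one ticket, given a probability of winning."""
    return p_win * PRIZE - TICKET_COST

# Before the draw, only the odds are known.
print(expected_value(ODDS))                  # -0.5: a loss, in expectation

# After the draw, the outcome is known, so p_win collapses to 0 or 1,
# and 'expected value' becomes 'actual value'.
won = random.random() < ODDS
print(expected_value(1.0 if won else 0.0))   # either +499.0 or -1.0
```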

In the case of an individual successfully purchasing lottery tickets (Puzzle 1), the ‘actual value’ is still not exact from our point of view. While we may know how much money was won and what profit was made, we don’t know what the counterfactual would have been. It is still theoretically possible that in the worlds where George didn’t purchase the lottery tickets, he would have been substantially better off. While the fact that we have imperfect information doesn’t matter too much, I think it demonstrates that presenting a description of the outcome as ‘actual value’ is incomplete. ‘Actual value’ exists only theoretically, even after the fact.[7]

So the question becomes: ‘should one make a decision to optimize value using the information and knowledge available to them, or using perfect knowledge and information?’ Obviously, ‘perfect knowledge’ is inaccessible to them (otherwise ‘expected value’ and ‘actual value’ would be the same exact thing). I believe it should be quite apparent that the best one can do, and should do, is make the best decision using the available information.

This question is similar to asking ‘should you drive your car as quickly as your car can drive, or much faster than your car can drive?’ Obviously you may like to drive faster, but that’s by definition not an option. Another question: ‘should you do well in life or should you become an all-powerful dragon king?’

Judging Previous Decisions: Actual vs. Expected Value

Judging previous decisions can get tricky.

Let’s study the lottery example again. A person purchases a lottery ticket and wins. For simplicity, let’s say the decision to purchase the ticket was made purely to maximize money.

The question is, what is the expected value of purchasing the lottery ticket? How does this change depending on information and knowledge?

In general, purchasing a lottery ticket can be expected to be a net loss in earnings, and thus a bad decision. However, if one were sure they would win, it would be a pretty good idea. Given the knowledge that the player won, the player made a good decision. Winning the lottery is clearly better than not playing that one time.
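To put rough numbers on the first claim (these are illustrative, loosely resembling a large US lottery): with a $2 ticket and a 1-in-292,000,000 chance at a $100,000,000 jackpot, the expected profit is about (1/292,000,000) × $100,000,000 − $2 ≈ −$1.66. In expectation, you lose most of the ticket price.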

More interesting is considering the limitation not in information about the outcome, but in knowledge of probability. Say the player thought that they were likely to win the lottery, and that it was therefore a good purchase. This may seem insane to someone familiar with probability and the lottery system, but not everyone is familiar with these things.

From the point of view of the player, the lottery ticket purchase had net-positive expected utility. From the point of view of a person with knowledge of the lottery and/or statistics, the purchase had net-negative expected utility. From the point of view of either of these two groups, once they know that the ticket won, it was a net-positive decision.

                                                     No Knowledge of Outcome   Knowledge of Outcome
‘Intelligent’ Person with Knowledge of Probability   Negative                  Positive
Lottery Player                                       Positive                  Positive

Expected Value of purchasing a Lottery Ticket from different Reference Points

To make things a bit more interesting, imagine that there’s a genius out there with a computer simulation of our exact universe. This person can tell which lottery ticket will win in advance, because they can run the simulations. To this ‘genius’ it’s obvious that the purchase will have a net-positive outcome.

                                                     No Knowledge of Outcome   Knowledge of Outcome
Genius                                               Positive                  Positive
‘Intelligent’ Person with Knowledge of Probability   Negative                  Positive
Lottery Player                                       Positive                  Positive

Expected Value of purchasing a Lottery Ticket from different Reference Points
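As a rough sketch of the tables above (again with made-up numbers: a $1 ticket, a $1,000,000 prize, true odds of 1 in 10,000,000, and a player who mistakenly believes the odds are even), each reference frame is just a different probability assigned to winning:

```python
TICKET_COST = 1.0        # all numbers here are illustrative assumptions
PRIZE = 1_000_000.0

def expected_value(p_win):
    """Expected profit of buying one ticket, given a believed probability of winning."""
    return p_win * PRIZE - TICKET_COST

# Each reference frame assigns a different probability to the same event.
reference_frames = {
    "genius (simulated the draw; knows this ticket wins)": 1.0,
    "'intelligent' person (knows the true odds)":          1 / 10_000_000,
    "lottery player (mistakenly believes 50/50 odds)":     0.5,
}

for frame, p_win in reference_frames.items():
    ev = expected_value(p_win)
    print(f"{frame}: {'positive' if ev > 0 else 'negative'} ({ev:+,.2f})")
```

This reproduces the ‘No Knowledge of Outcome’ column: the same purchase is net-negative in one frame and net-positive in the other two.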

So what is the expected value of purchasing the lottery ticket? The answer is that the ‘expected value’ is completely dependent on the ‘reference frame’: a specific set of information and intelligence. From the reference frame of the ‘intelligent person’, the purchase was low in expected value, and so a bad decision. From that of the genius, it was a good decision. And from that of the player, a good decision as well.

Judging

So how do we judge this poor (well, soon rich) lottery player? They made a good decision relative to the results, relative to the genius’s reference frame, and relative to their own knowledge. Should we say ‘oh, this person should have had slightly more knowledge, but not too much knowledge, and thus they made a bad choice’? What would that even mean?

Perhaps we could judge the player for not reading up on lottery facts before playing. Wasn’t it irresponsible to fall for such a simple fallacy? Or perhaps the person was ‘lazy’ not to have learned probability in the first place.

Well, things like these seem like intuitions to me. We may have the intuition that the lottery is a poor choice, and we may find facts proving that intuition accurate. But the gambler may not share these intuitions. It seems unfair to consider any intuition ‘obvious’ to those who do not share it.

One might also say that the gambler probably knew it was a bad idea, but let his or her ‘inner irrationalities’ control the decision process. Perhaps they were trying to take an ‘easy way out’ of some sort. However, these claims seem quite judgmental as well. If a person experiences strong emotional responses (fear, anger, laziness), those inner struggles would change their expected value calculation. It might be a really bad, heuristically-driven ‘calculation’, but it would be the best they had at the time.

Free Will Bounded Expected Value

We are getting to the question of free will and determinism. After all, if there is any sort of free will, perhaps we have the ability to make decisions that are sub-optimal by our own expected value functions. Perhaps we commonly do so (otherwise there wouldn’t be much ‘free’ about it).

This would be interesting because it would imply an ‘expected result’ that the person should have calculated, even if they didn’t actually do so. We would need to understand the person’s actions, not in terms of what we know, or what they knew, but in terms of what they should have figured out given their knowledge.

This would require a very well-specified Free Will Boundary of some sort: a line around a few thought processes, parts of the brain, and resource constraints, which together define the best expected value calculation achievable within them. Anything short of this ‘optimal given the Free Will Boundary’ calculation would be fair game for judgment.

Conclusion: Should we Even Judge People or Decisions Anyway?

So, deciding to make future decisions based on expected value seems reasonable. The main question in this essay, the harder one, is whether we can judge previous decisions based on their respective expected values, and how we might come up with the relevant expected values to do so.

I think that we naturally judge people. We have old and modern heroes and villains. Judging people is simply something that humans do. However, I believe that on close inspection this is very challenging, if not impossible, to do reasonably and precisely.

Perhaps we should attempt to stop placing so much emphasis on individualism and simply try to do the best we can, while not judging others or their decisions much. Considerations of judging may be interesting, but the main takeaway may be the complexity itself, indicating that judgments are very subjective and incredibly messy.

That said, it can still be useful to analyze previous decisions and the individuals who made them. That seems like one of the best ways to update our priors about the world. We just need to remember not to take it personally.

  1. Dorsey, Dale. “Consequentialism, Metaphysical Realism, and the Argument from Cluelessness.” University of Kansas Department of Philosophy. http://people.ku.edu/~ddorsey/cluelessness.pdf

  2. Sinhababu, Neiladri. “Moral Luck.” TEDx presentation. http://www.youtube.com/watch?v=RQ7j7TD8PWc

  3. This is assuming the terrorists are trying to produce ‘disutility’ or a value separate from ‘utility’. I feel like from their perspective, maximizing an intrinsic value dissimilar from our notion of utility would be maximizing ‘expected value’. But analyzing the morality of people with alternative value systems is a very different matter.

  4. These people tend not to like consequentialism much.

  5. I don’t want to impose what I deem to be a false individualistic appeal, so consider this to mean that one would have a difficult time judging anyone at any time except for their spontaneous consciousness.

  6. I bring them up because they are what I considered, and discussed with others, before I understood what makes them frustrating to answer. Basically, they are nice starting points for getting toward the questions that were meant to be asked instead.

  7. This is true for essentially all physical activities. Thought experiments or very simple simulations may be exempt.

25 comments

Comments sorted by top scores.

comment by Richard_Kennaway · 2014-03-17T06:38:24.331Z · LW(p) · GW(p)

Puzzle 1: George mortgages his house to invest in lottery tickets. He wins and becomes a millionaire. Did he make a good choice?

This looks like a tree-falls-in-forest-did-it-make-a-sound question. The expected value was negative, the outcome was positive, "good choice" can mean either assessment, distinguish them, mystery dissolved.

‘expected value’ is typically defined in reference to a specific set of information and intelligence rather than an objective truth about the world.

Expected value is subjectively objective. It depends on the knowledge one has, but what knowledge one has is also an objective fact about the world.

After all, if there is any sort of free will, perhaps we have the ability to make decisions that are sub-optimal by our own expected value functions. Perhaps we commonly do so (otherwise there wouldn’t be much ‘free’ about it).

Is this Sartre's concept of free will as actions coming out of nowhere, free of all considerations of what would actually be a good idea, with suicide as the ultimate free act? Eliezer has provided the answer to the Problem of Free Will here.

Replies from: TheAncientGeek, TheAncientGeek
comment by TheAncientGeek · 2014-03-17T12:34:54.395Z · LW(p) · GW(p)

Yes. "Good" can mean desirable outcomes, or responsible decision making. The first obviously matches consequentialism. It appears not to be obvious to Lesswrongians that the second matches deontology. When we judge whether someone behaved culpably or not, we want to know whether they applied the rules and heuristic appropriate to their reference class (doctor, CEO, ships captain...). The consequences of their decision may have landed them in a tribunal, but we don't hold people to blame for applying the rules and getting the wrong results.

Replies from: ozziegooen
comment by ozziegooen · 2014-03-19T01:34:19.964Z · LW(p) · GW(p)

Perhaps I have misunderstood consequentialism and deontology, but my impression was that (many forms of) consequentialism prefers that people optimize expected utility, while deontology does not (it would consider other things, like 'not lying', as considerably more important). My impression was that this was basically the main differentiating factor.

Agree about the tribunal situation. From a consequentialist viewpoint it would seem like we would want to judge people formally (in tribunals) according to how well they made an expected value decision, rather than on the outcome. For one, because otherwise we would have a lot more court cases (anyone causally linked to a crime would be responsible).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-19T10:07:12.875Z · LW(p) · GW(p)

You need rules and heuristics to calculate expected value. How does that differ from deontology? The rules are not absolutes? But then it is still a compromise between D and C.

comment by TheAncientGeek · 2014-03-17T10:28:21.410Z · LW(p) · GW(p)

Freedom of a kind worth having would consist in being able to choose one's values, not in being able to go against them.

comment by Protagoras · 2014-03-16T21:11:35.578Z · LW(p) · GW(p)

You come to what is more or less the right consequentialist answer in the end, but it seems to me that your path is needlessly convoluted. Why are we judging past actions? Generally, the reason is to give us insight into and perhaps influence future decisions. So we don't judge the lottery purchase to have been good, because it wouldn't be a good idea to imitate it (we have no way to successfully imitate "buy a winning lottery ticket" behavior, and imitating "buy a lottery ticket" behavior has poor expected utility, and similarly for many broader or narrower classes of similar actions), and so we want to discourage people from imitating it, not encourage them. If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?

Replies from: ozziegooen, cousin_it, whales
comment by ozziegooen · 2014-03-17T00:08:23.597Z · LW(p) · GW(p)

your path is needlessly convoluted

Agreed. This really wasn't my best piece. I figured it would be better to publish it than not though. Was hoping it would turn out better. If the response is good I may rewrite it. However, I do feel like it is a complicated issue, so could require quite a bit of text to explain no matter how good the writing style.

Why are we judging past actions?

The first reason that comes to my mind is to say things like "X is a bad person", or "Y cheated on this test, which was bad", etc. If we are to evaluate them consequentially, I'm making the argument that seeing things from their point of view is exceedingly difficult. It's thus very difficult to ask if another person is acting in a 'utilitarian' way, especially if that person claims to be.

So we don't judge the lottery purchase to have been good,

In regard to the lottery purchase, the question is what does 'good' mean in the first place. I'm saying it is strongly coupled to a specific reference frame, and it's hard to make it an 'objective good' of any kind. However, it can be used to more clearly talk about specific kinds of 'good'. For instance, perhaps in this case if we used the 'reference frame' of our audience, we could explain the situation to them well, discouraging them (assuming a realistic audience).

If we're being good consequentialists, what other means could it possibly be appropriate to use in deciding how to judge other than basing it on the consequences of judging in that way?

I guess here the question is what it means to 'judge'. If 'judging' just means saying what happened (there was a person, he did this, this happened), then yes. If it is attempting to understand the decision making of the person in order to understand how 'morally good' that person is, or can be expected to be, those are different questions.

comment by cousin_it · 2014-03-16T22:12:45.047Z · LW(p) · GW(p)

Why are we judging past actions?

For example, to decide whether some institution should be reformed or left alone, we need to know whether it has a positive or negative effect. That requires evaluating counterfactuals about the past, which is surprisingly tricky, as I mentioned some time ago. That might be a little tangential to the OP, though.

comment by whales · 2014-03-16T21:57:05.314Z · LW(p) · GW(p)

Right, it seems kind of strange to declare that you're considering only states of the world in your decisions, but then to treat judgments of right and wrong as a deontological layer on top of that, where you consider whether the consequentialist rule was followed correctly. But that does seem to be a mainstream version of consequentialism. As far as I can tell, it mostly leads to convoluted, confused-sounding arguments like the above and the linked talk by Neiladri Sinhababu, but maybe I'm missing something important.

Replies from: ozziegooen
comment by ozziegooen · 2014-03-17T00:28:36.646Z · LW(p) · GW(p)

I think it leads to very confusing and technical arguments if free will is assumed. If not, there's basically no reason to morally judge others (other than the learning potential for future decisions).

I think the mainstream version of consequentialism, if I understand what you are saying correctly, can still be followed for personal decisions as they happen. That is, when making a decision, you personally do your best to optimize for the future. That seems quite reasonable to me; it's just really hard to understand and criticize from an outside perspective.

comment by ChrisBillington · 2014-03-17T01:24:06.393Z · LW(p) · GW(p)

I read most of this post with a furrowed brow, wondering what you were getting at, until I got to the point on free will, which I think makes some sense.

If good choices are relative to states of knowledge and abilities, then how are not all choices good choices, given that these things are beyond our control?

I think, yes, in order to have the concept of 'good' and 'bad' choices in hindsight, one has to assume the person could have acted differently, even though in a very strict free-will sense, they couldn't have.

However there are fundamental limits to how differently they could have acted — nobody can predict the outcome of a lottery for example. So I suppose we draw the line at what reasonable expectations for a human being are. But we still make individual exceptions — if you were to find out someone had a cognitive disability, you're not going to judge them as harshly for making a bad decision. This is different to saying it's not a bad decision — it is — it's just you're not going to hold them responsible for it. It still should not be emulated, as Protagoras put it.

I'm also pretty convinced that large scale random events are more often than not quantum random (that is, quantum randomness, though initially small in classical systems, is amplified by classical chaos such that different Everett branches get different lottery results and coin flips). So if you ask yourself "If I were in that person's position, should I have bought the lottery ticket?", well, the outcome is actually totally not predetermined. Not that I think any argument here should rely on the quantum vs classical randomness distinction, but I thought I'd mention it anyway.

But it seems like it's not even a coherent concept, to judge based on actual results rather than expected, so apart from the free will angle and pointing out that some people might have badly calculated expectations, I don't think it's an idea worth putting too much thought into, and I think that those interpreting consequentialist ethics in this way must be very confused people indeed.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-03-17T06:19:09.004Z · LW(p) · GW(p)

If good choices are relative to states of knowledge and abilities, then how are not all choices good choices, given that these things are beyond our control?

In the same way that not all CPUs do arithmetic right.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-17T11:09:46.096Z · LW(p) · GW(p)

Yep. "Good" is normative.

comment by Jonathan Paulson (jpaulson) · 2014-03-16T20:05:24.086Z · LW(p) · GW(p)

Say the player thought that they were likely to win the lottery, and that it was therefore a good purchase. This may seem insane to someone familiar with probability and the lottery system, but not everyone is familiar with these things.

I would say this person made a good decision with bad information.

Perhaps we should attempt to stop placing so much emphasis on individualism and just try to do the best we can while not judging others nor other decisions much.

There are lots of times when it's important to judge people e.g. for hiring or performance reviews.

Replies from: ozziegooen
comment by ozziegooen · 2014-03-16T23:58:00.305Z · LW(p) · GW(p)

I would say this person made a good decision with bad information.

I would agree that they made a good decision, good decision being defined as 'decision which optimizes expected value with information about the outcome'. My point was to clarify what 'good decision' meant.

There are lots of times when it's important to judge people e.g. for hiring or performance reviews.

In this case I was attempting to look at a very simple example (the lottery) so we could make moral claims about individuals. This is different from general performance. On that note though, the question of trying to separate what in an individual's history they were or were not responsible for would be interesting for hiring or performance reviews, but it definitely is a tricky question.

comment by shokwave · 2014-03-18T16:08:23.760Z · LW(p) · GW(p)

One is ethical if one’s actions end up producing positive outcomes, regardless of the intentions behind those actions. For instance, a terrorist who accidentally foils an otherwise catastrophic terrorist plot would have performed a very ‘morally good’ action.

This seems intuitively strange to many people; it certainly does to me. Instead, ‘expected value’ seems to be a better way of both making decisions and judging the decisions made by others.

If the actual outcome of your action was positive, it was a good action. Buying the winning lottery ticket, as per your example, was a good action. Buying a losing lottery ticket was a bad action. Since we care about just the consequences of the action, the goodness of an action can only be evaluated after the consequences have been observed - at some point after the action was taken (I think this is enforced by the direction of causality, but maybe not).

So we don't know if an action is good or not until it's in the past. But we can only choose future actions! What's a consequentialist to do? (Equivalently, since we don't know whether a lottery ticket is a winner or a loser until the draw, how can we choose to buy the winning ticket and choose not to buy the losing ticket?) Well, we make the best choice under uncertainty that we can, which is to use expected values. The probability-literate person is making the best choice under uncertainty they can; the lottery player is not.

The next step is to say that we want as many good things to happen as possible, so "expected value calculations" is a correct way of making decisions (that can sometimes produce bad actions, but less often than others) and "wishful thinking" is an incorrect way of making decisions.

So the probability-literate used a correct decision procedure to come to a bad action, and the lottery player used an incorrect decision procedure to come to a good action.

The last step is to say that judging past actions changes nothing about the consequences of that action, but judging decision procedures does change something about future consequences (via changing which actions get taken). Here is the value in judging a person's decision procedures. The terrorist used a very morally wrong decision procedure to come up with a very morally good action: the act is good and the decision procedure is bad, and if we judge the terrorist by their decision procedure we influence future actions.

--

I think it's very important for consequentialists to always remember that an action's moral worth is evaluated on its consequences, and not on the decision theory that produced it. This means that despite your best efforts, you will absolutely make the best decision possible and still commit bad acts.

If you let it collapse - if you take the shortcut and say "making the best decision you could is all you can do", then every decision you make is good, except for inattentiveness or laziness, and you lose the chance to find out that expected value calculations or Bayes' theorem needs to go out the window.

Replies from: ozziegooen
comment by ozziegooen · 2014-03-19T01:46:50.638Z · LW(p) · GW(p)

If all 'moral worth' meant were the consequences of what happened, I just wouldn't deem 'moral worth' to be that relevant for judging. It would seem to me like we're making 'moral worth' into something kind of irrelevant, except from a completely pragmatic standpoint.

Not sure if saying 'making the best decision you could is all you can do' is that much of a shortcut. I mean, I would imagine that a lot of smart people would realize that 'making the best decision you can' is still really, really difficult. If you act as your only judge (not just all of you, but only you at any given moment), then you may have less motivation; however, it would seem strange to me if 'fear of being judged' were the one thing that keeps us moral, even if it happens to become apparent that judging is technically impossible.

Replies from: ozziegooen
comment by ozziegooen · 2014-03-19T01:50:03.384Z · LW(p) · GW(p)

Also, keep in mind that in this case 'every decision you make is "good"', but 'good' then covers everything, so it becomes a neutral term. In the future you can still learn things; you can say "I made the right decision at this time using what I knew, but then the results taught me some new information, and now I would know to choose differently next time".

comment by tom_cr · 2014-03-17T19:30:49.869Z · LW(p) · GW(p)

Thanks for taking the time to try to debunk some of the sillier aspects of classic utilitarianism. :)

‘Actual value’ exists only theoretically, even after the fact.

You've come close to an important point here, though I believe its expression needs to be refined. My conclusion is that value has real existence. This conclusion is primarily based on the personal experience of possessing real preferences, and my inference (to a high level of confidence) that other humans routinely do the same. We might reasonably doubt the a priori correspondence between actual preference and the perception of preference, but even so, the assumption that I make decisions entails that I'm motivated by the pursuit of value.

Perhaps, then, you would agree that it is more correct to say that the relative value of an action can be judged only theoretically.

Thus, we account for the fact that if the action had not been performed, the outcome would be something different, the value of which we can at best only make an educated guess about, making a non-theory-laden assessment of relative value impossible. The further substitution of my 'can be judged' in place of your 'exists' seems to me necessary, to avoid committing the mind projection fallacy.

The main question in this essay, the harder one, is whether we can judge previous decisions based on their respective expected values, ...

If it is the decision that is being judged (as the question specifies), rather than its outcome, then clearly the answer is "yes." There cannot be anything better than expected value to base a decision on. In a determined bid to be voted captain obvious, I examined this in some detail, in a blog post, Is rationality desirable?

... and how we might come up with the relevant expected values to do so.

This is called science! You are right, though, to be cautious. It strikes me that many assume they can draw conclusions about the relative rationality of two agents, when really, they ought to do more work for their conclusions to be sound. I once listened to a talk in which it was concluded that the test subjects in some psychological study were not 'Bayesian optimal.' I asked the speaker how he knew this. How had he measured their prior distributions? Their probability models? Their utility functions? These things are all part of the process of determining a course of action.

comment by somervta · 2014-03-17T10:16:46.565Z · LW(p) · GW(p)

I feel like one of the most important distinctions one can make about consequentialism, or a specific consequentialist system, is to separate the value system from the decision procedure. In fact, I find that the ability to do this (implicitly or explicitly) is a prerequisite for having productive discussions about it.

comment by plex (ete) · 2014-03-16T22:27:48.056Z · LW(p) · GW(p)

It seems to me that there are two different hidden questions pointed at by "Was this decision ethical?", and depending on why you're asking, you come up with different answers.

If you're asking "Was this the correct choice?", you want to know, from the perspective of perfect knowledge, how close to optimal this action was, which corresponds fairly closely to the actual result (though there are complications with MWI, and possibly some other parts of the large universe. Or maybe that goes away if you swap out perfect knowledge for something more like "from the perspective of the observer after the event", in which case the ethical status of a decision can be literally physically undefined until some time after the decision is made?). However, a lot of the time what you're actually asking is "How does this choice impact my assessment of a person's ability to make correct choices?", in which case you're just interested in knowing whether the choice was made using a method which reliably produces correct choices (which includes things like gathering relevant information on probability before remortgaging your house and blowing it on lottery tickets).

The first question is relatively easy to judge, since you have evidence on how well a decision went (though not knowing the results of the other options gives some uncertainty), but it does not provide useful information about the trustworthiness of a person in general. The second seems much more useful, since it should relate better to future behaviour, but is basically impossible to even approach quantifying in any realistically complicated situation. So... you ask the first question, trying to get evidence about the second, which is what you usually want to know?

If, once you know whether a decision in the past was correct (with reference to whatever morals you pick), and whether the method used to make that decision generally produces correct decisions, you still feel the need to ask "but was it really ethical", it looks like a disguised query.

comment by Shmi (shminux) · 2014-03-17T01:23:49.669Z · LW(p) · GW(p)

Optimizing Future Decisions: Actual vs. Expected Value

Not sure what you mean here. The future is never actual, only expected (or, more often, unexpected).

Replies from: ozziegooen
comment by ozziegooen · 2014-03-17T01:37:31.137Z · LW(p) · GW(p)

This just has to do with a question that was poorly worded to begin with: when one makes decisions, should they optimize for 'expected value' or 'actual value'? The answer is that the 'actual value' is obviously unknowable, so it's a moot question. That said, I've discussed this with people who weren't sure, so I wanted to make this clear.

I call these "future decisions" to contrast them with 'past decisions', which can't really be made, only judged, as they have already occurred.

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2014-03-17T16:28:31.238Z · LW(p) · GW(p)

Isn't expected value essentially 'actual value, to the extent that it is knowable in my present epistemic state'? Expected value reduces to 'actual value' when the latter is fully knowable.

EDIT: Oh, you said this in the post. This is why I should read a post before commenting on it.