Placing Yourself as an Instance of a Class
post by abramdemski · 2017-10-03T19:10:52.681Z · LW · GW
There's an intuition that I have, which I think informs my opinion on subjective probabilities, game theory, and many related matters: that part of what separates a foolish decision from a wise one is whether you treat it as an isolated instance or as one of a class of similar decisions.
A simple case: someone doing something for the first time (first date, first job interview, etc.) vs. someone who has done it many times and "knows how these things go". Surprise events to the greenhorn are tired stereotypes for the old hand. But, sometimes, we can short-circuit this process and react wisely to a situation without long experience and hard knocks.
For example, if a person is trying to save money but sees a doodad they'd like to buy, the fool reasons as follows: "It's just this one purchase. The amount of money isn't very consequential to my overall budget. I can just save a little more in other ways and I'll meet my target." The wise person reasons as follows: "If I make this purchase now, I will similarly allow myself to make exceptions to my money-saving rule later, until the exception becomes the rule and I spend all my money. So, even though the amount of money here isn't so large, I prefer to follow a general policy of saving, which implies saving in this particular case." A very wise person may reason a bit more cleverly: "I can make impulse purchases if they pass a high bar, such that I actually only let a few dollars of unplanned spending past the bar every week on average. How rare is it that a purchase opportunity costing this much is at least this appealing?" *Does a quick check and usually doesn't buy the thing, but sometimes does, when it is worth it.*
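To make the shape of that last rule concrete, here is a minimal Python sketch of the "high bar" policy. Everything in it -- the budget, the 0-to-1 "appeal" scale, the opportunity rate -- is a hypothetical illustration, not something specified above.

```python
# Toy sketch of the "high bar" impulse-purchase policy.
# All numbers and the 0-to-1 "appeal" scale are invented for illustration.

weekly_budget = 5.00         # average unplanned dollars allowed per week
opportunities_per_week = 10  # rough guess at tempting items seen per week
history = []                 # (price, appeal) of every tempting item so far

def should_buy(price, appeal):
    """Buy only if opportunities at least this appealing are rare enough
    that saying yes to all of them stays within the weekly budget."""
    history.append((price, appeal))
    at_least_this_good = [p for p, a in history if a >= appeal]
    # How often does something at least this appealing come along?
    rate = len(at_least_this_good) / len(history)
    avg_price = sum(at_least_this_good) / len(at_least_this_good)
    # Expected weekly spend under the rule "buy anything this good or better":
    expected_spend = rate * opportunities_per_week * avg_price
    return expected_spend <= weekly_budget
```

The particular numbers don't matter; what matters is the shape of the rule: the purchase is evaluated against the whole class of purchase opportunities rather than on its own.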
One way to get this kind of wisdom is by spending money for a long time, and calibrating willingness-to-spend based on what seems to be happening with your bank account. This doesn't work very well for a lot of people, because the reinforcement happens too slowly to be habit-forming. A different way is to notice the logic of the situation, and think as though you had lived it.
Similarly, people are heavily biased toward vivid examples from the news (airplane crashes involving celebrity deaths), personal anecdotes from friends and family (an uncle who fell ill when a high-voltage power line was installed near his home), or, worse, stories from strangers on the internet (the guy who got "popcorn lung" from smoking an e-cig) -- but statistics are actually far more powerful evidence. (It's true that medical anecdotes from family might be more relevant than statistics due to genetic factors shared with family members, but even so, taking the "statistical view" -- looking at the statistics available, including information about genetic conditions and heritability if available, and then making reasonable guesses about yourself based on your family -- will be better than just viewing your situation in isolation.)
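As a toy illustration, here is what folding a family anecdote into the statistics looks like as a Bayesian update. All the numbers below are made up for the sketch; nothing here is real medical data.

```python
# Hedged sketch: start from the base rate, then update on family history,
# instead of reasoning from the anecdote alone. All numbers are invented.

base_rate = 0.01                  # P(condition) in the general population
p_history_given_condition = 0.30  # P(affected relative | you have it)
p_history_given_healthy = 0.05    # P(affected relative | you don't)

# Bayes' rule: P(condition | family history)
numerator = p_history_given_condition * base_rate
denominator = numerator + p_history_given_healthy * (1 - base_rate)
posterior = numerator / denominator
print(f"P(condition | family history) = {posterior:.3f}")  # ~0.057
```

The anecdote moves the estimate roughly sixfold, but the base rate still dominates -- which is exactly the information a vivid anecdote, taken in isolation, throws away.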
I won't lecture much on the implications of that -- most readers will be familiar with the availability heuristic, the base-rate fallacy, and scope insensitivity. Here, I just want to point out that putting more credence in numbers than vivid examples is an instance of the pattern I'm pointing at: placing your decision as an instance of a class, rather than seeing it in isolation.
In an abstract sense, the statistical view of any given decision -- the view of the decision as part of a class of relevantly similar decisions -- is "more real" than an attempt to view it in isolation as a unique moment. Not because the statistics are metaphysically more real, but because the statistical view is closer to the level at which you actually decide. Humans may exist in a single, deterministic universe -- but we decide in statistical space, where outcomes come in percentages.
When the weatherman says "70% chance of rain" in your area, but it is already raining outside, you know you're one of the 70% -- he's speaking to an area, and giving the percent of viewers in the area who will experience rain. He can't just say 100% for those viewers who will experience rain and 0% for those who won't. Similarly, when you make a decision, you're doing it in a general area -- you can't just decide to drive when you won't crash and avoid driving when you will (though you can split up your decision by some relevant factors and avoid driving when risk is high).
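A small sketch of that last parenthetical, with invented crash rates: you can't condition your policy on whether you will crash, but you can condition it on observable risk factors.

```python
# Toy policy over scenario classes. The rates and threshold are invented.

crash_rate = {                 # hypothetical P(crash) per trip, by conditions
    ("clear", "day"):   1e-6,
    ("clear", "night"): 3e-6,
    ("storm", "day"):   1e-5,
    ("storm", "night"): 5e-5,
}

risk_tolerance = 1e-5          # maximum acceptable P(crash) for a routine trip

def drive(weather, time_of_day):
    """Decide per class of situations, not per (unknowable) outcome."""
    return crash_rate[(weather, time_of_day)] <= risk_tolerance

for conditions in crash_rate:
    print(conditions, "->", "drive" if drive(*conditions) else "stay home")
```

The policy never knows which trips end in a crash; it only knows which class of trips it is in, and it answers for the whole class at once.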
This exact same intuition, of course, supports timeless/updateless decision theory even more directly than it supports probabilism. Perhaps some future generation will regard the idea of timeless/updateless decision-making as more fundamental and obvious than the doctrine of probabilism, and wonder why subjective probability was invented first.
Comments
comment by Conor Moreton · 2017-10-04T00:23:58.786Z · LW(p) · GW(p)
As someone who rates comments at roughly 25x the value of an upvote, I wanted to chime in to say that I couldn't think of a useful comment or extension, but I think this point was well worth seeing written out and is worth discussing further. I just basically nodded along, and so couldn't contribute much in the way of probing questions or skepticism.
comment by alexei (alexei.andreev) · 2017-07-26T03:34:26.833Z · LW(p) · GW(p)
Playing poker at higher levels actually requires one to practice this skill a lot.
comment by panickedapricott · 2017-10-07T03:54:09.949Z · LW(p) · GW(p)
Doesn't this only work if the logic of the situation is transparent? Or maybe I'm misunderstanding what you mean by the logic of the situation. Are you trying to say "Keep in mind the counterfactual situations in which you might make various decisions and then determine which situation you are in?"
Or maybe "determine your policy in all statistically likely scenarios then determine which scenario you are in"
↑ comment by abramdemski · 2017-10-14T23:37:13.759Z · LW(p) · GW(p)
It certainly only works to the extent that the logic of the situation is transparent. What I'm suggesting is that, in practice, it seems like humans can gain a lot by asking themselves what sort of situation they are in and what sort of response would be best to that class of situations (rather than trying to pin down the exact situation they're in and respond in the way that is best for that). This is somewhat odd. It may feel a bit like throwing away information. However, people tend to over-fit the details of a situation (overestimating the value they can get by attending to those details). And, deciding what policies you'd like to implement in general rather than special-casing helps you coordinate with yourself in important ways. (And with others.)
comment by RobertM (T3t) · 2017-10-04T03:49:49.209Z · LW(p) · GW(p)
To extend the programming metaphor a bit:
Agents who understand and explicitly use a decision theory along the lines of TDT/FDT may be said to be implementing an interface, which consequently modifies the expected outcomes of their decision process. This is important in situations like deciding how to vote, or even whether you should do so at all: you can estimate that most agents in the problem-space will not be implementing that particular interface, so your decision will only be entangled with the decisions of the limited set of agents who do implement it.
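A rough Python rendering of that metaphor -- none of this is a real decision theory, and the class names and decisiveness threshold are invented for illustration:

```python
# Sketch of the "implementing an interface" metaphor. Names/numbers invented.

from abc import ABC, abstractmethod

class TimelessDecider(ABC):
    """The 'interface': implementers choose a policy for everyone
    running this same decision procedure, not just for themselves."""
    @abstractmethod
    def vote(self, implementers: int, electorate: int) -> bool: ...

class FDTVoter(TimelessDecider):
    def vote(self, implementers, electorate):
        # My choice is entangled with the other implementers', so I ask
        # whether the whole implementing bloc is worth mobilizing.
        return implementers / electorate > 0.02  # made-up decisiveness bar

class LoneVoter:
    """Does not implement the interface: treats the vote in isolation."""
    def vote(self, implementers, electorate):
        return 1 / electorate > 0.02  # one vote almost never clears the bar
```

The entanglement shows up only through the `implementers` count: an `FDTVoter`'s choice moves with the bloc that shares the interface, while a `LoneVoter`'s reasoning is unaffected by it.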