Real-world Newcomb-like Problems
post by SilasBarta · 2011-03-25T20:44:16.401Z · LW · GW · Legacy · 35 comments
Elaboration of: A point I’ve made before.
Summary: I phrase a variety of realistic dilemmas so as to show how they’re similar to Newcomb’s problem.
Problem: Many LW readers don't understand why we bother talking about obviously-unrealistic situations like Counterfactual Mugging or Newcomb's problem. Here I'm going to put them in the context of realistic dilemmas, identifying the common thread, so that the parallels are clear and you can see how Counterfactual Mugging et al. are actually highlighting relevant aspects of real-world problems -- even though they may do it unrealistically.
The common thread across all the Newcomblike problems I will list is this: "You would not be in a position to enjoy a larger benefit unless you would cause [1] a harm to yourself within particular outcome branches (including bad ones)." Keep in mind that a “benefit” can include probabilistic ones (so that you don’t always get the benefit by having this propensity). Also, many of the relationships listed exist because your decisions are correlated with others’.
Without further ado, here is a list of both real and theoretical situations, in rough order from most to least "real-world"ish:
Natural selection: You would not exist as an evolution-constructed mind unless you would be willing to cause the spreading of your genes at the expense of your life and leisure. (I elaborate here.)
Expensive punishment: You would not be in the position of enjoying a crime level this low unless you would cause a net loss to yourself to punish crimes when they do happen. (My recent comments on the matter.)
"Mutually assured destruction" tactics: You would not be in the position of having a peaceful enemy unless you would cause destruction of both yourself and the enemy in those cases where the enemy attacks.
Voting: You would not be in a polity where humans (rather than "lizards") rule over you unless you would cause yourself to endure the costs of voting despite the slim chance of influencing the outcome.
Lying: You would not be in the position where your statements influence others’ beliefs unless you would be willing to state true things even when others believing them is sub-optimal for you. (Kant/Categorical Imperative name-check)
Cheating on tests: You would not be in the position to reap the (larger) gains of being able to communicate your ability unless you would forgo the benefits of an artificially-high score. (Kant/Categorical Imperative name-check)
Shoplifting: You would not be in the position where merchants offer goods of this quality, with this low of a markup and this level of security lenience unless you would pass up the opportunity to shoplift even when you could get away with it, or at least have incorrect beliefs about the success probability that lead you to act this way. (Controversial -- see previous discussion.)
Hazing/abuse cycles: You would not be in the position to be unhazed/unabused (as often) by earlier generations unless you would forgo the satisfaction of abusing later generations when you had been abused.
Akrasia/addiction: You would not be addiction- and bad habit-free unless you would cause the pain of not feeding the habit during the existence-moments when you do have addictions and bad habits.
Absent-Minded Driver: You would not ever have the opportunity to take the correct exit unless you would sometimes drive past it.
Parfit's Hitchhiker: You would not be in the position of surviving the desert unless you would cause the loss of money to pay the rescuer.
Newcomb's problem: You would not be in the position of Box #2 being filled unless you would forgo the contents of Box #1.
Newcomb's problem with transparent boxes: Ditto, except that Box #2 isn't always filled.
Prisoner's Dilemma: You would not be in the position of having a cooperating partner unless you would cause the diminished "expected prison avoidance" by cooperating yourself.
Counterfactual Mugging: You would not ever be in the position of receiving lots of free money unless you would cause yourself to lose less money in those cases where you lose the coin flip.
[1] “Cause” is used here in the technical sense, which requires the effect to be either in the future, or, in timeless formalisms, a descendent of the minimal set (in a Bayesian network) that screens off knowledge about the effect. In the parlance of Newcomb’s problem, it may feel intuitive to say that “one-boxing causes Box #2 to be filled”, but this is not correct in the technical sense.
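To make the Newcomb's problem entry above concrete, here is a minimal numerical sketch comparing the expected payoff of a one-boxing disposition against a two-boxing one. The $1,000 / $1,000,000 figures are the usual illustrative amounts and the predictor accuracy p is a free parameter; this is only a restatement of the common thread in numbers, not an argument in itself.

```python
# Illustrative expected-value comparison for Newcomb's problem.
# Assumptions: Box 1 always holds $1,000; Box 2 holds $1,000,000 iff the
# predictor expected one-boxing; the predictor is right with probability p
# whichever disposition you actually have.

def expected_payoff(one_box: bool, p: float) -> float:
    box1, box2 = 1_000, 1_000_000
    if one_box:
        # Box 2 is filled whenever the predictor correctly foresaw one-boxing.
        return p * box2
    # Box 2 is filled only when the predictor wrongly expected one-boxing.
    return box1 + (1 - p) * box2

for p in (0.5, 0.9, 0.99):
    print(p, expected_payoff(True, p), expected_payoff(False, p))
# At p = 0.5 two-boxing is slightly ahead; as p rises, the one-boxing
# disposition pulls far ahead of the two-boxing one.
```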
35 comments
comment by Manfred · 2011-03-26T00:41:47.230Z · LW(p) · GW(p)
Many of the examples are not like Newcomb's problem, but are just rephrased to use timeless decision-theory-esque language. To be like Newcomb's problem you don't just want TDT to work, you want causal decision theory to not work, i.e. you want the problem to be decision-determined but not action-determined. This is more exotic, but may be approximated by situations where other people are trying to figure out your real feelings.
↑ comment by SilasBarta · 2011-03-28T00:40:19.670Z · LW(p) · GW(p)
CDT doesn't statistically win in these examples: in all cases, if you reason only from what your actions cause, you will be in a world where communication is statistically harder, transaction costs are statistically higher, etc. Newcomb's Problem only differs in the certainty of this relationship.
comment by atucker · 2011-03-26T03:31:36.360Z · LW(p) · GW(p)
Social Interaction. Kind of.
A lot of stuff is based on you being confident in taking a risk rather than the safe way out. Other people function as Omega, and look at your nonverbal signals to predict what you will do before you act. Once you act, if it's in keeping with how they expected you to act, you get utility.
One recent example from my life was getting people on my robotics team to dance at the team social. I could just not do that, and two-box. I could diffidently do that, and have Omega not expect me to one-box, and have it not work. (Alternatively, trying to start something while playing it safe is akin to two-boxing. Not sure which one is right.) Or I could confidently go out and dance and invite other people and, looking confident, encourage them to join in.
comment by Vaniver · 2011-03-26T06:25:03.538Z · LW(p) · GW(p)
This seems to take the spice out of Newcomb's problem. Doesn't a flat gamble (paying $5 in one branch, earning $6 in another branch) fit your definition that "You would not be in a position to enjoy a larger benefit unless you would cause [1] a harm to yourself within particular outcome branches (including bad ones)" ? How is that at all a Newcomb-like problem?
↑ comment by SilasBarta · 2011-03-28T00:28:48.249Z · LW(p) · GW(p)
If I understand the gamble you're describing, that violates the requirement that the benefit be larger. If you're just transferring $5 from half the branches to the other half, your EU across the branches is not higher for being a gambler. OTOH, if you're transferring $X from half the branches and gaining $X+k in the other branches, then that would match what I call the common thread -- and for large enough k that becomes Counterfactual Mugging.
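A quick numerical sketch of that point, using the $100-loss / $10,000-gain figures commonly attached to Counterfactual Mugging (illustrative only, with a fair coin):

```python
# Branch-averaged value of a "pay when you lose" policy versus refusing.
# Assumed illustrative numbers: lose $100 in the losing branch, gain $10,000
# in the winning branch, each branch reached with probability 1/2.

def branch_ev(pay_when_losing: bool, loss=100, gain=10_000, p_win=0.5) -> float:
    if pay_when_losing:
        return p_win * gain - (1 - p_win) * loss  # averaged over both branches
    return 0.0  # a refuser is never given anything in the winning branch

print(branch_ev(True))   # 4950.0
print(branch_ev(False))  # 0.0
# Transferring $X out of the losing branches is worth it whenever the
# winning branches pay back more than $X in expectation.
```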
↑ comment by Vaniver · 2011-03-28T01:53:43.973Z · LW(p) · GW(p)
You're right- I've edited my first comment to be +$6 / -$5.
My true complaint is that Newcomb's Problem proper- the contentious one with two-boxers and one-boxers- is just a referendum on whether or not you're willing to suspend your disbelief. Is it possible for you to win $1,001,000? Then two-box. Is it not? Then one-box. Is the question silly? Then zero-box.
What you call the common thread- what I would call reputation problems- seems like an entirely different thing. You can't win a positive EV lottery unless you buy a ticket. Oftentimes, the only way to buy a ticket is with your existence / your genes: unless you live in a population where most people feel sympathy, you can't benefit from that, and the price you pay is that you most likely feel sympathy.
But reputation problems are not difficult to solve, and many approaches do fine on them. Tying them to the existence of magic seems to be doing them a disservice by obscuring the mechanisms by which they operate.
↑ comment by SilasBarta · 2011-03-28T21:41:19.360Z · LW(p) · GW(p)
I agree that reputational effects are present, and these are persuasive to pure-CDT minds. But, I contend, there is an acausal (what you call "magic") core to these problems that is isolated in Newcomb's problem, but present and relevant in the real-world situations.
For example, take the expensive punishment one. Certainly, carrying out punishments deters people, and they want punishments to have good deterrence value when selecting them, and often justify punishment by reference to its deterrent effect. However:
- Societies that only consider causal consequences are selected against: they are "money-pumped" by criminals who say, "Hey, you can't change the past, and punishing costs you a lot..."
- From the inside, people aren't moved purely by these causal considerations. If it were conclusively proven that e.g. the crime was a one-off event (see Psychohistorian's post), or nobody would learn about it, people would still want the punishment. (As Drescher argues in ch. 7 of Good and Real, people often speak as if a punishment would undo the crime, even as they know it does not.)
From this I conclude that there is a real acausal component to the real-world punishment problem, where you can't explain the situation, and people's reactions, purely by reference to causal criteria -- even though the causal considerations undoubtedly exist.
Also, consider the actions on a continuous rather than a binary scale. In Newcomb's problem, you have two (or three) choices, but in real-world problems you actually choose a "degree of defection". It's not that "if you would cheat, you will not be in a world with tests". Obviously, that's wrong. But what's going on is something more like this:
Among the set of test-takers, there is always a greater level of cheating they could engage in, but don't. And the expected level of cheating determines the information value the test-giver gets out of it. To the extent that people are unwilling to engage in a certain level of cheating, they already exist in a world where the test is that much more informative.
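A toy way to picture that relationship is sketched below; the functional form is purely an illustrative assumption, not a claim about any particular real test:

```python
# Toy model: a test score is a noisy signal of ability, and expected
# cheating adds extra noise, so the score's information value falls as
# expected cheating rises. The linear noise term is an assumption made
# only to exhibit the shape of the relationship.

def test_information_value(expected_cheating: float,
                           ability_variance: float = 1.0,
                           cheating_noise: float = 4.0) -> float:
    """Fraction of score variance attributable to real ability."""
    noise = cheating_noise * expected_cheating
    return ability_variance / (ability_variance + noise)

for c in (0.0, 0.25, 0.5, 1.0):
    print(c, round(test_information_value(c), 2))
# 0.0 -> 1.0, 0.25 -> 0.5, 0.5 -> 0.33, 1.0 -> 0.2: whatever level of
# cheating test-takers collectively refuse to engage in is already
# reflected in how much a given score is trusted.
```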
↑ comment by Vaniver · 2011-03-28T23:10:50.015Z · LW(p) · GW(p)
Societies that only consider causal consequences are selected against: they are "money-pumped" by criminals who say, "Hey, you can't change the past, and punishing costs you a lot..."
The judge looks at the criminal and slowly shakes his head. "We can never bring back the people you killed," he says sadly, "but we can be sure you will never kill again, and that others will think twice when they remember your body swinging from the gallows."
That is to say, they can still change the future through deterrence. And so they shall.
From the inside, people aren't moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results. That people behave deontologically rather than consequentially doesn't mean they're behaving acausally, and indeed that could be seen as a causal adaptation- if you behave deontologically, you're less likely to be tricked by people with excuses!
This feels like a group selection argument to me, though I'm not sure how informative my pattern-matching is to you. Basically, if you can explain something on the atomic level, don't try to explain it on the molecular level. The upper bound on how much cheating occurs is generally not set by the students but by the proctors of the exam. The first order effects- "will I get caught if I write on my hand?" outweigh the second order effects- "will anyone care about my test results if cheating is widespread?", although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
↑ comment by SilasBarta · 2011-03-29T14:20:27.447Z · LW(p) · GW(p)
...That is to say, they can still change the future through deterrence. And so they shall.
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: this would not explain why never-used punishments can deter, and why past punishments, with a promise that future criminals of this type won't be punished, cease to deter.
From the inside, people aren't moved purely by these causal considerations.
Of course they are! Where did their motivations come from in the first place? Genes that outreplicated others because they caused better results.
Equivocation. I meant "causal" in a different sense, one I spelled out with the bulleted list. Here, "causal" doesn't mean "obeying causality", it means "grounded in reasoning only from what an action causes [in the future]".
if you behave deontologically, you're less likely to be tricked by people with excuses!
Which is to say that decision theories considering (subjunctive) acausal "consequences" will be selected for over decision theories only counting costs and benefits that occur with/after a given action.
This feels like a group selection argument to me, though I'm not sure how informative my pattern-matching is to you. ... The first order effects- "will I get caught if I write on my hand?" outweigh the second order effects- "will anyone care about my test results if cheating is widespread?", although the proctor chooses how harshly to watch the students based on how important they want the test to be. The tragedy of the commons is averted by enforcement mechanisms (which often take the form of reputation), not by acausal means.
This is answered by the last two paragraphs of my previous response, but let me say it a different way: both effects are present. For any given proctor countermeasure, there are more powerful cheating measures that can overcome them; and any explanation for why students don't escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
If the proctor checks their hands, the students can smuggle in cheatsheets. If they're strip-searched before the test, they can get the smart student to steganographically transmit the answers to them. And so on. Explanations for why this doesn't happen will regress to explanations based on selection effects against counterfactual worlds. "The test is attributed proportionally less information value on account of the ease of cheating" is such an explanation.
↑ comment by Vaniver · 2011-03-30T04:06:16.127Z · LW(p) · GW(p)
And this purely causal deterrence cannot fully explain the pattern in human use of punishment, for reasons given by posters like orthonormal here and here: this would not explain why never-used punishments can deter, and why past punishments, with a promise that future criminals of this type won't be punished, cease to deter.
I don't understand this statement, because from my point of view it does fully explain punishment. It may be valuable to see if we're having a semantic disagreement rather than a conceptual one.
When someone says "you can't change the past" they're trivially correct. It works for both executing prisoners and paying your bill / tipping your waiter at a restaurant. In both cases, you take the action you take because of your influence on the future. The right response is "yes, it's expensive, but we're not doing it to change the past."
The punishment (combined with the threat thereof) causes the perception that crime is costlier; that perception causes reduced crime; crimes are punished because not punishing them would cause the perception to weaken. Everything is justifiable facing forward.
Do you disagree with that view? Where?
I meant "causal" in a different sense, one I spelled out with the bulleted list. Here, "causal" doesn't mean "obeying causality", it means "grounded in reasoning only from what an action causes [in the future]".
I think we disagree on the definition of "causal." I am willing to call indirect effects causal (X causes Y which causes Z -> X causes Z), where you seem to want to reverse things (Z acauses X). I don't see the benefit in doing so.
A judge who doesn't realize that letting a prisoner escape punishment will weaken deterrence has no place as a judge- it's not causal societies that get pumped, but stupid societies.
For any given proctor countermeasure, there are more powerful cheating measures that can overcome them; and any explanation for why students don't escalate to that level will ultimately rely, in part, on students acting as if they were reasoning from the acausal consequences (and the fact of their correlation).
This is strengthening my belief that you're using acausal the way I do above (Z acauses X). I still think that's a silly way to put things, though.
For example, why talk about selection effects against counterfactual worlds, when we can talk about selection effects against factual worlds? People try things in real life that don't work, and only the things that do work stick around. Tests get ruined when students are able to cheat on them, and the students cheat even though it ruins the test!
It seems like 'acausal consequences' are just constraints from indirect consequences, but with the dangerous bug that it obscures that the constraints are indirect. Stating "fishermen don't overfish common stocks, because if they did the common stocks would disappear" ignores that fishermen often do overfish common stocks, and those common stocks often do disappear.
The ultimate justification for why students don't cheat more is "it's not worth it to them to cheat more." That's more fundamental than the test not existing if they cheat more.
comment by NihilCredo · 2011-03-27T22:22:55.765Z · LW(p) · GW(p)
In several of those examples, more often towards the top of the list, the second "you" should really be "many people in your society", which has some significant implications.
↑ comment by SilasBarta · 2011-03-28T00:37:49.254Z · LW(p) · GW(p)
True, and I should elaborate on how this impacts the problem. I think that, given the way I've phrased the common thread, this distinction does not remove the Newcomb-like aspect of the problem.
What I mean is: if you would base your decision on the fact that Other People will suffer the consequences, then you already exist in a "branch" which is (statistically) more plagued by defection of this type than if the cluster of mindspace that includes you did not completely neglect the importance of other people.
In this way, the distinction is like the transparent-box variant of Newcomb's problem. You are (arguably) better off in general if you do not update on the contents of Box #2. Or like Absent-Minded Driver in that you are better off in general if you don't always exit, even though "other yous" will get the benefit of exiting at the right place.
comment by timtyler · 2011-03-27T14:47:48.781Z · LW(p) · GW(p)
Voting: You would not be in a polity where humans (rather than "lizards") rule over you unless you would cause yourself to endure the costs of voting despite the slim chance of influencing the outcome.
Ben covered that one here - though he also said that the argument was not convincing enough to make him actually vote.
↑ comment by Matt_Simpson · 2011-03-28T01:37:52.281Z · LW(p) · GW(p)
As a side note, there are more benefits to voting than just winning the current election (as the linked blog post seems to suggest). There's also influencing the political climate in your direction to win future elections (or convince the usual suspects to change policy in your preferred direction or be voted out).
↑ comment by timtyler · 2011-03-28T18:17:20.455Z · LW(p) · GW(p)
There are billions of people on the planet, and we can see how many of them actually vote. Doesn't this evidence count for a lot more than how you actually vote in terms of determining which sort of world you are in?
Also, voting looks like a device for placating the voters, more than anything else.
I'm pretty sure the cited argument is a rather poor reason for voting in national elections.
comment by luminosity · 2011-03-26T00:07:50.908Z · LW(p) · GW(p)
You would not exist as an evolution-constructed mind unless you would be willing to cause the spreading of your genes at the expense of your life and leisure.
Except that I don't have to be willing to spread my genes, just to undertake actions that will in turn spread my genes. Maybe expense of life is accurate, but surely leisure includes sex and even the pleasures that children can bring.
Even if our desire was solely to spread our genes and not to have sex, this still wouldn't hold true: just because you have been constructed by a process of natural selection acting on creatures that did spread their genes does not mean that you won't have a mutation that causes you not to feel the need to do so.
↑ comment by SilasBarta · 2011-03-26T00:26:37.180Z · LW(p) · GW(p)
Except that I don't have to be willing to spread my genes, just to undertake actions that will in turn spread my genes. Maybe expense of life is accurate, but surely leisure includes sex and even the pleasures that children can bring.
Right, that could have been more precisely worded, but I opted for brevity with the understanding that people would know what the more precise version is. That would be something like, "...unless you feel desires to put your life and minimization of effort at risk to do things sufficiently similar to those which, in the ancestral environment, increased inclusive genetic fitness."
Even if our desire was solely to spread our genes and not to have sex, this still wouldn't hold true: just because you have been constructed by a process of natural selection acting on creatures that did spread their genes does not mean that you won't have a mutation that causes you not to feel the need to do so.
Yes, the relationships are probabilistic, both on the reward side, and (I should have mentioned) on the "you"-side. Likewise, Omega can guess incorrectly that you'll one-box, you may be a "never punish anyone!"-type in a society of defection-punishers, etc.
comment by FAWS · 2011-03-25T23:29:51.198Z · LW(p) · GW(p)
Good work, upvoted.
I don't completely agree about natural selection being a valid example, and I think the Prisoner's Dilemma should be above Parfit's Hitchhiker, certainly above Newcomb's. The True Prisoner's Dilemma could take its current place.
↑ comment by SilasBarta · 2011-03-26T00:29:58.653Z · LW(p) · GW(p)
Thanks!
I don't completely agree about natural selection being a valid example
Did you see the comparison I made to Parfit's Hitchhiker in the Parfitian filter article?
The parallel to Parfit's Hitchhiker is this: Natural selection is the Omega, and the mind propagated through generations by natural selection is the hitchhiker. The mind only gets to the "decide to pay"/"decide to care for children" stage if it had the right decision theory before the "rescue"/"copy to next generation".
↑ comment by Perplexed · 2011-03-26T00:45:22.085Z · LW(p) · GW(p)
The Prisoner's dilemma doesn't even belong on the list. Even if my co-conspirator knows that I won't defect, he still has been given no reason not to defect himself. Reputation (or "virtue") comes into play only in the iterated PD. And the reputation you want there is not unilateral cooperation, it is something more like Tit-for-Tat.
As for natural selection, it doesn't belong for several reasons - the simplest of which is that the 'maximand' of NS is offspring, not "life and leisure". But there is one aspect of NS that does have a Newcomb-like flavor - the theory of "inclusive fitness" (aka Hamilton's rule, aka kin selection).
Other than those two, and "Absent Minded Driver" which I simply don't understand, it strikes me as a good list.
ETA: I guess "Akrasia/Addiction" doesn't belong, either. That is simply ordinary "No pain, No gain" with nothing mysterious about how the pain leads causally to gain. The defining feature of Newcomb-like problems is that there is no obvious causal link between pain and gain.
↑ comment by orthonormal · 2011-03-29T00:06:18.287Z · LW(p) · GW(p)
The Prisoner's dilemma doesn't even belong on the list. Even if my co-conspirator knows that I won't defect, he still has been given no reason not to defect himself. Reputation (or "virtue") comes into play only in the iterated PD. And the reputation you want there is not unilateral cooperation, it is something more like Tit-for-Tat.
Imagine that you were playing a one-shot PD, and you knew that your partner was an excellent judge of character, and that they had an inviolable commitment to fairness- that they would cooperate if and only if they predicted you'd cooperate. Note that this is now Newcomb's Problem.
Furthermore, if it could be reliably signaled to others, wouldn't you find it useful to be such a person yourself? That would get selfish two-boxers to cooperate with you, when they otherwise wouldn't. In a certain sense, this decision process is the equivalent of Tit-for-Tat in the case where you have only one shot but you have mutual knowledge of each other's decision algorithm.
(You might want to patch up this decision process so that you could defect against the silly people who cooperate with everyone, in a way that keeps the two-boxers still cooperative. Guess what- you're now on the road to being a TDT agent.)
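A minimal sketch of the collapse described above, using the standard illustrative PD payoff ordering (5 > 3 > 1 > 0, not taken from the thread) and a partner who cooperates exactly when they predict you will:

```python
# One-shot PD against a "cooperate iff I predict you cooperate" partner.
# With a sufficiently accurate predictor this is Newcomb's problem: the
# "two-boxing" move (defect) ends up with the worse outcome.

PAYOFF = {  # (my move, partner's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def my_payoff(my_move: str, predictor_accuracy: float = 1.0) -> float:
    # The partner plays C with probability equal to how often they
    # (correctly) predict that I play C.
    p_partner_c = predictor_accuracy if my_move == "C" else 1 - predictor_accuracy
    return (p_partner_c * PAYOFF[(my_move, "C")]
            + (1 - p_partner_c) * PAYOFF[(my_move, "D")])

print(my_payoff("C"))  # 3.0: mutual cooperation
print(my_payoff("D"))  # 1.0: mutual defection
```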
↑ comment by Perplexed · 2011-03-29T00:20:50.888Z · LW(p) · GW(p)
Imagine that you were playing a one-shot PD, and you knew that your partner was an excellent judge of character, and that they had an inviolable commitment to fairness- that they would cooperate if and only if they predicted you'd cooperate. Note that this is now Newcomb's Problem.
Yes it is. And Newcomb's problem belongs on the list. But the Prisoner's Dilemma does not.
↑ comment by SilasBarta · 2011-03-28T00:49:37.600Z · LW(p) · GW(p)
I guess "Akrasia/Addiction" doesn't belong, either. That is simply ordinary "No pain, No gain" with nothing mysterious about how the pain leads causally to gain. The defining feature of Newcomb-like problems is that there is no obvious causal link between pain and gain.
Do you think hazing belongs on the list? Because akrasia is (as I've phrased it, anyway) just a special case of hazing, where the correlation between the decision theories instantiated by each generation is even stronger -- later-you is the victim of earlier-you's hazing.
What separates akrasia from standard gain-for-pain situations is the dynamic inconsistency (and, I claim, the possibility that you might never have to have the pain in the first place, simply from your decision theory's subjunctive output). In cases where a gain is better than the pain is bad and you choose the pain, that is not akrasia, because you are consistent across times: you prefer the pain+gain to the status quo route.
It is only when your utility function tells you, each period, that you have the preference ranking:
1) not have addiction
2) have addiction, feed it
3) have addiction, don't feed it
and choosing 2) excludes 1) (EDIT: previous post had 2 here) from the choice set in a future period, that it is called addiction. The central problem is that to escape from 2/3, you have to choose 3 over 2 despite 2 being ranked higher, a problem not present in standard pain-for-gain situations.
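A toy sketch of that structure, with made-up state labels and update rule, purely for illustration: an agent that always takes its currently top-ranked available option never escapes 2/3, while one that accepts the momentary loss of choosing 3 gets option 1 back.

```python
# Toy model of the ranking above (lower number = more preferred).
# Feeding the addiction (option 2) keeps "no_addiction" (option 1) out of
# the next period's choice set; not feeding it (option 3) restores it.

RANK = {"no_addiction": 1, "feed": 2, "dont_feed": 3}

def available(addicted: bool) -> list:
    return ["feed", "dont_feed"] if addicted else ["no_addiction"]

def next_period_addicted(choice: str) -> bool:
    return choice == "feed"

def run(policy, periods: int = 5, addicted: bool = True) -> list:
    history = []
    for _ in range(periods):
        choice = policy(available(addicted))
        addicted = next_period_addicted(choice)
        history.append(choice)
    return history

def greedy(options):
    # Always take the top-ranked option available this period.
    return min(options, key=RANK.get)

def farsighted(options):
    # Accept the momentary loss of choosing 3 over 2.
    return "dont_feed" if "dont_feed" in options else options[0]

print(run(greedy))      # ['feed', 'feed', 'feed', 'feed', 'feed']
print(run(farsighted))  # ['dont_feed', 'no_addiction', 'no_addiction', ...]
```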
↑ comment by Perplexed · 2011-03-28T01:30:17.463Z · LW(p) · GW(p)
Hazing, cheating, and shoplifting all fit together and probably belong on your list. In each case, your accepting the 'pain' of acting morally results directly in someone else's gain. But then through the magical Kantian reference class fallacy, what goes around comes around and you end up gaining ('start off gaining'?) after all.
Akrasia doesn't seem to match that pattern at all. Your pain leads to your gain, directly. Your viewpoint that iterated decisions made by a single person about his own welfare somehow is analogous to iterated decisions made by a variety of people bearing on the welfare of other people - well, I'm sorry, but that just does not seem to fly.
↑ comment by SilasBarta · 2011-03-28T14:42:06.066Z · LW(p) · GW(p)
So, if I understand you, akrasia doesn't belong on the list because the (subjunctive) outcome specification is more realistic, while Newcomb's Problem, where the output is posited to be accurate, does belong on the list?
↑ comment by Perplexed · 2011-03-28T15:45:26.119Z · LW(p) · GW(p)
You probably do not understand me, because I have no idea what is meant by "the (subjunctive) outcome specification is more realistic" nor by "the output is posited to be accurate".
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting. And that the other, truly Newcomb-like problems require a belief in this mysterious 'acausal influence' if you are going to 'solve' them as they are presented - as one-time decision problems.
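For readers who have not seen the contrast, here is a minimal sketch of why hyperbolic discounting produces the preference reversals associated with akrasia while exponential discounting does not; the reward sizes and parameters are arbitrary, chosen only to exhibit the effect.

```python
# Compare a small-soon reward against a large-later reward, first viewed
# from far away and then at the moment of choice.
# Hyperbolic: value = r / (1 + k*t).  Exponential: value = r * d**t.
# Rewards (10 at t, 15 at t+3) and parameters (k = 1, d = 0.9) are
# illustrative assumptions.

def hyperbolic(r, t, k=1.0):
    return r / (1 + k * t)

def exponential(r, t, d=0.9):
    return r * d ** t

small, large, gap = 10.0, 15.0, 3
for t_small in (10, 0):  # far away, then at the moment of choice
    t_large = t_small + gap
    print("hyperbolic :", round(hyperbolic(small, t_small), 2), "vs",
          round(hyperbolic(large, t_large), 2))
    print("exponential:", round(exponential(small, t_small), 2), "vs",
          round(exponential(large, t_large), 2))
# Hyperbolic: 0.91 vs 1.07 (prefer large-later), then 10.0 vs 3.75
# (prefer small-now) -- a reversal.
# Exponential: 3.49 vs 3.81, then 10.0 vs 10.94 -- the same ordering
# both times, so no reversal.
```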
↑ comment by timtyler · 2011-03-28T19:25:47.431Z · LW(p) · GW(p)
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting.
http://en.wikipedia.org/wiki/Hyperbolic_discounting#Explanations
...seems to be saying that hyperbolic discounting is the rational result of modelling some kinds of uncertainty about future payoffs. Is it really something that needs to be fixed? Should it not be viewed as a useful heuristic?
↑ comment by Perplexed · 2011-03-28T22:05:49.212Z · LW(p) · GW(p)
Is it really something that needs to be fixed?
Yes, it needs to be fixed, because it is not a rational analysis.
You are assuming, to start, that the probability of something happening is going to increase with time. So the probability of it happening tomorrow is small, but the probability of it happening in two days is larger.
So then a day passes without the thing happening. That it hasn't happened yet is the only new information. But, following that bizarre analysis, I am supposed to reduce my probability assignments that it will happen tomorrow, simply because what used to be two days out is now tomorrow. That is not rational at all!
↑ comment by timtyler · 2011-03-28T23:34:17.617Z · LW(p) · GW(p)
Hmm. The article cited purports to derive hyperbolic discounting from a rational analysis. Maybe it is sometimes used inappropriately, but I figure creatures probably don't use hyperbolic discounting because of a bias, but because it is a more appropriate heuristic than exponential discounting, under common circumstances.
↑ comment by Perplexed · 2011-03-29T00:59:45.606Z · LW(p) · GW(p)
The article cited (pdf) purports to derive hyperbolic discounting from a rational analysis.
But it does not do that. Sozou obviously doesn't understand what (irrational) 'time-preference reversal' means. He writes:
I may appear to be temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but I prefer a cake immediately over a promise of a bottle of wine in one month.
That is incorrect. What he should have said is: "I am temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but two months from now I prefer a cake immediately over a promise of a bottle of wine in one month."
A person whose time preferences predictably change in this way can be money pumped. If he started with a promise of a cake in two months, he would pay to exchange it for a promise of wine in three months. But then two months later, he would pay again to exchange the promise of wine in another month for an immediate cake. Edit: Corrected above sentence.
There is nothing irrational in having the probabilities 's' in his Table 1 at a particular point in time (1, 1/2, 1/3, 1/4). What is irrational and constitutes hyperbolic discounting is to still have the same 's' numbers two months later. If the original estimates were rational, then two months later the current 's' schedule for a Bayesian would begin (1, 3/4, 3/5, ...). And the Bayesian would still prefer the promise of wine.
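A short check of that arithmetic, assuming (as in Sozou's setup) that a survival curve of the form s(t) = 1/(1 + kt), with k = 1 per month, generated the original schedule:

```python
# Perplexed's point in numbers. The prior survival curve s(t) = 1/(1 + t)
# (t in months) gives the schedule (1, 1/2, 1/3, 1/4) at t = 0..3.
# After two months with the promise still alive, a Bayesian conditions on
# survival to t = 2, giving s(t)/s(2) = 3/(1 + t) for t >= 2.

def s(t):
    return 1 / (1 + t)          # prior survival probability

def s_updated(t):
    return s(t) / s(2)          # posterior after surviving to month 2

print([round(s(t), 3) for t in range(4)])           # [1.0, 0.5, 0.333, 0.25]
print([round(s_updated(t), 3) for t in (2, 3, 4)])  # [1.0, 0.75, 0.6]
# The schedule two months in begins (1, 3/4, 3/5), not (1, 1/2, 1/3);
# an agent who updates this way does not exhibit the preference reversal.
```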
↑ comment by timtyler · 2011-03-29T08:20:45.207Z · LW(p) · GW(p)
The article cited (pdf) purports to derive hyperbolic discounting from a rational analysis.
But it does not do that. Sozou obviously doesn't understand what (irrational) 'time-preference reversal' means. He writes:
I may appear to be temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but I prefer a cake immediately over a promise of a bottle of wine in one month.
That is incorrect. What he should have said is: "I am temporally inconsistent if, for example, I prefer the promise of a bottle of wine in three months over the promise of a cake in two months, but two months from now I prefer a cake immediately over a promise of a bottle of wine in one month."
Uh, no. Sozou is just assuming that all else is equal - i.e. it isn't your birthday, and you have no special preference for cake or wine on any particular date. Your objection is a quibble - not a real criticism. Perhaps try harder for a sympathetic reading. The author did not use the same items with the same temporal spacing just for fun.
People prefer rewards now partly because they know from experience that rewards in the future are more uncertain. Promises by the experimenter that they really really will get paid are treated with scepticism. Subjects are factoring such uncertainty in - and that results in hyperbolic discounting.
It can be seen from the table that a cake immediately is worth more than a promise of wine after a month, while a promise of wine after three months is worth more than a promise of cake after two months. So my preferences are indeed consistent with maximizing my expected reward.
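That reading can be checked with illustrative reward sizes (cake = 1, wine = 1.5; stand-ins rather than Sozou's exact figures) against the single schedule s = (1, 1/2, 1/3, 1/4):

```python
# Both choices maximize expected reward under one fixed uncertainty
# schedule, so the apparent "reversal" needs no inconsistent preferences.
# Reward magnitudes are illustrative assumptions.

s = {0: 1.0, 1: 1 / 2, 2: 1 / 3, 3: 1 / 4}   # survival probability by month
cake, wine = 1.0, 1.5

print(cake * s[0], wine * s[1])  # 1.0 vs 0.75   -> take the cake now
print(cake * s[2], wine * s[3])  # 0.33 vs 0.375 -> wait for the wine
```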
↑ comment by SilasBarta · 2011-03-28T16:29:13.989Z · LW(p) · GW(p)
"the (subjunctive) outcome specification is more realistic" = It is more realistic to say that you will suffer a consquence from hazing your future self than from hazing the next generation.
"the output is posited to be accurate" = In Newcomb's Problem, Omega's accuracy is posited by the problem, while Omega's counterparts in other instances is taken to have whatever accuracy it does in real life.
What I am saying is that akrasia is perfectly well modeled by hyperbolic discounting, and that the fix for akrasia is simply CDT with exponential discounting.
That would be wrong though -- the same symmetry can persist through time with exponential discounting. Exponential discounting is equivalent to a period-invariant discount factor. Yet you can still find yourself wishing your previous (symmetric) self had done what your current self does not wish to do.
And that the other, truely Newcomb-like problems require a belief in this mysterious 'acausal influence' if you are going to 'solve' them as they are presented - as one-time decision problems.
I thought we had this discussion on the Parfitian filter article. You can have Newcomb's problem without acausal influences: just take yourself to be the Omega where a computer program plays against you. There's no acausal information flow, yet the winning programs act isomorphically to those that "believe in" an acausal influence.
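A minimal sketch of that setup: the "Omega" below predicts simply by running the submitted program once in advance, a simplification of whatever inspection a real predictor might perform.

```python
# You play Omega against submitted programs. You "predict" by doing a dry
# run of the program, fill Box 2 accordingly, and then the program plays
# for real. Everything here is ordinary forward causation, yet the program
# that wins is the one that one-boxes unconditionally.

def omega_payout(program) -> int:
    predicted_one_box = program()  # the "prediction": a dry run
    box1 = 1_000
    box2 = 1_000_000 if predicted_one_box else 0
    one_boxes = program()          # the actual play
    return box2 if one_boxes else box1 + box2

def one_boxer() -> bool:
    return True

def two_boxer() -> bool:
    return False

print(omega_payout(one_boxer))  # 1000000
print(omega_payout(two_boxer))  # 1000
```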