Does Evidential Decision Theory really fail Solomon's Problem?

post by AlexMennen · 2011-01-11T04:53:36.930Z · LW · GW · Legacy · 19 comments

Solomon's Problem and variants thereof are often cited as criticisms of Evidential decision theory.

For background, here's Solomon's Problem: King Solomon wants to sleep with another man's wife. However, he knows that uncharismatic leaders frequently sleep with other men's wives, and charismatic leaders almost never do. Furthermore, uncharismatic leaders are frequently overthrown, and charismatic leaders rarely are. On the other hand, sleeping with other men's wives does not cause leaders to be overthrown. Instead, high charisma decreases the chance that a leader will sleep with another man's wife and the chance that the leader will be overthrown separately. Not getting overthrown is more important to King Solomon than getting the chance to sleep with the other guy's wife.

Causal decision theory holds that King Solomon can go ahead and sleep with the other man's wife, because doing so will not directly cause him to be overthrown. Timeless decision theory holds that he can sleep with her because it will not cause his overthrow in any timeless sense either. Conventional wisdom holds that Evidential decision theory would have him refrain from sleeping with her, because updating on the fact that he slept with her would suggest a higher probability that he will be overthrown.
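To make the disagreement concrete, here is a minimal sketch in Python of how the two verdicts come apart on a toy joint distribution. All of the numbers (the prior, the conditional probabilities, and the utilities) are illustrative assumptions, not part of the original problem.

```python
# Toy model of Solomon's Problem. Charisma influences both the affair
# and the overthrow; the affair does not cause the overthrow.
# All numbers are illustrative assumptions.

P_C = {True: 0.5, False: 0.5}                    # prior on charisma
P_SLEEP_GIVEN_C = {True: 0.05, False: 0.70}      # P(sleep | charisma)
P_OVERTHROW_GIVEN_C = {True: 0.10, False: 0.60}  # P(overthrow | charisma)

U_SLEEP = 10          # utility of the affair...
U_OVERTHROWN = -1000  # ...dwarfed by the cost of being overthrown

def joint(c, s, o):
    """P(charisma=c, sleep=s, overthrow=o): sleep and overthrow are
    independent given charisma, as the problem stipulates."""
    ps = P_SLEEP_GIVEN_C[c] if s else 1 - P_SLEEP_GIVEN_C[c]
    po = P_OVERTHROW_GIVEN_C[c] if o else 1 - P_OVERTHROW_GIVEN_C[c]
    return P_C[c] * ps * po

def edt_value(s):
    """Naive EDT: treat the action as evidence and condition on it."""
    p_s = sum(joint(c, s, o) for c in (True, False) for o in (True, False))
    p_overthrow = sum(joint(c, s, True) for c in (True, False)) / p_s
    return (U_SLEEP if s else 0) + p_overthrow * U_OVERTHROWN

def cdt_value(s):
    """CDT: intervene on the action; overthrow depends only on charisma."""
    p_overthrow = sum(P_C[c] * P_OVERTHROW_GIVEN_C[c] for c in (True, False))
    return (U_SLEEP if s else 0) + p_overthrow * U_OVERTHROWN

print(edt_value(True), edt_value(False))  # roughly -556.7 vs -220.0
print(cdt_value(True), cdt_value(False))  # -340.0 vs -350.0
```

On these numbers, naive EDT prefers refraining (-220 beats roughly -557), while CDT prefers sleeping (-340 beats -350), matching the verdicts above.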

The problem with that interpretation is that it assumes King Solomon updates only on information about himself that is accessible to others. Refraining in response to other disincentives cannot change whether he would sleep with another man's wife given no other disincentives; the fact that he faces the dilemma at all already indicates that he would. Updating on this information, he knows that he is probably uncharismatic, and thus likely to be overthrown. Updating further on his decision, after taking into account the factors guiding it, will not change the correct probability distribution.
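A sketch of how that works in the same toy model (again, all numbers are illustrative assumptions): once Solomon conditions on the "tickle", the fact that he faces the dilemma at all, the action itself carries no further evidence about his charisma, and EDT's verdict flips to match the one CDT gave above.

```python
# Model the structure as charisma -> tickle -> action, where the tickle T
# ("would sleep with her absent other disincentives") carries all of the
# action's dependence on charisma. Same illustrative numbers as before.

P_C = {True: 0.5, False: 0.5}
P_TICKLE_GIVEN_C = {True: 0.05, False: 0.70}
P_OVERTHROW_GIVEN_C = {True: 0.10, False: 0.60}
U_SLEEP, U_OVERTHROWN = 10, -1000

def p_charisma_given_tickle():
    """Solomon's posterior on charisma once he notices he faces the dilemma."""
    num = P_C[True] * P_TICKLE_GIVEN_C[True]
    den = sum(P_C[c] * P_TICKLE_GIVEN_C[c] for c in (True, False))
    return num / den

def edt_value_after_tickle(s):
    """Given the tickle, the action is settled by deliberation rather than
    charisma, so P(overthrow | tickle, action) = P(overthrow | tickle)."""
    pc = p_charisma_given_tickle()
    p_overthrow = (pc * P_OVERTHROW_GIVEN_C[True]
                   + (1 - pc) * P_OVERTHROW_GIVEN_C[False])
    return (U_SLEEP if s else 0) + p_overthrow * U_OVERTHROWN

# Sleeping now beats refraining by exactly U_SLEEP, the same verdict CDT gives.
print(edt_value_after_tickle(True), edt_value_after_tickle(False))
```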

This more complete view of Evidential decision theory is isomorphic to Timeless decision theory (edit: shown to be false in the comments). I'm slightly perplexed as to why I have not seen it elsewhere. Is it flawed? Has it been mentioned elsewhere and I simply haven't noticed? If so, why isn't it more widely known?

19 comments

comment by benelliott · 2011-01-11T10:59:57.173Z · LW(p) · GW(p)

I believe I have seen this mentioned in a few places, including the paper on TDT, where it is referred to as the tickle defense. It does help get EDT out of a number of holes.

Unfortunately, I think it ends up being isomorphic to causal decision theory, or at any rate very similar, to the point of two-boxing on Newcomb's problem. I don't know why it isn't more widely known though.

comment by cousin_it · 2011-01-11T06:37:32.512Z · LW(p) · GW(p)

I'm not sure what Solomon's Problem means. If I make up a really convincing argument that the king should (or shouldn't) send for another man's wife, and communicate it to the king, am I thus forcing him to "always have been" charismatic or otherwise? How is this conditioning on actions supposed to work? Does adopting a certain decision theory make you retroactively charismatic?

Also I don't understand why the problem is named this way. In the prototypical example from the Bible, king David sends for Uriah's wife Bathsheba, and Solomon is their son. What.

Replies from: SilasBarta, wedrifid
comment by SilasBarta · 2011-01-11T15:29:39.917Z · LW(p) · GW(p)

Also I don't understand why the problem is named this way. In the prototypical example from the Bible, king David sends for Uriah's wife Bathsheba, and Solomon is their son. What.

Well, it's definitely become a problem for Solomon at that point...

comment by wedrifid · 2012-03-20T06:10:00.183Z · LW(p) · GW(p)

Also I don't understand why the problem is named this way. In the prototypical example from the Bible, king David sends for Uriah's wife Bathsheba, and Solomon is their son. What.

And Solomon already had pretty much all the wives to himself from what I recall...

comment by Nornagest · 2011-01-11T05:59:45.196Z · LW(p) · GW(p)

Wanting to sleep with another man's wife is not necessarily evidentially equivalent to sleeping with another man's wife, as the reaction to Jimmy Carter's famous Playboy interview demonstrates. If Solomon updates his model of himself based on the latter but given only the former, he's already made a mistake.

On the other hand, your analysis would hold true if Solomon's desire for adultery implies a willingness to commit adultery given no causal disincentives, which isn't the same thing in the general sense but might be in this restricted motivational space. If that's true, Solomon's model of himself has already been updated based on his propensity for adultery, and conditioning on it would be double-counting the evidence.

Replies from: AlexMennen
comment by AlexMennen · 2011-01-11T21:20:49.328Z · LW(p) · GW(p)

Edited with your more accurate phrasing. Thanks.

comment by SilasBarta · 2011-01-11T14:11:06.776Z · LW(p) · GW(p)

Have you read EY's TDT paper? What you're arguing sounds like the "tickle" defense of EDT (Solomon should update on the tickle), and is addressed, with references to those who have argued it, starting on page 67. One author has used this analysis of EDT to claim that it should two-box on Newcomb's problem.

Edit: Apologies to benelliott, who beat me to saying the same thing by a few hours. I'll leave this up for the link and page cite, though.

Replies from: AlexMennen
comment by AlexMennen · 2011-01-11T21:51:53.249Z · LW(p) · GW(p)

I read some of Yudkowsky's TDT paper (it's actually what prompted my post), but not very much of it, and I had not seen any mention of the tickle defense.

I checked the section on the tickle defense in his paper, but while he mentioned that the tickle defense would two-box on Newcomb's problem, he did not explain why. I considered it intuitively obvious that this modification of EDT would make it conform to TDT. On further reflection, it is not so obvious, but it is also not obvious why it would not. Could you explain why an EDT agent using the tickle defense would two-box on Newcomb's problem, or provide a link to such an explanation?

Replies from: SilasBarta
comment by SilasBarta · 2011-01-11T22:07:15.815Z · LW(p) · GW(p)

I was citing from memory (though I had to do a search to get the page number) because I remembered EY going through the argument of the author who promoted the tickle argument, and then noting with dismay that the author went on to say that it also two-boxes, which is proof of its greatness.

Just now I looked at the paper again, and you're right that the section I had in mind (2nd full paragraph on p. 68) doesn't fully spell out how it reaches that conclusion. But (as I thought) it does have a citation to the author who does spell out the algorithm, which is where you can now go to find the answer. EY does mention that the algorithm is shaky, as it may not even converge, and requires that it update n-th order decisions per some process until they stop changing.

Replies from: AlexMennen
comment by AlexMennen · 2011-01-11T22:36:02.768Z · LW(p) · GW(p)

The cited article is not available for free. Also, I'm more interested in the situation with the tickle defense than with the metatickle defense, because assuming zero capacity for introspection seems like a silly thing to do when formulating a decision theory.

Also, in regards to people claiming that two-boxing on Newcomb's problem is an advantage for the tickle defense, that seems very strange. What's the point in avoiding CDT if you're going to trick yourself into two-boxing anyway?

Replies from: benelliott, SilasBarta
comment by benelliott · 2011-01-11T22:59:11.421Z · LW(p) · GW(p)

I think the vanilla tickle-defence still two-boxes on Newcomb's problem. The logic goes something like:

Apply some introspection to deduce whether you are likely to be a one-boxer or a two-boxer (in the same way that Solomon introspects to see whether he is charismatic) and then use this information to deduce whether the money is in the box. Now you are facing the transparent-box version of the dilemma, in which EDT two-boxes.

Tickle defence works by gradually screening off all non-causal paths emerging from the 'action' node while CDT simply ignores them. This means they make decisions based on different information, so they aren't entirely identical, although they are similar in that they both always choose the dominant strategy when there is one (which incidentally proves that Tickle Defence is not TDT, since TDT does not always choose a dominant strategy).
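A minimal sketch of that dominance step, using Newcomb's usual $1,000,000 and $1,000 payoffs (the probabilities in the loop are illustrative): once the prediction is screened off from the final action, two-boxing wins for every fixed probability that the big box is full.

```python
# Once introspection screens off the action, P(big box is full) is some
# fixed p, and the extra $1000 makes two-boxing dominate for every p.
def expected_payoff(two_box, p_full):
    return p_full * 1_000_000 + (1_000 if two_box else 0)

for p in (0.0, 0.5, 1.0):
    assert expected_payoff(True, p) > expected_payoff(False, p)
print("two-boxing dominates once the prediction is screened off")
```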

Replies from: AlexMennen
comment by AlexMennen · 2011-01-11T23:04:40.979Z · LW(p) · GW(p)

Oh, I see. I hadn't even thought about the fact that EDT fails Newcomb's problem if the prediction is revealed beforehand.

Edit: Wait a minute, I'm not sure that works. The predictor's decision depends on what your final decision will be, so noting your inclination to one-box or two-box does not completely screen off your final decision from the contents of the box that may or may not contain $1 million.

The transparent Newcomb's problem is still a fatal flaw in EDT, though.

Replies from: benelliott
comment by benelliott · 2011-01-11T23:49:30.162Z · LW(p) · GW(p)

One could apply the same logic to Solomon's problem, and say that the difference between charismatic and uncharismatic leaders is that while charismatic leaders may feel tempted to call for someone else's wife, they eventually find a reason not to, while uncharismatic leaders may feel worried about losing power but eventually find a reason why they can still call for another man's wife without putting themselves at any more risk. In other words, normal EDTs are charismatic; tickle-EDTs, CDTs, and TDTs are uncharismatic.

Leaving aside the question of how the populace can tell the difference (perhaps we postulate a weaker version of whatever power Omega uses), the final decision logic now goes back to not calling for another man's wife.

comment by SilasBarta · 2011-01-11T22:56:34.286Z · LW(p) · GW(p)

The cited article is not available for free. Also, I'm more interested in the situation with the tickle defense than with the metatickle defense, because assuming zero capacity for introspection seems like a silly thing to do when formulating a decision theory.

Then I don't think I have anything to offer on that front.

Also, in regards to people claiming that two-boxing on Newcomb's problem is an advantage for the tickle defense, that seems very strange. What's the point in avoiding CDT if you're going to trick yourself into two-boxing anyway?

That's exactly the point EY is making: they're taking their intuitions as supreme and finding out how they can fit the decision theory to the intuitions rather than looking at results and working backward to what decision theories get the good results.

comment by itaibn0 · 2012-08-14T17:03:45.073Z · LW(p) · GW(p)

I would like to say that I agree with the arguments presented in this post, even though the OP eventually retracted them. I think the arguments for why EDT leads to the wrong decision are themselves wrong.

As mentioned by others, EY referred to this argument as the 'tickle defense' in section 9.1 of his TDT paper. I am not defending the advocates whom EY attacked, since (assuming EY hasn't misrepresented them) they have made some mistakes of their own. In particular, they argue for two-boxing.

I will start by talking about the ability to introspect. Imagine God promised Solomon that he won't be overthrown. Then the decision of whether or not to sleep with other men's wives is easy, and Solomon can just act on his preferences. Yet if Solomon can't introspect, then in the original situation he doesn't know whether he prefers sleeping with others' wives or not. So Solomon not being able to introspect means that there is information he can rationally react to in some situations and not in others. While problems like that can occur in real people, I don't expect a theory of rational behavior to have to deal with them. So I assume an agent knows what its preferences are, or else fails to act on them consistently.

In fact, the meta-tickle defense doesn't really deal with lack of introspection either. It assumes an agent can think about an issue and 'decide' on it, only to not act on that decision but rather to use that 'decision' as information. An agent that really couldn't introspect wouldn't be able to do that.

The tickle defense has been used to defend two-boxing. While this argument isn't mentioned in the paper, it is described in one of the comments here. This argument has been rebutted by the original poster, AlexMennen. I would like to add something to that: for an agent to find out for sure whether it is a one-boxer or a two-boxer, the agent must make a complete simulation of itself in Newcomb's problem. If it tries to find this out as part of its strategy for Newcomb's problem, it will get into an infinite loop.
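A toy illustration of that regress (hypothetical code, not any real decision procedure): a strategy that consults a full self-simulation to learn its own choice never bottoms out.

```python
def decide():
    """Choose by first simulating what this very procedure decides,
    which requires another full simulation, and so on without end."""
    prediction = decide()  # simulate myself to learn my own choice
    return prediction      # never reached

try:
    decide()
except RecursionError:
    print("self-simulation of the deciding agent never bottoms out")
```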

benelliott raised a final argument here. He postulated that charisma is not related to a preference for screwing wives, but rather to whether a king's reasoning would lead him to actually do it. Here I have to question whether the hypothetical situation makes sense. For real people, an intrinsic personality trait might change their bottom-line conclusion, but this behavior is irrational. An ideal rational agent cannot have a trait of the form charisma is postulated to have. benelliott also left open the possibility that the populace have Omega-like abilities, but then the situation is really just another form of Newcomb's problem, and the rational choice is to not screw wives.

Overall I think that EDT actually does lead to rational behavior in these sorts of situations. In fact, I think it is better than TDT, because TDT relies on computations with one right answer not only having probabilities and correlations, but also on there being causality between them. I am unconvinced of this and unsatisfied with the various attempts to deal with it.

comment by JenniferRM · 2011-01-11T06:34:14.876Z · LW(p) · GW(p)

Not sure about all the abstract issues involved, but I read a blog where a more practical version of similar issues came up with startups. This might be a more interesting test case because you can do more research and be surprised by data and so on?

In any case, Rolf Nelson writes:

Having a co-founder right away might be a "bozo filter" for an investor: it helps convince an early investor that you're not a bozo who nobody sane will work with. However, if you are such a bozo, somehow getting a co-founder won't change the fact that you're a bozo; and conversely, if you're not bozish (bozonic?), deciding to initially forgo having a co-founder will not turn you into an innate bozo. Therefore, I speculate it may be vitally important for an investor to insist on co-founders, but it may be less important from the founder's point-of-view for a pre-funding founder to immediately acquire co-founders.

Replies from: benelliott
comment by benelliott · 2011-01-11T12:03:50.853Z · LW(p) · GW(p)

I think that's an issue of signalling rather than Evidential decision theory. It's clearly the right decision to get a co-founder whatever theory you use, because there is a clear causal link between getting a co-founder and getting money from an investor.

Replies from: JenniferRM
comment by JenniferRM · 2011-01-11T18:53:01.720Z · LW(p) · GW(p)

I think the puzzle Rolf was wondering about was the ordering of the steps in between Step 1 "I'm going to start something" and Step N "Approach investor". Do you want "Get co-founder" to be step 2 (before you're even sure what the idea is and who you should be looking for) or step N-1 (when you know a lot more from steps 2, 3, ..., N-3, N-2 but still don't know how hard the co-founder step will actually be)?

If it's purely a matter of signaling to the investors then you'd think that later would be OK. If it's a matter of learning something important about yourself then maybe the co-founder step should be earlier so you don't accidentally waste your time on something you weren't cut out for anyway... unless we hypothesize some sort of bozo-curing self-cultivation that could be performed?

And in the meantime, because this is reality rather than a thought experiment, there is the complicating factor of the value of the co-founder in themselves, bringing expertise and more hands and someone to commiserate/celebrate with and so on.

Replies from: benelliott
comment by benelliott · 2011-01-11T18:58:28.237Z · LW(p) · GW(p)

In that case searching for a co-founder is useful not only for signalling, but for finding out an unknown fact about your own abilities. Once again, something that all seriously considered decision theories recommend, as there is an indisputable causal link between knowing your own abilities and doing well.