Posts

Comments

Comment by scmbradley on What Is Signaling, Really? · 2012-07-15T10:05:26.730Z · LW · GW

Signals by Brian Skyrms is a great book in this area. It shows how signalling can evolve in even quite simple set-ups.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-20T17:49:32.209Z · LW · GW

So I agree. It's lucky I've never met a game theorist in the desert.

Less flippantly: the logic is pretty much the same, yes. But I don't see that as a problem for the point I'm making, which is that the perfect predictor isn't a thought experiment we should worry about.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-20T17:44:34.519Z · LW · GW

Elsewhere on this comment thread I've discussed why I think those "rules" are not interesting. Basically, because they're impossible to implement.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-17T12:53:24.083Z · LW · GW

According to what rules? And anyway I have preferences for all kinds of impossible things. For example, I prefer cooperating with copies of myself, even though I know it would never happen, since we'd both accept the dominance reasoning and defect.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-17T12:45:36.540Z · LW · GW

So these alternative decision theories have relations of dependence going back in time? Are they sort of counterfactual dependences like "If I were to one-box, Omega would have put the million in the box"? That just sounds like the Evidentialist "news value" account. So it must be some other kind of relation of dependence going backwards in time that rules out the dominance reasoning. I guess I need "Other Decision Theories: A Less Wrong Primer".

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-17T12:41:11.227Z · LW · GW

See mine and orthonormal's comments on the PD on this post for my view of that.

The point I'm struggling to express is that I don't think we should worry about the thought experiment, because I have the feeling that Omega is somehow impossible. The suggestion is that Newcomb's problem makes a problem with CDT clearer. But I argue that Newcomb's problem creates the problem. The flaw is not with the decision theory, but with the concept of such a predictor. So you can't use CDT's "failure" in this circumstance as evidence that CDT is wrong.

Here's a related point: Omega will never put the money in the box. Smith acts like a one-boxer. Omega predicts that Smith will one-box. So the million is put in the opaque box. Now Omega reasons as follows: "Wait though. Even if Smith is a one-boxer, now that I've fixed what will be in the boxes, Smith is better off two-boxing. Smith is smart enough to realise that two-boxing is dominant, once I can't causally affect the contents of the boxes." So Omega doesn't put the money in the box.

Would one-boxing ever be advantageous if Omega were reasoning like that? No. The point is Omega will always reason that two-boxing dominates once the contents are fixed. There seems to be something unstable about Omega's reasoning. I think this is related to why I feel Omega is impossible. (Though I'm not sure how the points interact exactly.)
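To make the instability concrete, here's a toy sketch in Python (the payoffs are the usual stipulated ones; the code is just illustrative). Against an agent who always applies the dominance argument once the contents are fixed, the only prediction Omega's reasoning doesn't overturn is "two-box", so the million never goes in:

```python
# Toy Newcomb payoffs: (did Omega fill the opaque box?, does the agent one-box?)
# mapped to the agent's payoff in dollars. Numbers are the usual stipulations.
PAYOFFS = {
    (True, True): 1_000_000,   # filled, agent one-boxes
    (True, False): 1_001_000,  # filled, agent two-boxes
    (False, True): 0,          # empty, agent one-boxes
    (False, False): 1_000,     # empty, agent two-boxes
}

def agent_one_boxes(filled: bool) -> bool:
    """Dominance reasoning with the contents fixed: one-box only if it
    pays strictly more. Given these payoffs, it never does."""
    return PAYOFFS[(filled, True)] > PAYOFFS[(filled, False)]

# Omega looks for a prediction its own reasoning doesn't overturn:
# it fills the box iff it predicts one-boxing.
for predicted_one_box in (True, False):
    actual = agent_one_boxes(filled=predicted_one_box)
    status = "stable" if actual == predicted_one_box else "unstable"
    print(f"predict one-box={predicted_one_box}: agent one-boxes={actual} ({status})")
# Only "predict two-box" is stable, so Omega never puts the million in.
```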

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-16T14:03:32.530Z · LW · GW

Aha. So when agents' actions are probabilistically independent, only then does the dominance reasoning kick in?

So the causal decision theorist will say that the dominance reasoning is applicable whenever the agents' actions are causally independent. So do these other decision theories deny this? That is, do they claim that the dominance reasoning can be unsound even when my choice doesn't causally impact the choice of the other?

Comment by scmbradley on Schelling fences on slippery slopes · 2012-03-16T13:56:56.186Z · LW · GW

Given the discussion, strictly speaking the pill reduces Gandhi's reluctance to murder by 1 percentage point, not 1%.
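To spell out the arithmetic (the starting level is made up):

```python
reluctance_pct = 95.0  # made-up starting level of reluctance, in percent

after_one_point = reluctance_pct - 1.0        # down 1 percentage point: 94.0
after_one_percent = reluctance_pct * 99 / 100  # down 1% of its value: 94.05

print(after_one_point, after_one_percent)  # 94.0 vs 94.05 — not the same claim
```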

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-16T13:54:47.743Z · LW · GW

Wouldn't you like to be the type of agent who cooperates with near-copies of yourself? Wouldn't you like to be the type of agent who one-boxes?

Yes, but it would be strictly better (for me) to be the kind of agent who defects against near-copies of myself when they co-operate in one-shot games. It would be better to be the kind of agent who is predicted to one-box, but then two-box once the money has been put in the opaque box.

But the point is really that I don't see it as the job of an alternative decision theory to get "the right" answers to these sorts of questions.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-16T13:47:03.538Z · LW · GW

we might ask whether it is preferable to be the type of person who two boxes or the type of person who one boxes. As it turns out it seems to be more preferable to one-box

No. What is preferable is to be the kind of person Omega will predict will one-box, and then actually two-box. As long as you "trick" Omega, you get strictly more money. But I guess your point is you can't trick Omega this way.

Which brings me back to whether Omega is feasible. I just don't share the intuition that Omega could have the sort of predictive capacity required of it.

Comment by scmbradley on Decision Theories: A Less Wrong Primer · 2012-03-14T13:05:05.150Z · LW · GW

There are a couple of things I find odd about this. First, it seems to be taken for granted that one-boxing is obviously better than two-boxing, but I'm not sure that's right. J.M. Joyce has an argument (in his Foundations of Causal Decision Theory) that is supposed to convince you that two-boxing is the right solution. Importantly, he accepts that you might still wish you weren't a CDT (so that Omega predicted you would one-box). But, he says, in either case, once the boxes are in front of you, whether you are a CDT or an EDT, you should two-box! The dominance reasoning works in either case, once the prediction has been made and the boxes are in front of you.

But this leads me on to my second point. I'm not sure how much of a flaw it is in a decision theory that it fails Newcomb's problem, given that the problem relies on the intervention of an alien that can accurately predict what you will do. Let's leave aside the general problem of predicting real agents' actions with that degree of accuracy. If you know that the prediction of your choice affects the success of your choices, I think that reflexivity or self-reference simply makes the prediction meaningless. We're all used to self-reference being tricky, and I think in this case it just undermines the whole set-up. That is, I don't see the force of the objection from Newcomb's problem, because I don't think it's a problem we could ever possibly face.

Here's an example of a related kind of "reflexivity makes prediction meaningless". Let's say Omega bets you $100 that she can predict what you will eat for breakfast. Once you accept this bet, you now try to think of something that you would never otherwise think to eat for breakfast, in order to win the bet. The fact that your actions and the prediction of your actions have been connected in this way by the bet makes your actions unpredictable.

Going on to the prisoner's dilemma: again, I don't think that it's the job of decision theory to get "the right" result in PD. Again, the dominance reasoning seems impeccable to me. In fact, I'm tempted to say that I would want any future advanced decision theory to satisfy some form of this dominance principle: it's crazy to ever choose an act that is guaranteed to be worse. All you need to do to "fix" PD is to have the agent attach enough weight to the welfare of others. That's not a modification of the decision theory, that's a modification of the utility function.
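To make that last point concrete, here's a minimal sketch (made-up payoffs, hypothetical `altruism` weight): the dominance reasoning itself is untouched, but once the utility function weights the other player's welfare heavily enough, cooperation becomes the dominant act.

```python
# Standard PD payoffs: (my action, your action) -> (my dollars, your dollars).
PD = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(mine: float, theirs: float, altruism: float) -> float:
    """Utility = my payoff plus `altruism` times the other player's payoff."""
    return mine + altruism * theirs

def dominant_action(altruism: float) -> str | None:
    """Return the strictly dominant action under plain dominance reasoning,
    if one exists: an act that does strictly better whatever the other does."""
    for a, b in (("C", "D"), ("D", "C")):
        if all(utility(*PD[(a, other)], altruism) > utility(*PD[(b, other)], altruism)
               for other in ("C", "D")):
            return a
    return None

print(dominant_action(0.0))  # 'D' — selfish utilities: defection dominates
print(dominant_action(1.0))  # 'C' — full weight on the other: cooperation dominates
```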

Comment by scmbradley on Satisficers want to become maximisers · 2012-02-17T17:09:59.654Z · LW · GW

As I understand what is meant by satisficing, this misses the mark. A satisficer will search for an action until it finds one that is good enough, then it will do that. A maximiser will search for the best action and then do that. A bounded maximiser will search for the "best" action (best according to its bounded utility function) and then do that.

So what the satisficer picks depends on the order in which the possible actions are presented to it, in a way it doesn't for either maximiser. Now, if easier options are presented to it first, then I guess your conclusion still follows, as long as we grant the premise that self-transforming will be easy.
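Here's a toy illustration of that order-dependence (the actions, values and threshold are all made up):

```python
actions = {"a": 6, "b": 9, "c": 7}  # action -> value, made-up numbers

def satisfice(ordering: list, threshold: int = 5) -> str:
    """Take the first action whose value is good enough."""
    return next(a for a in ordering if actions[a] >= threshold)

def maximise(ordering: list) -> str:
    """Take the best action; the ordering is irrelevant."""
    return max(ordering, key=actions.get)

print(satisfice(["a", "b", "c"]), satisfice(["c", "b", "a"]))  # a c — order matters
print(maximise(["a", "b", "c"]), maximise(["c", "b", "a"]))    # b b — order doesn't
```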

But I don't think it's right to identify bounded maximisers and satisficers.

Comment by scmbradley on Rationality Quotes February 2012 · 2012-02-03T21:25:26.468Z · LW · GW

Any logically coherent body of doctrine is sure to be in part painful and contrary to current prejudices

– Bertrand Russell, History of Western Philosophy p. 98

Bertie is a goldmine of rationality quotes.

Comment by scmbradley on A summary of Savage's foundations for probability and utility. · 2012-02-02T13:56:59.781Z · LW · GW

P6 entails that there are (uncountably) infinitely many events. It is at least compatible with modern physics that the world is fundamentally discrete, both spatially and temporally. The visible universe is bounded. So it may be that there are only finitely many possible configurations of the universe. It's a big number, sure, but if it's finite, then Savage's theorem is irrelevant: it doesn't tell us anything about what to believe in our world. This is perhaps a silly point, and there's probably a nearby theorem that works for "appropriately large finite worlds", but still. I don't think you can just uncritically say "surely the world is thus and so".
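For reference, P6 ("small-event continuity") is the culprit. Roughly, and paraphrasing Savage from memory, so treat the formulation as approximate:

```latex
\[
  f \prec g \;\Longrightarrow\; \text{for every consequence } x
  \text{ there is a finite partition } \{S_1,\dots,S_n\} \text{ of } S
  \text{ such that for each } i\colon
\]
\[
  (x \text{ on } S_i,\ f \text{ elsewhere}) \prec g
  \quad\text{and}\quad
  f \prec (x \text{ on } S_i,\ g \text{ elsewhere}).
\]
```

Informally: any event can be split into pieces each too small to matter, which forces the derived probability to be atomless, and an atomless probability space has uncountably many events. No finite state space can satisfy that.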

If this is supposed to say something normative about how I should structure my beliefs, then the structural premises should be true of the world I have beliefs about.

Comment by scmbradley on Rationality Quotes February 2012 · 2012-02-02T13:43:12.088Z · LW · GW

The greatest challenge to any thinker is stating the problem, in a way that will allow a solution

– Bertrand Russell

Comment by scmbradley on Rationality Quotes February 2012 · 2012-02-02T13:41:43.049Z · LW · GW

Anyone who can handle a needle convincingly can make us see a thread which isn't there

– E.H. Gombrich

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-19T13:50:52.518Z · LW · GW

Ah I see now. Glad we cleared that up.

Still, I think there's something to the idea that if there is a genuine debate about some claim that lasts a long time, then there might well be some truth on either side. So perhaps Russell was wrong to universally quantify over "debates" (as your counterexamples might show), but I think there is something to the claim.

Comment by scmbradley on A summary of Savage's foundations for probability and utility. · 2012-01-19T11:13:59.096Z · LW · GW

But why ought the world be such that such a partition exists for us to name? That doesn't seem normative. I guess there's a minor normative element in that it demands "If the world conspires to allow us to have partitions like the ones needed in P6, then the agent must be able to know of them and reason about them" but that still seems secondary to the demand that the world is thus and so.

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-19T11:10:55.357Z · LW · GW

Er. What? You can call it a false generalisation all you like; that isn't in itself enough to convince me it is false. (It may well be false; that's not what's at stake here.) You seem to be suggesting that merely calling it a generalisation is enough to impugn its status.

And in homage to your unconventional arguing style, here are some non sequiturs: How many angels can dance on the head of a pin? Did Thomas Aquinas prefer red wine or white wine? Was Stalin left-handed? What colour were Sherlock Holmes' eyes?

Comment by scmbradley on A summary of Savage's foundations for probability and utility. · 2012-01-18T15:38:27.141Z · LW · GW

This thought isn't original to me, but it's probably worth making. It feels like there are two sorts of axioms. I am following tradition in describing them as "rationality axioms" and "structure axioms". The rationality axioms (like the transitivity of the order among acts) are norms on action. The structure axioms (like P6) aren't normative at all. (They're about the structure of the world; how bizarre would it be to say "The world ought to be such that P6 holds of it"?)

Given this, and given the necessity of the structure axioms for the proof, it feels like Savage's theorem can't serve as a justification of Bayesian epistemology as a norm of rational behaviour.

Comment by scmbradley on Dutch Books and Decision Theory: An Introduction to a Long Conversation · 2012-01-18T15:31:04.322Z · LW · GW

What the Dutch book theorem gives you are restrictions on the kinds of will-to-wager numbers you can exhibit and still avoid sure loss. It's a big leap to claim that these numbers perfectly reflect what your degrees of belief ought to be.
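For concreteness, here's a minimal sketch of the sure-loss construction behind the theorem (the prices are made up, and the code is just illustrative):

```python
# Will-to-wager prices that violate additivity: the agent will pay
# 70 cents for "$1 if Heads" and 60 cents for "$1 if Tails", and
# 70 + 60 > 100 breaks the probability axioms.
prices_cents = {"Heads": 70, "Tails": 60}

cost = sum(prices_cents.values())  # the bookie sells both bets: agent pays 130

for outcome in ("Heads", "Tails"):
    payout = 100  # exactly one bet pays $1 (100 cents), whatever happens
    print(f"{outcome}: agent nets {payout - cost} cents")  # -30 either way
```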

But that leap isn't really what's at issue. The point I was making is that even among imperfect reasoners, there are better and worse ways to reason. We've sorted out the perfect case now; it's been done to death. Let's look at what kind of imperfect reasoning is best.

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-18T15:25:37.585Z · LW · GW

What do you mean "the statement is affected by a generalisation"? What does it mean for something to be "affected by a generalisation"? What does it mean for a statement to be "affected"?

The claim is a general one. Are general claims always false? I highly doubt that. That said, this generalisation might be false, but it seems like establishing that would require more than just pointing out that the claim is general.

Comment by scmbradley on Dutch Books and Decision Theory: An Introduction to a Long Conversation · 2012-01-17T17:46:00.105Z · LW · GW

I think this misses the point, somewhat. There are important norms on rational action that don't apply only in the abstract case of the perfect bayesian reasoner. For example, some kinds of nonprobabilistic "bid/ask" betting strategies can be Dutch-booked and some can't. So even if we don't have point-valued will-to-wager values, there are still sensible and not sensible ways to decide what bets to take.

Comment by scmbradley on Dutch Books and Decision Theory: An Introduction to a Long Conversation · 2012-01-17T17:41:19.679Z · LW · GW

If you weaken your will-to-wager assumption and effectively allow your agents to offer bid-ask spreads on bets (I'll buy bets on H for x, but sell them for y), then you get "Dutch book like" arguments that show that your beliefs conform to Dempster-Shafer belief functions, or Choquet capacities, depending on what other constraints you allow.

Or, if you allow that the world is non-classical – that the function that decides which propositions are true is not a classical logic valuation function – then you get similar results.

Other arguments for having probability theory be the right representation of belief include representation theorems of various kinds, Cox's theorem, going straight from qualitative probability orderings, gradational accuracy style arguments…
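To illustrate the first of those: the buying prices a bid-ask agent posts needn't be additive, and a Dempster-Shafer belief function is exactly the kind of non-additive structure they can form. A small sketch with made-up numbers:

```python
# A Dempster-Shafer mass function on subsets of {Heads, Tails}; masses are
# in percent to keep the arithmetic exact. Mass left on the whole space
# encodes the agent's refusal to commit either way.
mass = {
    frozenset({"Heads"}): 30,
    frozenset({"Tails"}): 40,
    frozenset({"Heads", "Tails"}): 30,
}

def belief(event: frozenset) -> int:
    """bel(A) = total mass on subsets of A — read as a buying (bid) price."""
    return sum(m for b, m in mass.items() if b <= event)

heads, tails = frozenset({"Heads"}), frozenset({"Tails"})
print(belief(heads), belief(tails))    # 30 40 — the two bid prices (cents on $1)
print(belief(heads) + belief(tails))   # 70 < 100: non-additive, but coherent
print(belief(heads | tails))           # 100 — full belief in the tautology
print(100 - belief(tails))             # 60 — the matching ask price for Heads
```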

Comment by scmbradley on Dutch Books and Decision Theory: An Introduction to a Long Conversation · 2012-01-17T17:33:14.170Z · LW · GW

This seems to be orthogonal to the current argument. The Dutch book argument says that your will-to-wager fair betting prices for dollar stakes had better conform to the axioms of probability. Cox's theorem says that your real-valued logic of plausible inference had better conform to the axioms of probability. So you need the extra step of saying that your betting behaviour should match up with your logic of plausible inference before the arguments support each other.

Comment by scmbradley on Dutch Books and Decision Theory: An Introduction to a Long Conversation · 2012-01-17T17:29:55.865Z · LW · GW

Savage's representation theorem in Foundations of Statistics starts assuming neither. He just needs some axioms about preference over acts, some independence concepts and some pretty darn strong assumptions about the nature of events.

So it's possible to do it without assuming a utility scale or a probability function.

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-17T17:11:59.824Z · LW · GW

I've had rosewater flavoured ice cream.

I bet cabbage ice cream does not taste as nice.

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-17T17:05:27.725Z · LW · GW

Sorry I'm new. I don't understand. What do you mean?

Comment by scmbradley on Welcome to Less Wrong! (2012) · 2012-01-08T21:43:45.478Z · LW · GW

I have lots of particular views and some general views on decision theory. I picked on decision theory posts because it's something I know something about. I know less about some of the other things that crop up on this site…

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-03T23:13:21.908Z · LW · GW

it is clear that each party to this dispute – as to all that persist through long periods of time – is partly right and partly wrong

— Bertrand Russell History of Western Philosophy (from the introduction, again.)

Comment by scmbradley on Rationality quotes January 2012 · 2012-01-03T23:12:02.532Z · LW · GW

Uncertainty, in the presence of vivid hopes and fears, is painful, but must be endured if we wish to live without the support of comforting fairy tales

— Bertrand Russell, History of Western Philosophy (from the introduction)

Comment by scmbradley on Welcome to Less Wrong! (2012) · 2012-01-03T19:11:38.519Z · LW · GW

Hi. I'll mostly be making snarky comments on decision theory related posts.

Comment by scmbradley on So You Want to Save the World · 2012-01-03T19:00:13.502Z · LW · GW

The VNM utility theorem implies there is some good we value highest? Where has this come from? I can't see how this could be true. The utility theorem only applies once you've fixed what your decision problem looks like…