Rationality Reading Group: Part W: Quantified Humanism

post by Gram_Stone · 2016-03-24T03:48:57.995Z · LW · GW · Legacy · 5 comments


This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part W: Quantified Humanism (pp. 1453-1514) and Interlude: The Twelve Virtues of Rationality (pp. 1516-1521). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

W. Quantified Humanism

281. Scope Insensitivity - The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.

282. One Life Against the World - Saving one life and saving the whole world provide the same warm glow. But, however valuable a life is, the whole world is billions of times as valuable. The duty to save lives doesn't stop after the first saved life. Choosing to save one life when you could have saved two is as bad as murder.

283. The Allais Paradox - Offered choices between gambles, people make decisions that are inconsistent by the standards of decision theory (a worked sketch of the classic gambles appears after this list).

284. Zut Allais! - Eliezer's second attempt to explain the Allais Paradox, this time drawing motivational background from the heuristics and biases literature on incoherent preferences and the certainty effect.

285. Feeling Moral - Our moral preferences shouldn't be circular. If a policy A is better than B, and B is better than C, and C is better than D, and so on, then policy A really should be better than policy Z.

286. The "Intuitions" Behind "Utilitarianism" - Our intuitions, the underlying cognitive tricks that we use to build our thoughts, are an indispensable part of our cognition. The problem is that many of those intuitions are incoherent, or are undesirable upon reflection. But if you try to "renormalize" your intuitions, you wind up with what is essentially utilitarianism.

287. Ends Don't Justify Means (Among Humans) - Humans have evolved adaptations that let them sincerely believe their policy proposals help the tribe while the policies they actually enact serve themselves. As a general rule, there are certain things you should never do, even if you come up with persuasive reasons that they're good for the tribe.

288. Ethical Injunctions - Understanding more about ethics should make your moral choices stricter, but people usually use a surface-level knowledge of moral reasoning as an excuse to make their moral choices more lenient.

289. Something to Protect - Many people only start to grow as rationalists when they find something they care about more than they care about rationality itself. It takes something really scary to cause you to override your intuitions with math.

290. When (Not) to Use Probabilities - When you don't have a numerical procedure to generate probabilities, you're probably better off using your own evolved abilities to reason in the presence of uncertainty.

291. Newcomb's Problem and Regret of Rationality - Newcomb's problem is a very famous decision theory problem in which the rational move appears to be consistently punished. This is the wrong attitude to take. Rationalists should win. If your particular ritual of cognition consistently fails to yield good results, change the ritual.
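
As a concrete illustration of the inconsistency in items 283-284, here is a minimal sketch. The dollar figures follow one common presentation of the experiment (the exact amounts aren't the point), and the expected_value helper is just for illustration:

```python
# A small sketch of the Allais-style preference reversal from items 283-284.
# The dollar amounts follow one common presentation of the experiment; the
# structure, not the exact numbers, is what matters.

def expected_value(gamble):
    """Expected dollar value of a gamble given as (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in gamble)

# Experiment 1: most people prefer the certain $24,000 (1A) over 1B.
gamble_1a = [(1.0, 24_000)]
gamble_1b = [(33 / 34, 27_000), (1 / 34, 0)]

# Experiment 2: the same pair with every probability multiplied by 0.34.
# Most people now prefer 2B over 2A.
gamble_2a = [(0.34, 24_000), (0.66, 0)]
gamble_2b = [(0.33, 27_000), (0.67, 0)]

for name, gamble in [("1A", gamble_1a), ("1B", gamble_1b),
                     ("2A", gamble_2a), ("2B", gamble_2b)]:
    print(f"{name}: expected value = ${expected_value(gamble):,.2f}")

# Because experiment 2 is just experiment 1 with a common factor applied to
# every probability, the Axiom of Independence says your preference ordering
# should be the same in both.  Preferring 1A and 2B at the same time (the
# common pattern) is the inconsistency, whatever your attitude toward risk.
```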

Interlude: The Twelve Virtues of Rationality

An essay enumerating eleven named virtues - curiosity, relinquishment, lightness, evenness, argument, empiricism, simplicity, humility, perfectionism, precision, and scholarship - plus a twelfth virtue which is nameless.


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group, though, is the discussion, which takes place in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Beginnings: An Introduction (pp. 1527-1530) and Part X: Yudkowsky's Coming of Age (pp. 1535-1601). The discussion will go live on Wednesday, 6 April 2016, right here on the discussion forum of LessWrong.

5 comments

Comments sorted by top scores.

comment by torekp · 2016-04-01T22:33:48.593Z · LW(p) · GW(p)

So after reading the Allais paradox posts or being otherwise familiar with the topic, what do lesswrongers think? [pollid:1133]

Replies from: Gram_Stone
comment by Gram_Stone · 2016-04-02T13:32:25.622Z · LW(p) · GW(p)

So, I think that this is actually a loaded question, one that may stem from a common misconception about the thrust of Eliezer's arguments when he juxtaposes normative decision theory with empirical observations about human behavior. If your question is implicitly about normative decision theory, then yes, conformance to the Axiom of Independence is a requirement of rationality. But it's clear that humans cannot do the math of probability theory and decision theory in real time, and that they evolved in a very particular environment that is not much like the skeletal reality that normative decision agents inhabit. This is why we have things like framing effects and risk aversion (the example in the Allais paradox): you build a scale for the situation you're in because it lets you cheaply approximate the normative approach, or you pick certainty over uncertainty because most biological creatures have to worry about ruin. This also means that you use different scales in different situations, even trivially different ones, so if we modeled you as a normative agent, you would have inconsistent preferences. Obviously we can't get through the day without framing effects, but it seems to help to have an idea of the psychological reasons why we sometimes take normatively stupid bets, and to be able to decide when to rely on framing effects and risk aversion as tractable, helpful heuristics, and when to throw them out and do something that scares your jury-rigged brain but is probably a good idea anyway. And it can't hurt to know how to do this: if you knew how to evaluate situations and decide whether or not to use a heuristic like risk aversion, you could always just fall back on the strategy you would have used if you didn't know how.

Replies from: torekp
comment by torekp · 2016-04-02T19:34:19.510Z · LW(p) · GW(p)

The poll question takes the Axiom to be a normative principle, not a day-to-day recipe for every decision. I agree that the case for it as a normative principle is better than the case for it as a prescription. I just don't think it's a completely convincing case.

I agree with Wei Dai's remark that

the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)

If a Dutchman throws a book at you - duck! You don't need to be the sort of agent to whom expected utility theory applies.

The deep reason why utility theory fails to be required by rationality is that there is no general separability between the decision process itself and the "outcomes" that agents care about. I'm putting "outcomes" in scare quotes because the term strongly suggests that what matters is the destination, not the journey (where the journey includes the decision process and its features, such as risk).

There are many particular occasions, at least for many agents (including me), on which there is such separability. That's why I find expected utility theory useful. But rationally required? Not so much.

Here's a toy version of the journey/destination problem. (I think I'm borrowing from Kaj Sotala, who probably said it better, but I can't find the original.) Suppose I sell my convertible Monday for $5000 and buy an SUV for $5010. On Tuesday I sell the SUV for $5000 and buy a Harley for $5010. On Wednesday I sell the Harley for $5000 and buy the original convertible back for $5010. Oh no, I've been money pumped! Except, wait - I got to drive a different vehicle each day, something that I enjoy. I'm out $30, but that might be a small price to pay for the privilege. This example doesn't involve risk per se, but it does illustrate the care needed to define "outcomes" in a way that doesn't beg the question against an agent's values.
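
For what it's worth, here is the same toy story as a quick tally - just a sketch; the novelty_value_per_day figure is a made-up placeholder for the enjoyment of driving a different vehicle each day, which is exactly the part the dollar "outcome" leaves out:

```python
# Tally of the vehicle-swapping story: each day's trade loses $10, so the
# "money pump" costs $30 over three days.  The novelty value is a purely
# hypothetical placeholder for the enjoyment of driving a different vehicle
# each day, i.e. the part of the journey the dollar outcome doesn't capture.

trades = [
    ("Monday",    5000 - 5010),  # sell convertible, buy SUV
    ("Tuesday",   5000 - 5010),  # sell SUV, buy Harley
    ("Wednesday", 5000 - 5010),  # sell Harley, buy back the convertible
]

net_dollars = sum(delta for _, delta in trades)
print(f"Net dollar change: {net_dollars}")  # -30

novelty_value_per_day = 20  # hypothetical: what the variety is worth to this agent
print(f"Net including the journey: {net_dollars + novelty_value_per_day * len(trades)}")  # 30
```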

Replies from: Gram_Stone
comment by Gram_Stone · 2016-04-02T20:23:51.577Z · LW(p) · GW(p)

Thanks for all of this, I wasn't aware of any of these things.

The poll question takes the Axiom to be a normative principle, not a day to day recipe for every decision.

This may sound nitpicky, but poll questions don't take anything to be anything; people do. I wonder whether your results will be skewed by people who actually do make the mistake I wrongly attributed to you, or ignored by people like me who think they know enough to find the question silly but actually don't understand it. I almost skipped the poll entirely, and would never have read your wonderful comment. Maybe you could add some elaboration in the OP, or suggest that voters read this thread? Not sure.

Replies from: torekp
comment by torekp · 2016-04-03T00:27:38.017Z · LW(p) · GW(p)

Sure, if there were more people answering the poll, there'd probably be some who took the Axiom of Independence, and/or expected utility theory, in the way you worried about. It's a fair point. But so far I'm the only skeptical vote.