Harmful Options

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-25T02:26:22.000Z · LW · GW · Legacy · 45 comments

Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn't ought to happen to rational agents:  If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X.  So a rational agent shouldn't do worse by having more options.  The more available actions you have, the more powerful you become—that's how it should ought to work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb's Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn't regret having more options.  Such regret indicates that you're "fighting your own ritual of cognition" which helplessly selects the worse choice once it's offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations.  Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them.  You could write an invincible chess program in one line of Python if its only legal move were the winning one.
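
To make the computing-cost point concrete, here is a minimal sketch in Python (the language the post itself name-drops); the budget, option values, and noise model are all illustrative assumptions, not anything from the research.  A bounded agent splits a fixed evaluation budget across its options, so its value estimates get noisier as the menu grows, and it starts picking worse options even though the best one never left the table:

```python
import random

def bounded_choice(option_values, eval_budget=100):
    """Toy bounded agent: splits a fixed evaluation budget across options,
    so each option's value estimate gets noisier as the menu grows."""
    samples = max(1, eval_budget // len(option_values))
    # Noisy estimate of each option's true value; noise shrinks with samples.
    estimates = [v + random.gauss(0, 1 / samples ** 0.5) for v in option_values]
    return option_values[estimates.index(max(estimates))]

# One good option (value 1.0) plus slightly worse distractors (value 0.8):
trials = 1000
for n_distractors in (1, 50):
    menu = [1.0] + [0.8] * n_distractors
    hits = sum(bounded_choice(menu) == 1.0 for _ in range(trials))
    print(n_distractors, hits / trials)  # success rate falls as the menu grows
```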

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you're dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?

Schwartz, I believe, talks primarily about the decrease in happiness and satisfaction that results from having more mutually exclusive options.  Before this research was done, it was already known that people are more sensitive to losses than to gains, generally by a factor of between 2 and 2.5 (in various different experimental scenarios).  That is, the pain of losing something is between 2 and 2.5 times as great as the joy of gaining it.  (This is an interesting constant in its own right, and may have something to do with compensating for our systematic overconfidence.)

So—if you can only choose one dessert, you're likely to be happier choosing from a menu of two than a menu of fourteen.  In the first case, you eat one dessert and pass up one dessert; in the second case, you eat one dessert and pass up thirteen desserts.  And we are more sensitive to loss than to gain.

(If I order dessert on a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)
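
As a toy model of that arithmetic (a sketch with made-up joy and pang values; only the 2-2.5 loss-aversion factor comes from the research cited above), score a meal as the joy of the chosen dessert minus a loss-weighted pang for each dessert passed up:

```python
LOSS_AVERSION = 2.25   # midpoint of the 2-2.5 range cited above
JOY = 1.0              # joy of the dessert you eat (made-up unit)
PANG = 0.05            # felt loss per dessert passed up (made-up unit)

def satisfaction(menu_size):
    """Joy of one dessert, minus loss-weighted pangs for all the rest."""
    return JOY - LOSS_AVERSION * PANG * (menu_size - 1)

print(satisfaction(2))   # 0.8875 -- pass up one dessert
print(satisfaction(14))  # -0.4625 -- pass up thirteen desserts
```

On these numbers the fourteen-item menu makes the meal a net loss; the crossover point is an artifact of the made-up constants, but the direction of the effect is the point.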

Not only that, but if the options have incommensurable attributes, then whatever option we select is likely to look worse because of the comparison.  A luxury car that would have looked great by comparison to a Crown Victoria, instead becomes slower than the Ferrari, more expensive than the 9-5, with worse mileage than the Prius, and not looking quite as good as the Mustang.  So we lose on satisfaction with the road we did take.

And then there are more direct forms of harm done by painful choices.  IIRC, an experiment showed that people who refused to eat a cookie—who were offered the cookie, and chose not to take it—did worse on subsequent tests of mental performance than either those who ate the cookie or those who were not offered any cookie.  You pay a price in mental energy for resisting temptation.

Or consider the various "trolley problems" of ethical philosophy—a trolley is bearing down on 5 people, but there's one person who's very fat and can be pushed onto the tracks to stop the trolley, that sort of thing.  If you're forced to choose between two unacceptable evils, you'll pay a price either way.  Vide Sophie's Choice.

An option need not be taken, or even be strongly considered, in order to wreak harm.  Recall the point from "High Challenge", about how offering to do someone's work for them is not always helping them—how the ultimate computer game is not the one that just says "YOU WIN", forever.

Suppose your computer games, in addition to the long difficult path to your level's goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way.  And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn't that always-visible corridor make the game that much less fun?  Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

I don't know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a "deal with the Devil" story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

And what if the worse option is actually taken?  I'm not suggesting that it is always a good idea for human governments to go around Prohibiting temptations.  But the literature of heuristics and biases is replete with examples of reproducible stupid choices; and there is also such a thing as akrasia (weakness of will).

If you're an agent operating from a much higher vantage point—high enough to see humans as flawed algorithms, so that it's not a matter of second-guessing but second-knowing—then is it benevolence to offer choices that will assuredly be made wrongly?  Clearly, removing all choices from someone and reducing their life to Progress Quest is not helping them.  But are we wise enough to know when we should choose?  And in some cases, even offering that much of a choice, even if the choice is made correctly, may already do the harm...

45 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2008-12-25T03:28:13.000Z · LW(p) · GW(p)

The hard question is: who do you trust to remove your choices, and are they justified in doing so anyway even if you don't trust them to do so?

Replies from: pnrjulius
comment by pnrjulius · 2012-06-06T20:37:24.284Z · LW(p) · GW(p)

One would hope you at least trust yourself to limit your own options.

comment by Selfreferencing · 2008-12-25T03:37:53.000Z · LW(p) · GW(p)

I once spoke with David Schmidtz, a philosopher at the University of Arizona, about Schwartz's work. All he shows is that more choices make people anxious and confused. But Dave told me that he got Schwartz to admit that being anxious and confused isn't the same thing as having a net utility decrease. It's not even close.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-25T04:10:57.000Z · LW(p) · GW(p)

Robin, if people could always be trusted to say when they themselves could be trusted, the problem would have a very simple solution at the meta-level. So if you're going so far as to ask that question, then people can't trust their choices, or trust themselves to know when to trust their choices, or meta-meta-trust, etc. And this goes for everyone having the conversation. Not going anywhere in particular with this, just making the observation as a starting point.

It seems to me that adult humans, dealing with other adult humans, are very rarely justified in removing the choices of people who haven't chosen to trust them.

But we recognize e.g. parents and children as an exception, where the parents are expected to have a hugely superior epistemic position, to have (brainware-supported) motives to care for the child's best interests, and finally we have large amounts of historical experience with the situation. (It doesn't always work perfectly, but on the whole, it still seems like trusting children to know when to trust their parents would be worse.)

Not that this is a metaphor for anything. It's different out in the transhuman spaces.

Replies from: Dojan
comment by Dojan · 2013-01-21T12:58:46.768Z · LW(p) · GW(p)

... and finally we have large amounts of historical experience with the situation.

This would be the mother of all sampling biases (read the mouse-over text)...

Though I won't dispute your conclusion, we are the ones who survived after all.

Replies from: MugaSofer
comment by MugaSofer · 2013-01-21T16:05:11.151Z · LW(p) · GW(p)

For those of us who can't view XKCD, could someone comment with what it said?

Replies from: ygert
comment by ygert · 2013-01-21T16:27:57.711Z · LW(p) · GW(p)

The bit that Dojan was referring to, the mouse-over text, is:

On one hand, every single one of my ancestors going back billions of years has managed to figure it [parenting] out. On the other hand, that's the mother of all sampling biases.

comment by frelkins · 2008-12-25T05:21:06.000Z · LW(p) · GW(p)

Eli, really - rickrolling!

comment by Psy-Kosh · 2008-12-25T05:39:22.000Z · LW(p) · GW(p)

He should have singularitrolled instead. :)

comment by steven · 2008-12-25T05:53:01.000Z · LW(p) · GW(p)

The rickroll example actually applies to all agents, including ideal rationalists. Basically you're giving the victim an extra option that you know the victim thinks is better than it actually is. There's no reason why this would apply to humans only or to humans especially.

comment by Daniel_Burfoot · 2008-12-25T06:27:54.000Z · LW(p) · GW(p)
(If I order dessert on a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)

I do something similar when ordering at a table with several other people. I don't even look at the menu. I arrange to order last, listen to what the other people order, and then just copy one of their orders.

The whole paradox of choice problem can be viewed through a Bayesian lens. In order to make a consistent choice from a set of 2^N options, you need at least N bits of information. This doesn't seem like a lot, but in most cases our information is totally corrupted by noise (do you really know you like cream sauce more than red sauce?). So reducing the size of the option set makes it more likely that you will be able to make the correct choice given the amount of information you have. If I'm dining with four other people at a restaurant with 64 menu options, my strategy decreases the number of bits I need from 6 to 2.
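
A quick sketch of that calculation (the restaurant numbers are the commenter's own; the rest is just log base 2):

```python
import math

def bits_needed(n_options):
    """Bits of preference information needed to choose consistently
    among n_options (log2 of the size of the option set)."""
    return math.log2(n_options)

print(bits_needed(64))  # 6.0 -- picking from the full menu
print(bits_needed(4))   # 2.0 -- picking among four companions' orders
```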

Many other techniques can be interpreted in this light. One notable example is Warren Buffett's "buy and hold" strategy for investing. Most investment strategies involve the investor buying and selling various stocks at different times, based on whatever analysis he has conducted. Obviously this requires repeated decision making. An investor applying buy and hold makes a far smaller set of decisions, thereby maximizing the power of the information he has obtained.

comment by Ben_Jones · 2008-12-25T11:26:43.000Z · LW(p) · GW(p)

Hey Rick Astley! Much better than this decision theory crap.

Came across this at work yesterday, which isn't unrelated. For every level of abstraction involved in a decision, or extra option added, I guess we should just accept that 50% of the population will fall by the wayside. Or start teaching decision theory in little school.

Happy Nondenominational Winter Holiday Period, all. Keep it rational.

comment by Roland2 · 2008-12-25T13:41:00.000Z · LW(p) · GW(p)

Eliezer,

what would be the right thing to do regarding our own choices? Should we limit them? Somehow this seems related to the internet, where you always have to choose when to click another link and when to stop reading. Timothy Ferriss also recommends a low-information diet. I'm just brainstorming a bit here.

comment by Dustin3 · 2008-12-25T17:46:04.000Z · LW(p) · GW(p)

I never thought I'd get rickrolled by Yudkowsky.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-25T18:46:45.000Z · LW(p) · GW(p)

Dustin, I've found that people who only read my online writings sometimes come away with this completely wrong picture of my personality.

My father has business cards identifying him as an Assassin working for the Martian Government in Exile. Genes is genes.

comment by Vladimir_Nesov · 2008-12-25T19:07:52.000Z · LW(p) · GW(p)

I now usually simply trust the salesperson's choice, after explaining my requirements, only checking that his choice seems to satisfy them, rather than trying to optimize over all the available options. It's probably the main thing salespeople are for in the first place: not to provide expertise (which they often don't have), or even to find the best option for your requirements, but to simplify the choosing process, lifting the psychological weight off the customer.

Replies from: Dojan
comment by Dojan · 2013-01-21T13:23:12.222Z · LW(p) · GW(p)

Well, besides making the customer believe that s/he actually needs something more expensive than they thought...

comment by Tim_Tyler · 2008-12-25T21:31:31.000Z · LW(p) · GW(p)

Re: Barry Schwartz's The Paradox of Choice [...] talks about how offering people more choices can make them less happy. A simple intuition says this shouldn't ought to happen to rational agents: If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X. So a rational agent shouldn't do worse by having more options. The more available actions you have, the more powerful you become - that's how it should ought to work.

This makes no sense to me. A blind choice between a lady and a tiger is preferable to a blind choice between a lady and two tigers. Problems arise when you don't know that the other choices are worse. So having more choices can be really bad - in a way that has nothing to do with the extra cycles burned in evaluating them.

comment by John_Maxwell_Old (John_Maxwell) · 2008-12-25T22:06:49.000Z · LW(p) · GW(p)
Barry Schwartz's The Paradox of Choice - which I haven't read, though I've read some of the research behind it

Yay, a book I've read that Eliezer hasn't! That said, I don't actually recommend it; it was kinda tedious and repetitive.

comment by Doug_S. · 2008-12-25T22:20:19.000Z · LW(p) · GW(p)

I wasn't too surprised when I saw that the video was Rick Astley. It's the standard internet prank, these days.

It could be worse, though. At least people have stopped referring people to Goatse, and this never caught on.

comment by frelkins · 2008-12-25T23:58:19.000Z · LW(p) · GW(p)

@Doug S

Rickrolling is bad for you. It really is. It devalues your online social currency - the internet is a link economy, right? - and causes people to trust your information less. Trust is the ultimate value, not only in the stock market but also in social networking.

comment by Mike_Blume · 2008-12-26T00:51:49.000Z · LW(p) · GW(p)

Is it possible that Eliezer has indirectly answered Robin's question about gifting from a few days ago? That is, is it possible that I gain more benefit from a copy of Tropic Thunder given to me by my brothers than from one I purchase myself? By giving it to me as a gift, they have removed from me the necessity of comparing it to other films I could purchase, as well as the thought that I could have spent the money in a more "responsible" fashion.

Replies from: taryneast, pnrjulius
comment by taryneast · 2011-06-07T17:21:48.504Z · LW(p) · GW(p)

Hmm, I guess that's why it's always nice to get non-sensible gifts.

comment by pnrjulius · 2012-06-06T20:43:23.119Z · LW(p) · GW(p)

It also depends how good your brothers are at evaluating your taste in films. But they are probably better than most of your other sources (especially advertising).

Though that part about escaping "responsible" spending doesn't actually do much, since you could always sell the DVD on eBay and use the money to buy something else. It's easy to get caught up in sunk cost fallacy and endowment effect though—thinking you should keep it just because you have it. (I guess the resale value is probably a bit less than the original value, so there does exist a narrow region of utility where the movie is worth owning if you already have it but not worth buying yourself. But as I said, this is narrow—on the order of $5—and hence improbable.)

comment by Doug_S. · 2008-12-26T05:24:45.000Z · LW(p) · GW(p)

frelkins: I agree. I'm just pointing out that, as far as pranks go, there are worse things on the web than Rick Astley music videos.

comment by Phil_Goetz2 · 2008-12-26T17:59:11.000Z · LW(p) · GW(p)

It is deeply creepy and disturbing to hear this talk from someone who already thinks he knows better than just about everybody about what is good for us, and who plans to build an AI that will take over the world.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-26T20:36:49.000Z · LW(p) · GW(p)

I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment.

comment by Phil_Goetz2 · 2008-12-26T21:16:29.000Z · LW(p) · GW(p)

Eliezer, I have probably made any number of inaccurate depictions of your opinions, but you can't back away from these ones. You DO generally think that your opinion on topics you have thought deeply about is more valuable than the opinion of almost everyone, and you HAVE thought deeply about fun theory. And you ARE planning to build an AI that will be in control of the world. You might protest that "take over the world" has different connotations. But there's no question that you plan for your AI to be in charge.

comment by Nordsieck · 2008-12-26T21:30:40.000Z · LW(p) · GW(p)

I always thought that the difference between "pushing the fat man in front of the train" and "switching the direction of the fork in the tracks" was due to people not believing the questioner at a deep level, because problems constructed by stipulation don't work that way in the real world.

comment by Phil_Goetz2 · 2008-12-28T03:00:37.000Z · LW(p) · GW(p)

Eliezer: "I'll go ahead and repeat that as Goetz's misunderstandings of me and inaccurate depictions of my opinions are frequent and have withstood frequent correction, that I will not be responding to Goetz's comment."

Really? I challenge you to point to ONE post in which you have tried to correct a misunderstanding by me of your opinion, rather than just complaining about my "misunderstandings" without even saying what the misunderstanding was.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-12-28T03:33:42.000Z · LW(p) · GW(p)

@Goetz: Quick googling turned up this SL4 post. (I don't particularly give people a chance to start over when they switch forums.)

comment by Tim_Tyler · 2008-12-28T10:44:11.000Z · LW(p) · GW(p)

FWIW, Phil's point there seems to be perfectly reasonable - and not in need of correction: if a moral system tells you to do what you were going to do anyway, it isn't going to be doing much work.

Moral systems usually tell you not to do things that you would otherwise be inclined to do - on the grounds that they are bad. Common examples include taking things you want - and having sex.

Replies from: taryneast
comment by taryneast · 2011-06-07T17:25:32.336Z · LW(p) · GW(p)

I'd say that moral systems explain the deeper consequences of an action you may not have thought deeply about.

comment by Lior · 2009-01-01T01:35:12.000Z · LW(p) · GW(p)

Some people want to have a choice but don't want to bear the cost of making it. For example, some couples want the option of signing a prenuptial agreement, but prefer not to have to make that choice themselves; they would rather be required to sign one, since making the choice may make the spouse angry.

comment by Robin_Powell · 2009-01-28T23:32:45.000Z · LW(p) · GW(p)

I don't know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken - a "deal with the Devil" story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

FWIW (very little), this is exactly how I experience shows like "Ah My Goddess!". The main character routinely refuses to take advantage of a situation that I most certainly would. I can't watch stuff like that.

-Robin

comment by Tim_Lang · 2009-02-03T08:37:18.000Z · LW(p) · GW(p)

Love this topic!

Here are my thoughts about the paradox of choice: http://blog.timlang.com/2009/02/too-many-choices-unhappiness-or-why.html

I don't have the solution, but I at least tried to suggest possible categories of solution.

comment by billswift · 2009-11-27T06:01:51.006Z · LW(p) · GW(p)

Apparently the paradox of choice is rarely a factor and may not even be real; Tyler just put this up on Marginal Revolution: http://www.marginalrevolution.com/marginalrevolution/2009/11/the-paradox-of-choice-is-not-robust.html

Replies from: RobinZ
comment by RobinZ · 2009-11-27T15:23:20.145Z · LW(p) · GW(p)

Quick summary: the paradox of choice suggests that offering more options discourages people from making any selection, and reduces their satisfaction with their ultimate choice when they do. The research that Tyler Cowen cites suggests that there is no significant effect.

comment by Desrtopa · 2010-12-21T18:04:11.652Z · LW(p) · GW(p)

Although it certainly seems to be true that viable options not taken can decrease the pleasure of the option that is taken, I've noticed that I often enjoy my choices most when I am presented with many options, most of which are simply bad. Clearly bad choices don't sap willpower to reject, but I feel like there's a sense of reward in feeling that one has discriminated among one's options and made an unambiguously correct choice. I'd be interested in seeing the results of a study where subjects' satisfaction in their choices is tracked against an increasing number of bad options in addition to one good one.

comment by pnrjulius · 2012-06-06T20:36:55.618Z · LW(p) · GW(p)

I have had something like the "easy path" experience in actual video games, when they offer the option of changing the game difficulty at any time. You could play all the way through Skyrim on "Novice" difficulty if you wanted to, and you would have to be extremely incompetent not to win. But then for someone like me who plays on "Expert" (or "Master" once I'm at a high enough level), the game is more satisfying overall, but after every difficult battle, loss of a companion, etc. there's always that temptation to knock the difficulty down a step or two.

comment by A1987dM (army1987) · 2013-10-25T08:39:21.879Z · LW(p) · GW(p)

Say, would you like to finish reading this post, or watch this surprising video instead?

What does it say about me that I kept on reading (and resolved to follow the link later) because I felt too lazy to watch the video straight away?

comment by Elo · 2014-06-09T06:46:12.155Z · LW(p) · GW(p)

What about when choice inflicts the problem of the multi-armed bandit on us: multi-armed bandit on wikipedia

With more options, you need to explore each of them (at least some amount) to avoid missing out on rewards, and you might not always know that Y is worse than X, even when told specifically that Y < X.

Which is to say: someone practicing applied rationality should occasionally explore their choices to avoid missing rewards. Because of that, when spare choices come up, they create a burden of exploration, and that exploration taxes resources (even when the option is never chosen).

Isn't that a clearer description of why extra choice can be harmful?
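
A minimal sketch of that exploration burden (epsilon-greedy is just one standard bandit strategy, chosen here for brevity, and all the reward numbers are made up): with a fixed exploration rate, adding near-worthless arms means more pulls land on inferior options, dragging down average reward even though the best arm is always available:

```python
import random

def epsilon_greedy(true_means, steps=10_000, epsilon=0.1):
    """Toy epsilon-greedy bandit: explore a random arm with probability
    epsilon, otherwise pull the arm with the best estimate so far."""
    n = len(true_means)
    counts, estimates = [0] * n, [0.0] * n
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n)  # exploration: a random option
        else:
            arm = max(range(n), key=lambda i: estimates[i])  # exploitation
        reward = random.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

# Average reward falls as near-worthless extra arms are added:
print(epsilon_greedy([1.0, 0.5]))
print(epsilon_greedy([1.0, 0.5] + [0.0] * 20))
```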

comment by Morgan_Rogers · 2022-03-19T12:52:06.050Z · LW(p) · GW(p)

Suppose your computer games, in addition to the long difficult path to your level's goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way.  And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn't that always-visible corridor make the game that much less fun?  Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

This exact phenomenon happens in Deus Ex: Human Revolution, where you can get around almost every obstacle in the game by using the ventilation system. The frustration that results is apparent in this video essay/analysis: it undermines all of the otherwise well-designed systems in the game in spite of not actually interfering with the player's ability to engage with them.

I wonder if, alongside the "loss of rejected options" proposition, a reason that extra choices impact us is the mental bandwidth they take up. If the satisfaction we derive from a choice is (to a first-order approximation) proportional to our intellectual and emotional investment in the option we select, then having more options leaves less to invest as soon as the options go from being free to having any cost at all. As an economic analogy, a committee seeking to design a new product or building must choose between an initial set of designs. The more designs there are, the more resources must go into the selection procedure, and if the committee's budget is fixed, then this will remove resources that could have improved the product further down the line.