The Shadow Question

post by Alicorn · 2009-10-14T01:40:56.490Z · LW · GW · Legacy · 44 comments

This is part 2 of a sequence on problem solving.  Here's part 1, which introduces the vocabulary of "problems" versus "tasks".  This post's title is a reference[1] worth 15 geek points if you get it without Googling, and 20 if you can also get it without reading the rest of the post.

You have to be careful what you wish for.  You can't just look at a problem, say "That's not okay," and set about changing the world to contain something, anything, other than that.  The easiest way to change things is usually to make them worse.  If I owe the library fifty cents that I don't have lying around, I can't go, "That's not okay!  I don't want to owe the library fifty cents!" and consider my problem solved when I set the tardy book on fire and now owe them, not money, but a new copy of the book.  Or you can make things worse not in the specific domain of your original problem, but in some tangentially related department: I could solve my library fine problem by stealing fifty cents from my roommate and giving it to the library.  I'd no longer be indebted to the library.  But then I'd be a thief, and my roommate might find out and be mad at me.  Calling that a solution to the library fine problem would be, if not an outright abuse of the word "solution", at least a bit misleading.

So what kind of solutions are we looking for?  How do we answer the Shadow Question?  It's hard to turn a complex problem into doable tasks without some idea of what you want the world to look like when you've completed those tasks.  You could just say that you want to optimize according to your utility function, but that's a little like saying that your goal is to achieve your goals: no duh, but now what?  You probably don't even know what your utility function is; it's not a luminous feature of your mind.

For little problems, the answer to the Shadow Question may not be complete.  For instance, I have never before thought to mentally specify, when making a peanut butter sandwich, that I'd prefer that my act of sandwich-making not lead to the destruction of the Everglades.  But it's complete enough.  The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect.  But for big problems, well - we may have a problem...

Here are a few broad approaches you could take in trying to answer the Shadow Question.  Somebody please medicate me for my addiction to cutesy reference-y titles for things:

These strategies tolerate plenty of overlap, but in general, the more overlap available in a situation, the less problematic a problem you have.  If you can simultaneously enable the best case, disable the worst case, make it unlikely that anything will deteriorate, and nearly guarantee that things will improve - uh - go ahead and do that, then!  Sometimes, though, it seems like you have to organize these strategies and narrow down your plan in order.  Arrange them however you like, and in the search space each one leaves behind, optimize for the next.
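
Read programmatically, that last sentence describes something like a lexicographic filter.  Here is a minimal sketch of one possible reading (mine, not the post's); every predicate name is a hypothetical stand-in for a human judgment:

```python
# A minimal sketch of "arrange them however you like, and in the search
# space each one leaves behind, optimize for the next": each strategy
# becomes a predicate that prunes candidate plans, applied in your chosen
# priority order.

def narrow_plans(plans, strategies):
    """`strategies` is an ordered list of predicates, e.g.
    [enables_best_case, disables_worst_case, unlikely_to_deteriorate,
    nearly_guarantees_improvement], all hypothetical names supplied by
    you rather than defined anywhere in this post."""
    candidates = list(plans)
    for satisfies in strategies:
        survivors = [plan for plan in candidates if satisfies(plan)]
        if survivors:  # only narrow when something survives the filter
            candidates = survivors
    return candidates
```

Reordering `strategies` changes which plans survive, which is exactly the "arrange them however you like" degree of freedom.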

Part 3 of this sequence was meant to conclude it with a discussion of resource evaluation.  (As the author notes in the comments below, it was never posted.)

 

1"The Shadow Question" refers to the question "What do you want?", which was repeatedly asked by creatures called Shadows and their agents during the course of the splendid television show Babylon 5.

44 comments

comment by Morendil · 2009-10-14T13:57:17.144Z · LW(p) · GW(p)

There's also "All-in", aka "Go for broke", which picks high utility OR high disutility, with a distribution of probability that is less extreme than in the case of a lottery ticket (though not necessarily fifty-fifty chances). For instance "with all the hype and all the expectations I have formed, Watchmen-the-movie is either going to be a joyride or a horrible disappointment."

Assuming I understand what you're aiming at... These four don't quite seem to answer the question itself, but rather how you evaluate the possible answers to the question.

This seems to leave open the real issue, which is how you enumerate possible answers to the question.

To take a concrete example, suppose I am fed up with my job, so fed up that I'd take "something, anything, other than that". That's not literally true - it just feels that way. I'm not going to inquire at the nearest McDonald's, for instance.

In this particular case, which should count as a "problem" by your previous definition, I don't believe I would carve up the search space first in terms of approaches such as the four you offer in this post, i.e. asking what would be a big gain, or how do I guarantee no huge loss, etc. My very first question would be something like "what are the things I would be trading off against one another?"

My first pass at this, by the availability heuristic, might yield things that are salient properties of the current job (salary, location, etc.). Obviously because that's the most available thing of all, my first pass will include the reason I'm unhappy about the current job: that might be annoying coworkers, a horrible boss, etc.

One of the key skills in problem solving is to also include the less obvious attributes that (possibly) have an even greater weight in my utility function. So my second pass would be "what exactly am I trying to achieve here?" This may start to yield non-obvious insights, such as why I need a job in the first place, and what acceptable substitutes may be.

Replies from: uselessheuristic, Alicorn
comment by uselessheuristic · 2009-10-14T15:47:25.540Z · LW(p) · GW(p)

I might even say that it's better to explore as much of the problem's causal underpinnings as possible as a first pass.

As a budding design engineer, one of the things that has been hammered into me is first to understand the problem in its wider context. Oftentimes just identifying a PROBLEM as opposed to a TASK is not enough: you need to understand the system that enabled the problem to exist. What aspect of the system is directly detrimental? Why is it detrimental? What features of the system influence that detrimental aspect? Why do those features exist in the first place? Can their core function be satisfied through a different principle of operation, or by restructuring the functions and flows of the system, or even by redefining your requirements?

Only once you understand the system holistically and identify functional requirements, causal structure, and your available tools can you really begin to accurately evaluate your options.

comment by Alicorn · 2009-10-14T14:00:22.510Z · LW(p) · GW(p)

"All in", after some thought, looks like a "lottery ticket" special case - without raising the stakes, you can't get at the preferred best-case, so you raise the stakes to enable that outcome.

You've also confirmed my suspicion that I wrote these in the wrong order; I probably should have done the next one before this one.

Replies from: Morendil
comment by Morendil · 2009-10-14T14:58:27.017Z · LW(p) · GW(p)

You're welcome. :)

In what way is "all in" a special case of "lottery ticket"? Or to put it another way, how are you classifying everything that you'd see as a possible approach?

In "lottery ticket" I am guaranteed a tolerable loss, for a tiny chance of a huge gain. When going "all in" what I forsake is any outcome close to zero ("tolerable loss" or "piddling gain"). I am guaranteed an outcome of large magnitude, but the probabilities are much closer to even. Either those are different beasts, or I'm totally confused as to what you're trying to achieve with your classification, and your reply above doesn't help me at all in the latter case. (I could be patient and wait for the next post in the series, however it sounds as if my confusion would be an issue of exposition with the current post.)

Replies from: Alicorn
comment by Alicorn · 2009-10-14T20:28:11.008Z · LW(p) · GW(p)

While in the actual purchase of a literal lottery ticket, you guarantee a loss to enable a huge gain, the criterion to be a "lottery ticket" case in the Alicorn-loves-cutesy-titles sense is just that the motivation is to make the huge gain possible. Sometimes, you can do this without guaranteeing a loss of any size - all it requires is that you move to open up the possibility of a large gain. Raising the stakes does exactly that: before you raise the stakes, the large gain isn't possible. After you do so, the large gain is possible, although not guaranteed. Presumably, you'd never raise stakes if that never made it possible to win big - you wouldn't raise the stakes on a bet you were certain to lose!

Replies from: Morendil
comment by Morendil · 2009-10-14T21:03:46.834Z · LW(p) · GW(p)

I get it now, thanks.

I'll wait for your next post then, and see how your classification fits in with that.

While I was thinking about your post initially, I envisioned a 2d graph, with "probability" on one axis and "(dis)utility" on the other. I was toying with formalizations of your concepts as linked blobs of area at various locations on that graph, and my visualizations (of all-in vs lottery) were quite different. So, if I raise that particular point again, it probably will be in terms of that picture.
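
For what it's worth, a toy rendering of that picture (my own, with invented numbers; no actual graph was posted in this thread) might look like this, with marker area standing in for the "blobs":

```python
import matplotlib.pyplot as plt

# Invented (utility, probability) outcome pairs for the two strategies.
cases = {
    "lottery ticket": [(-1, 0.999), (1000, 0.001)],
    "all in": [(-100, 0.5), (120, 0.5)],
}
for label, outcomes in cases.items():
    utilities = [u for u, _ in outcomes]
    probabilities = [p for _, p in outcomes]
    plt.scatter(utilities, probabilities,
                s=[2000 * p for p in probabilities],  # area tracks probability
                alpha=0.5, label=label)
plt.xlabel("(dis)utility")
plt.ylabel("probability")
plt.legend()
plt.show()
```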

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-04-21T11:27:30.357Z · LW(p) · GW(p)

Putting a lot of work into a career like acting, where there's a low chance of a very high reward, strikes me as an "all in" strategy.

comment by BrandonReinhart · 2009-10-14T08:17:44.883Z · LW(p) · GW(p)

Other than cryonics (I'm already a member of Alcor), what are some accessible decisions that act as a lottery ticket, enabling high-payoff -- if unlikely -- future outcomes?

"Accessible," to me, does not include abandoning my career to directly meddle -- yet.

I suppose donating to SIAI might be a lottery ticket, but I'm not entirely convinced that it is such. I honestly have no idea what the SIAI does in their day-to-day business, and the material I can find on them doesn't provide much information. I also have no idea how credible the SIAI is among those who might be in a position to turn disasters off, so it's hard to determine how to value an SIAI donation compared to alternatives.

Supporting SENS could be a lottery ticket, although to some extent the same concerns with the SIAI apply to SENS -- I don't have enough information to evaluate it compared to alternatives.

Supporting existential risk research in some way seems like a good approach to turning disasters off, since this growing branch of research appears to be creating a solid basis for future risk mitigation methods. I might investigate that further.

I'm sure there are many options I don't know that I don't know.

A neat thing about cryonics is that the disaster (my death) can come to pass, but even after that point I still have a chance to survive. Should I look for things to invest in that share that insurance-like dynamic? It seems powerful. Is insurance against death a more effective investment than trying to resolve the causes of death? I suppose this depends on the amount of knowledge the civilization has at the time you go to make the bet.

Replies from: Alicorn
comment by Alicorn · 2009-10-14T12:11:50.071Z · LW(p) · GW(p)

Most of the really good "lottery ticket" examples are things like founding a startup in the hopes of becoming a millionaire, becoming a drug dealer in hopes of becoming a kingpin, informing a crush of their status as such in hopes of getting to be with them, and anything else on which subject you can imagine some Chicken Soup for the Soul person saying "you miss 100% of the shots you don't take".

comment by Wei Dai (Wei_Dai) · 2009-10-14T10:56:42.286Z · LW(p) · GW(p)

Ok, we know that we can't just maximize expected utility, but the four strategies you give seem pretty arbitrary and unlikely to be even close to optimal. Why did you propose them?

Let me suggest another strategy that I think might make more sense. Start by considering what distributions of outcomes are feasible (intuitively). Then, among the set of seemingly feasible distributions, decide which one you most prefer, and try to work out a plan that results in that distribution. If it turns out (while trying to work out the plan) that you were wrong about its feasibility, then adjust your intuition, and reselect the most preferred feasible distribution of outcomes. Repeat this process until you end up with a plan.

This way, you get a plan that at least somewhat approximates optimality, given computational constraints and the fact that you don't know how to express your values as a utility function.
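
One possible pseudocode reading of that loop (a sketch of my own; the function names are hypothetical stand-ins for human judgment, not anything specified in the comment):

```python
# Iterate between intuitive feasibility judgments and concrete planning
# until they agree, per the strategy described above.

def choose_plan(feasible_distributions, prefer, try_to_plan):
    """`feasible_distributions`: outcome distributions you intuit are
    feasible; `prefer`: scores a distribution by how much you want it;
    `try_to_plan`: attempts a concrete plan achieving one, returning
    None on failure."""
    candidates = list(feasible_distributions)
    while candidates:
        best = max(candidates, key=prefer)
        plan = try_to_plan(best)
        if plan is not None:
            return plan           # the plan worked out: done
        candidates.remove(best)   # feasibility intuition was wrong; adjust
    return None                   # nothing was feasible; rethink the options
```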

Replies from: thomblake, PhilGoetz
comment by thomblake · 2009-10-14T12:49:27.492Z · LW(p) · GW(p)

I'm not sure I know how to consider distributions of outcomes.

comment by PhilGoetz · 2009-10-14T14:58:47.422Z · LW(p) · GW(p)

That's more rational (and more difficult), but still only about halfway to expectation maximization.

comment by Tyrrell_McAllister · 2009-10-14T03:05:38.940Z · LW(p) · GW(p)

Only 20 geek points? Who do you think you are?

ETA: It was just a joke, but one which anyone who earned the geek points should get.

Replies from: pjeby
comment by pjeby · 2009-10-14T03:08:12.880Z · LW(p) · GW(p)

Only 20 geek points? Who do you think you are?

Where are you going... with this? Do you have anything worth, uh, listening for? ;-)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-14T03:12:58.097Z · LW(p) · GW(p)

I hear there's one that went around continually asking "What time is it?"

Replies from: ygert, pjeby
comment by ygert · 2013-12-23T14:17:39.308Z · LW(p) · GW(p)

Wow. That is one hell of an obscure reference. The number of people in the world who would get it is probably in the triple digits.

Going by the scale Alicorn was using for geek points, if getting that Babylon 5 reference gets you 20 geek points, getting this reference should probably give you on the order of 200 000 geek points.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-12-24T20:02:05.196Z · LW(p) · GW(p)

...to Mr. Boffo? What were you thinking of?

Replies from: ygert
comment by ygert · 2013-12-26T11:56:18.069Z · LW(p) · GW(p)

Huh. Maybe it wasn't a reference to what I thought it was. Let's just say that a while ago I had the rather annoying habit of answering people who asked the time by repeating their question back to them. I assumed that whoever this was drew from the same source, although I now realize I may have been mistaken. (It really is that obscure...)

The thing I was thinking of was this really obscure RPG from more than a decade ago called Continuum (TvTropes page, Wikipedia page, official (semi-abandoned) website) in which time travellers identify one another by one asking the other for the time, and then the other repeating the question right back. Thus, time-travellers can identify one another, while at worst confusing normal people with strange demands for the time or weird non-answers to the question of what the time is.

So, the obvious thing for a fan to do in order to try to identify nearby time-travelers is to go around asking a lot of people what the time is or answering such questions with the time-traveller recognized response.

As I said, very obscure.

comment by pjeby · 2009-10-14T03:16:44.862Z · LW(p) · GW(p)

I hear there's one that went around continually asking "What time is it?"

Teatime, of course. Aren't the three Adamsian questions "How can we eat?", "Why do we eat?", and then "Where shall we go for a nice lunch?"

comment by Blueberry · 2010-07-20T21:39:38.854Z · LW(p) · GW(p)

This post was very confusing to me. What is the Shadow Question? It was never explained in the post, and it's somewhat hard to understand without knowing. Like wware, I kept thinking "Who knows what evil lurks within the hearts of men?"

Replies from: Alicorn
comment by Alicorn · 2010-07-20T21:42:10.469Z · LW(p) · GW(p)

The Shadow Question is "What do you want?"

Replies from: Blueberry, Richard_Kennaway
comment by Blueberry · 2010-07-20T21:59:44.782Z · LW(p) · GW(p)

Could this be added to the article? It would make it much clearer.

Replies from: Alicorn
comment by Alicorn · 2010-07-20T22:14:17.109Z · LW(p) · GW(p)

But then future readers would have no opportunity to win geek points.

Replies from: steven0461, Blueberry
comment by steven0461 · 2010-07-20T22:58:59.742Z · LW(p) · GW(p)

A lot more readers will care about clarity than will care about geek points.

Replies from: Alicorn
comment by Alicorn · 2010-07-20T23:01:48.474Z · LW(p) · GW(p)

Fine, I'll put in a footnote :(

comment by Blueberry · 2010-07-20T23:46:49.341Z · LW(p) · GW(p)

They still would, if you took out the explanation of where the question came from (which I would never have known). I'd suggest putting the question itself in the main body of the article, but taking out the source of the question; that way, people could still have the chance to win points.

comment by Richard_Kennaway · 2010-07-20T22:46:21.187Z · LW(p) · GW(p)

And the reason that it is The Shadow Question is because it is a reference to Babylon 5.

comment by casebash · 2015-05-27T11:53:03.657Z · LW(p) · GW(p)

The other approach is to identify a good next step and to go with that. For example, if you're trying to improve your social skills, you may join meetup.com and go to a few events. Although this is unlikely to solve your problem, it'll probably give you more information about the nature of the problem.

comment by Mass_Driver · 2010-07-20T21:22:51.837Z · LW(p) · GW(p)

The Everglades aren't close enough to my sandwich for me to think they're worth explicitly acting to protect, even now that Everglades-destruction has occurred to me as an undesirable potential side effect.

As someone who grew up in Florida, I politely request that you not eat PB&J sandwiches made with jelly made with sugar grown in Florida by conventional means -- the pesticide runoff tends to ruin the Everglades. If it's just PB, the sugar's probably not a big deal, although some brands add a lot of sugar.

comment by kpreid · 2009-10-14T02:21:34.181Z · LW(p) · GW(p)

I wish that the third and fourth approaches had more “everyday” examples like the first two do.

Replies from: RobinZ, Vladimir_Golovin
comment by RobinZ · 2009-10-14T03:20:11.757Z · LW(p) · GW(p)

Let me see if I understand the original post:

Lottery Ticket: examples include ... buying lottery tickets, flirting with a stranger, investing in an adventurous startup.

Turn Disasters Off: examples include ... wearing your seatbelt, buying insurance, taking a taxi home when you've been drinking.

Replies from: Alicorn
comment by Alicorn · 2009-10-14T12:13:56.306Z · LW(p) · GW(p)

Yes, those are good examples, thanks :)

comment by Vladimir_Golovin · 2009-10-14T14:27:19.622Z · LW(p) · GW(p)

Some examples of turning (everyday computer-related) disasters off:

  • Setting up a 24/7 automatic off-site backup for your machine
  • Working under a non-admin account to prevent malware infections
  • Choosing website passwords carefully

comment by handoflixue · 2011-05-09T21:22:36.932Z · LW(p) · GW(p)

This post could use an edit to include a link to the third part, especially as the series doesn't seem to have an easily Google'd title for "Foo, Part 3" searches :)

Replies from: Alicorn
comment by Alicorn · 2011-05-09T23:32:14.015Z · LW(p) · GW(p)

I never actually finished this one, sorry.

Replies from: handoflixue, Decius
comment by handoflixue · 2011-05-10T00:07:36.465Z · LW(p) · GW(p)

I feel significantly better about my failures to find it via Google, at least :)

comment by Decius · 2013-12-23T05:45:05.263Z · LW(p) · GW(p)

Suggest editing the post to reflect that, if possible.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-14T02:14:34.665Z · LW(p) · GW(p)

I did have to read (only) the first sentence before I got it. Do I get 15 geek points or 20?

Replies from: Alicorn, CronoDAS, pjeby
comment by Alicorn · 2009-10-14T12:15:25.719Z · LW(p) · GW(p)

By "the first sentence" do you mean "This is part 2 of a sequence on problem solving", or "You have to be careful what you wish for"?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-14T15:47:40.313Z · LW(p) · GW(p)

The latter, of course.

comment by CronoDAS · 2009-10-14T05:23:36.395Z · LW(p) · GW(p)

I got the reference.

And I've never actually watched more than a couple of episodes of the show...

Replies from: wware
comment by wware · 2009-10-17T03:30:46.048Z · LW(p) · GW(p)

Ouch, I got it wrong. I thought it was talking about the radio program from my father's childhood. The tagline I had in mind was, "Who knows what evil lurks in the hearts of men? The Shadow knows." Yikes, dating myself.

The silly examples with the library book reminded me of the idea that if you're sitting on a local maximum of the fitness function, any direction you go is down. I think that's why these shadow questions are hard: they are asking you to change your status quo, which almost certainly means coming down (at least temporarily) from a local maximum. I suppose that's why smart people can sometimes seem so over-analytical about big changes. They're smart enough to already be sitting on a pretty good local maximum, and smart enough to recognize that any tradeoffs involved may be complicated.

comment by pjeby · 2009-10-14T03:04:32.688Z · LW(p) · GW(p)

I got it from the title alone, but skimmed right past the part of the introduction where points were being offered. Guess that means I'm still too unlucky minded!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-10-14T03:11:53.861Z · LW(p) · GW(p)

Rot13 that sort of thing so it doesn't show up in the comments bar. It's a spoiler.