[SEQ RERUN] Scope Insensitivity

post by Tyrrell_McAllister · 2011-06-11T01:08:46.996Z · LW · GW · Legacy · 12 comments

Today's post, Scope Insensitivity, was originally published on 14 May 2007. A summary (taken from the LW wiki):

The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
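To make the size of that gap concrete, here is a minimal sketch (mine, not the post's; the $80 baseline payment and the logarithmic response are illustrative assumptions) comparing how willingness-to-pay would scale if value aggregated linearly with how it scales under a scope-insensitive, logarithmic model:

    import math

    BASELINE_BIRDS = 2_000
    BASELINE_WTP = 80.0  # illustrative dollars offered to save the baseline 2,000 birds

    def linear_wtp(birds):
        # If value aggregated linearly, 100x the birds would mean 100x the payment.
        return BASELINE_WTP * birds / BASELINE_BIRDS

    def log_wtp(birds):
        # Under a logarithmic response, 100x the birds barely moves the payment.
        return BASELINE_WTP * math.log10(birds) / math.log10(BASELINE_BIRDS)

    for birds in (2_000, 20_000, 200_000):
        print(f"{birds:>7} birds: linear ${linear_wtp(birds):8.2f}, log model ${log_wtp(birds):6.2f}")

Under the log model, a hundredfold increase in scope raises the payment by only about 60%, which is the kind of flat response the post describes; the linear column grows by the full factor of 100.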

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Third Alternatives for Afterlife-ism, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

12 comments

Comments sorted by top scores.

comment by David_Gerard · 2011-06-11T07:54:45.846Z · LW(p) · GW(p)

Wei Dai responded in 2009 with Boredom vs. Scope Insensitivity.

How much would you pay to see a typical movie? How much would you pay to see it 100 times?

How much would you pay to save a random stranger’s life? How much would you pay to save 100 strangers?

If you are like a typical human being, your answers to both sets of questions probably exhibit failures to aggregate value linearly. In the first case, we call it boredom. In the second case, we call it scope insensitivity.

Replies from: XiXiDu
comment by XiXiDu · 2011-06-11T09:35:34.417Z · LW(p) · GW(p)

Do we value saving lives independently of the good feelings we get from it?

I recently thought about something similar. I am really keen to learn and understand everything about how the universe works. But pursuing that might be dangerous compared to faking those feelings by immersing myself in a virtual reality where I feel like I am constantly discovering new and deep truths about reality. That way I could possibly experience a billion times as much pleasure, excitement, and every other reaction my body is capable of, compared to discovering the actual truth just once, for real. Nevertheless, I would always choose the real thing over the simulation. Is that irrational?

Everything we want and do, we do because it satisfies our body (brain (us)): it makes us feel good in various ways, and bad if we don't do it. So can I objectively justify assigning more weight (utility) to a goal that produces far less of that bodily payoff?

I don't know exactly where I am confused. I suppose I am asking for some objective grounding of the notion of utility, and for an argument that maximizing it, regardless of how it is caused, wouldn't be the rational choice. Otherwise, as David Hume once wrote, "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger."

comment by Liron · 2011-06-11T17:17:54.562Z · LW(p) · GW(p)

My personal investment thesis, formed after reading The Black Swan and The Big Short, is designed to take advantage of scope insensitivity.

The thesis is: Invest in a 1-10% chance of a 100x-1000x return.

The reasoning is that other investors act the same when the chance of something is anywhere from 0.5% to 10%, and they act the same when the potential value of something is anywhere from 10x to 1000x, so I'm buying underpriced investments.

I currently know of two investments that follow my thesis:

  1. Bitcoin
  2. Yuri Milner and Ron Conway's offer of $150k in convertible debt to all Y Combinator startups.
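A rough expected-value sketch of the thesis (my illustration, not Liron's; the probability and payoff grids simply span the ranges named above):

    # Expected return per unit staked for a bet that pays `m` times the stake with
    # probability `p` and zero otherwise. If investors price a 0.5% chance like a
    # 10% chance, and a 10x payoff like a 1000x payoff, every cell gets a similar
    # price even though expected returns span four orders of magnitude.
    probabilities = [0.005, 0.01, 0.05, 0.10]
    multiples = [10, 100, 1000]

    for p in probabilities:
        row = "  ".join(f"{p * m:8.2f}" for m in multiples)
        print(f"p = {p:.3f}: {row}")
    # A 1% chance of 100x breaks even in expectation (EV = 1.0 per unit staked);
    # a 10% chance of 1000x returns 100x the stake in expectation.

If buyers really do price the whole grid similarly, the high-probability, high-multiple corner is the underpriced one.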
comment by Will_Newsome · 2011-06-11T09:30:27.372Z · LW(p) · GW(p)

I think the most popular form of scope insensitivity 'round these parts might be failing to remember that an existential catastrophe would be roughly infinitely worse than merely losing the paltry sum of seven billion people: we'd also lose access to an entire universe's worth of resources.

I want a mantra that is in the spirit of "Reasoning correctly is the most important thing in the universe" but with more of a poetic feel. The attitude it characterizes is one I really respect and would like to make part of myself. A lot of SingInst-related folk seem to me to have this virtue (e.g. Anna Salamon, Carl Shulman, and Steve Rayhawk, though each of them in different ways). Vladimir Nesov is a non-SingInst example. Anyone have any ideas about how to mantra-ize it?

Replies from: CarlShulman, timtyler
comment by CarlShulman · 2011-06-13T16:07:37.607Z · LW(p) · GW(p)

failing to remember that an existential catastrophe would be roughly infinitely worse than just losing the paltry sum of seven billion people

That seems like it might be true for someone fanatically committed to an unbounded aggregative social welfare function combined with a lot of adjustments to deal with infinities, etc. Given any moral uncertainty, mixed motivations, etc. (with an aggregation rule that doesn't automatically hand the decision to the internal component that names the biggest number), the claim doesn't go through. Also, to most people it reads as an annoying assertion of the supremacy of one's nominal values (as sometimes verbally expressed, not revealed preference).

Replies from: steven0461, Will_Newsome, Will_Newsome
comment by steven0461 · 2011-06-14T01:32:16.924Z · LW(p) · GW(p)

Given any moral uncertainty, mixed motivations, etc (with an aggregation rule that doesn't automatically hand the decision to the internal component that names the biggest number) the claim doesn't go through.

This isn't clear to me, especially given that Will only said "roughly infinite".

An aggregation rule that says "follow the prescription of any moral hypothesis to which you assign at least 80% probability" might well make Will's claim go through, and yet does not "automatically hand the decision to the internal component that names the biggest number" as I understand that phrase; after all, the hypothesis won out by being 80% probable and not by naming the biggest number. Some other hypothesis could have won out by naming a smaller number (than the numbers that turn up in discussions of astronomical waste), if it had seemed true.
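As a purely illustrative sketch of that kind of rule (the threshold, hypotheses, and fallback below are assumptions, not anything specified in the thread):

    def aggregate(hypotheses, threshold=0.8):
        # Each hypothesis is (probability, recommended_action, claimed_value).
        # Follow any hypothesis that clears the probability threshold; its claimed
        # value never enters the decision, so nothing wins just by naming the
        # biggest number. Otherwise fall back to probability-weighted value
        # (one of many possible fallbacks).
        for prob, action, value in hypotheses:
            if prob >= threshold:
                return action
        return max(hypotheses, key=lambda h: h[0] * h[2])[1]

    hypotheses = [
        (0.85, "avert existential catastrophe", 10**30),  # wins by being 85% probable
        (0.15, "scratch my finger", 1),
    ]
    print(aggregate(hypotheses))  # -> "avert existential catastrophe"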

I don't actually endorse that particular aggregation rule, but for me to be convinced that all plausible candidates avoid Will's conclusion that the relevant value here is "roughly infinite" (or the much weaker conclusion that LW is irrationally scope-insensitive here) would require some further argument.

comment by Will_Newsome · 2011-06-14T01:14:53.821Z · LW(p) · GW(p)

(Also, your comment mixes mentions of predictions about future policies, predictions about predictions about future policies, and future policies themselves in a way that makes it near-impossible to evaluate your explicit message (if someone wanted to do that instead of ignoring it because its foundations are weak). I point this out because if you want to imply that someone, or some coalition of parts of someone (some group), is "fanatically committed" to an apparently indefensible position, you should be extra careful to make sure your explicit argument is particularly strong, or at least coherent. You also might not want to imply that one is implicitly assuming obviously absurd things (e.g. "with an aggregation rule that doesn't automatically hand the decision to the internal component that names the biggest number") even if you insist on implying that they are implicitly assuming non-obviously absurd things.)

comment by Will_Newsome · 2011-06-14T00:17:24.438Z · LW(p) · GW(p)

That seems like it might be true for someone fanatically committed to an unbounded aggregative social welfare function combined with a lot of adjustments to deal with infinities, etc.

One does not have to be "fanatically committed" to some ad hoc conjunction of abstract moral positions in order to justifiably make the antiprediction that humanity or its descendants might (that is, with high enough probability, given moral uncertainty, that it still dominates the calculation) have some use for all those shiny lights in the sky (and especially the black holes). It seems to me that your points about mixed motivations etc. count in favor of this antiprediction under many plausible aggregation rules. Sure, most parts of me/humanity might not have any ambitions or drives that require lots of resources to fulfill, but I know for certain that some parts of me/humanity at least nominally do, and at least partially act/think accordingly. If those parts end up being acknowledged in moral calculations, then a probable default outcome is for those parts to take over (at least) the known universe while the other parts stay at home and enjoy themselves. For this not to happen would probably require (again, depending on the aggregation rule) that the other parts of humanity actively value, to a significant extent, not using those resources (again, to a significant extent). Given moral uncertainty, I am working under the provisional assumption that some non-negligible fraction of the resources in the known universe is going to matter enough, to at least some parts of something, that guaranteeing future access to them should be a very non-negligible goal for one of the few godshatter-coalitions able to recognize their potential importance (i.e. its comparative advantage).

(Some of a lot of other reasoning I didn't mention involves a prediction that a singleton won't be eternally confined to an aggregation rule that is blatantly stupid in a few of the many, many ways an aggregation rule can be blatantly stupid, as I judge them (or at least won't be eternally confined in a huge majority of possible futures). (E.g., CEV has the annoyingly vague necessity of coherence, among other things, which could easily be called blatantly stupid upon implementation.))

Also, it's an annoying assertion of the supremacy of one's nominal values (as sometimes verbally expressed, not revealed preference) to most people.

It's meant as a guess at a potentially oft-forgotten single-step implication of standard Less Wrong (epistemic and moral) beliefs, not any kind of assertion of supremacy. I have seen enough of the subtleties, complexities, and depth of morality to know that we are much too confused for anyone (any part of one) to be asserting such supremacy.

comment by timtyler · 2011-06-11T13:15:53.028Z · LW(p) · GW(p)

I think the most popular form of scope insensitivity 'round these parts might be failing to remember that an existential catastrophe would be roughly infinitely worse than just losing the paltry sum of seven billion people.

That is not a fact to be remembered. It is surely highly value-dependent. I don't assign value to an existential catastrophe that way at all.