Late Great Filter Is Not Bad News
post by Wei Dai (Wei_Dai) · 2010-04-04T04:17:39.243Z · LW · GW · Legacy · 82 comments
But I hope that our Mars probes will discover nothing. It would be good news if we find Mars to be completely sterile. Dead rocks and lifeless sands would lift my spirit.
Conversely, if we discovered traces of some simple extinct life form—some bacteria, some algae—it would be bad news. If we found fossils of something more advanced, perhaps something looking like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life we found, the more depressing the news of its existence would be. Scientifically interesting, certainly, but a bad omen for the future of the human race.
— Nick Bostrom, in Where Are They? Why I hope that the search for extraterrestrial life finds nothing
This post is a reply to Robin Hanson's recent OB post Very Bad News, as well as Nick Bostrom's 2008 paper quoted above, and assumes familiarity with Robin's Great Filter idea. (Robin's server for the Great Filter paper seems to be experiencing some kind of error. See here for a mirror.)
Suppose Omega appears and says to you:
(Scenario 1) I'm going to apply a great filter to humanity. You get to choose whether the filter is applied one minute from now, or in five years. When the designated time arrives, I'll toss a fair coin, and wipe out humanity if it lands heads. And oh, it's not the current you that gets to decide, but the version of you 4 years and 364 days from now. I'll predict his or her decision and act accordingly.
I hope it's not controversial that the current you should prefer a late filter, since (with probability .5) that gives you and everyone else five more years of life. What about the future version of you? Well, if he or she decides on the early filter, that would constitute a time inconsistency. And for those who believe in multiverse/many-worlds theories, choosing the early filter shortens the lives of everyone in half of all universes/branches where a copy of you is making this decision, which doesn't seem like a good thing. It seems clear that, ignoring human deviations from ideal rationality, the right decision for the future you is to choose the late filter.
Now let's change this thought experiment a little. Omega appears and instead says:
(Scenario 2) Here's a button. A million years ago I hid a doomsday device in the solar system and predicted whether you would press this button or not. Then I flipped a coin. If the coin came out tails, I did nothing. Otherwise, if I predicted that you would press the button, then I programmed the device to destroy Earth right after you press the button, but if I predicted that you would not press the button, then I programmed the device to destroy the Earth immediately (i.e., a million years ago).
It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button). (I'm assuming that you don't consider the entire history of humanity up to this point to be of negative value, which seems a safe assumption, at least if the "you" here is Robin Hanson.)
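To make the comparison concrete, here is a minimal sketch of the expected-utility calculation in Scenario 2, assuming a perfect predictor; the utility numbers H and F are purely illustrative assumptions, not figures from the post.

# Illustrative expected utilities for Scenario 2 (made-up numbers).
# Baseline 0 = humanity wiped out a million years ago (no history at all).
H = 1.0   # assumed positive value of humanity's history up to now
F = 10.0  # assumed value of humanity's future if the device never fires

def expected_utility(press_button: bool) -> float:
    """Average over Omega's fair coin, given a perfect prediction of the choice."""
    # Heads: pressing means destruction right after the press (history preserved);
    # not pressing means destruction a million years ago (no history at all).
    heads_outcome = H if press_button else 0.0
    # Tails: Omega did nothing, so humanity keeps both its history and its future.
    tails_outcome = H + F
    return 0.5 * heads_outcome + 0.5 * tails_outcome

print(expected_utility(True), expected_utility(False))  # 6.0 vs. 5.5

Pressing comes out ahead by 0.5 * H, i.e. by half the value of humanity's history so far, so it is the better choice whenever that history has positive value; relabeling "press" as "choose the late filter" gives the same comparison for Scenario 1.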
So, if given a choice between an early filter and a late filter, we should choose a late filter. But then why do Robin and Nick (and probably most others who have thought about it) consider news implying a greater likelihood of the Great Filter being late to be bad news? It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.
(This paragraph was inserted in response to a couple of comments, to clarify: these two scenarios involving Omega are not meant to correspond to any actual decisions we have to make, but just to establish that A) if we had a choice, it would be rational to choose a late filter instead of an early filter, and therefore it makes no sense to consider the Great Filter being late to be bad news (compared to it being early), and B) human beings, working off subjective anticipation, would tend to incorrectly choose the early filter in these scenarios, especially Scenario 2, which explains why we also tend to consider the Great Filter being late to be bad news. The decision mentioned below, in the last paragraph, is not directly related to these Omega scenarios.)
From an objective perspective, a universe with a late great filter simply has a somewhat greater density of life than a universe with an early great filter. UDT says, let's forget about SSA/SIA-style anthropic reasoning and subjective anticipation, and instead consider yourself to be acting in all of the universes that contain a copy of you (with the same preferences, memories, and sensory inputs), making the decision for all of them, and decide based on how you want the multiverse as a whole to turn out.
So, according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters. If, as Robin Hanson suggests, we were to devote a lot of resources to projects aimed at preventing possible late filters, then we would end up improving the universes with late filters, but hurting the universes with only early filters (because the resources would otherwise have been used for something else). But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.
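As a toy illustration of that last step, here is a sketch with made-up weights (every number is an assumption invented for the example): copies of us are taken to be ten times as common in late-filter universes, and mitigation projects are assumed to help only there while their opportunity cost is paid everywhere.

# Toy UDT-style bookkeeping with assumed numbers.
copies = {"late_filter": 10.0, "early_filter": 1.0}  # assumed relative frequency of our copies
cost = 1.0     # assumed opportunity cost of mitigation, paid in every universe
benefit = 3.0  # assumed benefit of mitigation, realized only where a late filter exists

def weighted_value(spend_on_mitigation: bool) -> float:
    """Sum utility changes across universe types, weighted by how often we occur in them."""
    total = 0.0
    for kind, weight in copies.items():
        delta = 0.0
        if spend_on_mitigation:
            delta -= cost
            if kind == "late_filter":
                delta += benefit
        total += weight * delta
    return total

print(weighted_value(True), weighted_value(False))  # 19.0 vs. 0.0

With these assumed weights, mitigation wins (10*(3-1) + 1*(-1) = 19 > 0), which is the utilitarian justification described above; with different weights or costs the sign could of course flip.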
82 comments
Comments sorted by top scores.
comment by Nick_Tarleton · 2010-04-04T16:26:16.189Z · LW(p) · GW(p)
It seems to me that viewing a late Great Filter as worse news than an early Great Filter is another instance of the confusion and irrationality of SSA/SIA-style anthropic reasoning and subjective anticipation. If you anticipate anything, believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom.
Let's take this further: is there any reason, besides our obsession with subjective anticipation, to discuss whether a late great filter is 'good' or 'bad' news, over and above policy implications? Why would an idealized agent evaluate the utility of counterfactuals it knows it can't realize?
↑ comment by Wei Dai (Wei_Dai) · 2010-04-05T06:40:10.092Z · LW(p) · GW(p)
That is a good question, and one that I should have asked and tried to answer before I wrote this post. Why do we divide possible news into "good" and "bad", and "hope" for good news? Does that serve some useful cognitive function, and if so, how?
Without having good answers to these questions, my claim that a late great filter should not be considered bad news may just reflect confusion about the purpose of calling something "bad news".
↑ comment by cousin_it · 2011-12-21T18:51:26.777Z · LW(p) · GW(p)
About the cognitive function of "hope": it makes evolutionary sense to become all active and bothered when a big pile of utility hinges on a single uncertain event in the near future, because that makes you frantically try to influence that event. If you don't know how to influence it (as in the case of a lottery), oh well, evolution doesn't care.
↑ comment by TheOtherDave · 2011-12-21T19:16:55.222Z · LW(p) · GW(p)
Evolution might care. That is, systems that expend a lot of attention on systems they can't influence might do worse than systems that instead focus their attention on systems they can influence. But yes, either there weren't any of the second kind of system around to compete with our ancestors, or there were and they lost out for some other reason, or there were and it turns out that it's a bad design for our ancestral environment.
↑ comment by Roko · 2010-04-04T20:25:39.636Z · LW(p) · GW(p)
is there any reason, besides our obsession with subjective anticipation, to discuss whether a late great filter is 'good' or 'bad' news,
no, I don't think so. But if you strip subjective anticipation off a human, you might lose a lot of our preferences. We are who we are, so we care about subjective anticipation.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-05T00:03:00.941Z · LW(p) · GW(p)
We are who we are, but who we are is not fixed. What we care about seems to depend on what arguments we listen to or think up, and in what order. (See my Shut Up and Divide post for an example of this.)
While an ideally rational agent (according to our current best conception of ideal rationality) would seek to preserve its values regardless of what they are, some humans (including me, for example) actively seek out arguments that might change what they care about. Such "value-seeking" behavior doesn't seem irrational to me, even though I don't know how to account for it in terms of rationality.
And while it seems impossible for a human to completely give up subjective anticipation, it does seem possible to care less about it.
↑ comment by JGWeissman · 2010-04-05T00:11:32.018Z · LW(p) · GW(p)
Such "value-seeking" behavior doesn't seem irrational to me, even though I don't know how to account for it in terms of rationality.
I would say it is part of checking for reflective consistency. Ideally, there shouldn't be arguments that change your (terminal) values, so if there are, you want to know about them so you can figure out what is wrong and how to fix it.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-05T00:31:00.305Z · LW(p) · GW(p)
I don't think that explanation makes sense. Suppose an AI thinks it might have a security hole in its network stack, so that if someone sends it a certain packet, it would become that person's slave. It would try to fix that security hole, without actually seeking to have such a packet sent to itself.
We humans know that there are arguments out there that can change our values, but instead of hardening our minds against them, some of us actually try to have such arguments sent to us.
↑ comment by Amanojack · 2010-04-05T23:47:21.653Z · LW(p) · GW(p)
We humans know that there are arguments out there that can change our values, but instead of hardening our minds against them, some of us actually try to have such arguments sent to us.
In the deontological view of values this is puzzling, but in the consequentialist view it isn't: we welcome arguments that can change our instrumental values, but not our terminal values (A.K.A. happiness/pleasure/eudaimonia/etc.). In fact I contend that it doesn't even make sense to talk about changing our terminal values.
↑ comment by Roko · 2010-04-05T23:26:49.779Z · LW(p) · GW(p)
It is indeed a puzzling phenomenon.
My explanation is that the human mind is something like a coalition of different sub-agents, many of which are more like animals or insects than rational agents. In any given context, they will pull the overall strategy in different directions. The overall result is an agent with context dependent preferences, i.e. irrational behavior. Many people just live with this.
Some people, however, try to develop a "life philosophy" that shapes the disparate urges of the different mental subcomponents into an overall strategy that reflects a consistent overall policy.
A moral "argument" might be a hypothetical that attempts to put your mind into a new configuration of relative power of subagents, so that you can re-assess the overall deal.
↑ comment by pjeby · 2010-04-06T01:38:36.647Z · LW(p) · GW(p)
My explanation is that the human mind is something like a coalition of different sub-agents, many of which are more like animals or insects than rational agents. In any given context, they will pull the overall strategy in different directions. The overall result is an agent with context dependent preferences, i.e. irrational behavior.
Congratulations, you just reinvented [a portion of] PCT. ;-)
[Clarification: PCT models the mind as a massive array of simple control circuits that act to correct errors in isolated perceptions, with consciousness acting as a conflict-resolver to manage things when two controllers send conflicting commands to the same sub-controller. At a fairly high level, a controller might be responsible for a complex value: like correcting hits to self-esteem, or compensating for failings in one's aesthetic appreciation of one's work. Such high-level controllers would thus appear somewhat anthropomorphically agent-like, despite simply being something that detects a discrepancy between a target and an actual value, and sets subgoals in an attempt to rectify the detected discrepancy. Anything that we consider of value potentially has an independent "agent" (simple controller) responsible for it in this way, but the hierarchy of control does not necessarily correspond to how we would abstractly prefer to rank our values -- which is where the potential for irrationality and other failings lies.]
comment by RobinHanson · 2010-04-04T13:42:29.502Z · LW(p) · GW(p)
Yes there are different ways to conceive what news is good or bad, and yes it is good from a God-view if filters are late. But to those of us who already knew that we exist now, and have passed all previous filters, but don't know how many others out there are at a similar stage, the news that the biggest filters lie ahead of us is surely discouraging, if useful.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T23:00:15.695Z · LW(p) · GW(p)
Yes there are different ways to conceive what news is good or bad
My post tried to argue that some of them are better than others.
the news that the biggest filters lie ahead of us is surely discouraging, if useful.
But is such discouragement rational? If not, perhaps we should try to fight against it. It seems to me that we would be less discouraged if we considered our situation and decisions from what you call the God-view.
↑ comment by RobinHanson · 2010-04-05T14:45:08.406Z · LW(p) · GW(p)
Call me stuck in ordinary decision theory, with less than universal values. I mostly seek info that will help me now make choices to assist myself and my descendants, given what I already know about us. The fact that I might have wanted to commit before the universe began to rating universes as better if they had later filters is not very relevant for what info I now will call "bad news."
↑ comment by Wei Dai (Wei_Dai) · 2010-04-06T00:29:45.291Z · LW(p) · GW(p)
After I wrote my previous reply to you, I realized that I don't really know why we call anything good news or bad news, so I may very well be wrong when I claimed that late great filter is not bad news, or that it's more rational to not call it bad news.
That aside, ordinary decision theory has obvious flaws. (See here for the latest example.) Given human limitations, we may not be able to get ourselves unstuck, but we should at least recognize that it's less than ideal to be stuck this way.
↑ comment by Vladimir_Nesov · 2010-04-05T14:57:22.476Z · LW(p) · GW(p)
I mostly seek info that will help me now make choices to assist myself and my descendants, given what I already know about us.
The goal of UDT-style decision theories is making optimal decisions at any time, without needing to precommit in advance. Looking at the situation as if from before the beginning of time is argued to be the correct perspective from any location and state of knowledge, no matter what your values are.
↑ comment by timtyler · 2010-04-04T14:21:06.649Z · LW(p) · GW(p)
It might be interesting - if it was true.
I don't think anyone has so far attempted to make the case that the "biggest" filters lie in the future.
↑ comment by gwern · 2010-04-04T21:05:00.259Z · LW(p) · GW(p)
I think many of the most common solutions to the Fermi Paradox are exactly that, making the case that there is a Great Filter ahead.
↑ comment by DanielVarga · 2010-04-05T23:03:33.296Z · LW(p) · GW(p)
Maybe this is a good time to mention my proposed solution to the Fermi Paradox. It does not invoke a Great Filter. The one-sentence version is that we cannot observe other civilizations if they are expanding at the speed of light.
The main idea is a 0-1 law for the expansion speed of civilizations. I argue that there is only a very short timeframe in the life of a civilization when their sphere of influence is already expanding, but not yet expanding at exactly the speed of light. If they are before this short phase transition, they can't be observed with current human technology. After the phase transition they can't be observed at all.
↑ comment by gwern · 2010-04-06T00:16:17.629Z · LW(p) · GW(p)
That's an interesting idea. But it looks to me like you still need to postulate that civilization is very rare, because we are in the light cone of an enormous area.
↑ comment by DanielVarga · 2010-04-06T10:31:52.709Z · LW(p) · GW(p)
You are absolutely correct that we need one more postulate: I postulate that expanding civilizations destroy nonexpanding ones on contact. (They turn them into negentropy-source or computronium or whatever.) This suggests that we are in a relatively small part of the space-time continuum unoccupied by expanding civilizations.
But you are right, at this point I already postulated many things, so a straight application of the Anthropic Principle would be a cleaner solution to the Fermi Paradox than my roundabout ways. Honestly, in a longer exposition, like a top-level post, I wouldn't even have introduced the idea as a solution to the Fermi Paradox. But it does show that our observations can be compatible with the existence of many civilizations.
I believe that the valuable part of my idea is not the (yet another) solution to the Paradox, but the proposed 0-1 law itself. I would be very interested in a discussion about the theoretical feasibility of light-speed expansion. More generally, I am looking for a solution to the following problem: if one optimizes a light-cone to achieve the biggest computational power possible, what will be the expansion speed of this computer? I am aware that this is not a completely specified problem, but I think it is specified well enough that we can start thinking about it.
↑ comment by gwern · 2010-04-06T12:43:29.516Z · LW(p) · GW(p)
Have you looked at Hanson's 'burning the cosmic commons' paper?
↑ comment by DanielVarga · 2010-04-06T20:12:57.636Z · LW(p) · GW(p)
Yes. After I figured this little theory out, I did some googling to find the first inventor. It seemed like such a logical idea, I couldn't believe that I was the first to reason like this. This googling led me to Hanson's paper. As you note, this paper has some ideas similar to mine. These ideas are very interesting on their own, but the similarity is superficial, so they do not really help answering any of my questions. This is not surprising, considering that these are physics and computer science rather than economics questions.
Later I found another, more relevant paper: Thermodynamic cost of reversible computing. Not incidentally, this was written by Tommaso Toffoli, who coined the term 'computronium'. But it still doesn't answer my questions.
↑ comment by gwern · 2010-04-07T00:17:10.776Z · LW(p) · GW(p)
This is not surprising, considering that these are physics and computer science rather than economics questions.
Hanson's paper is most useful for answering the question, 'if civilizations could expand at light-speed, would they?' There are two pieces to the puzzle: the ability to do so and the willingness to do so.
As for the ability: are you not satisfied by general considerations of von Neumann probes and starwisps? Those aren't going to get a civilization expanding at 0.9999c, say, but an average of 0.8 or 0.9 c would be enough, I'd think, for your theory.
↑ comment by timtyler · 2010-04-21T23:27:23.184Z · LW(p) · GW(p)
Those are unsupported arguments - speculation without basis in fact.
There are other satisfying resolutions to the Fermi Paradox - such as the idea that we are locally first. DOOM mongers citing Fermi for support should attempt to refute such arguments if they want to make a serious case that the Fermi paradox provides much evidence to support to their position.
Anyway, I don't see how these count as evidence that the "biggest" filters lie in the future. The Fermi paradox just tells us most planets don't make it to a galactic civilisation quickly. There's no implication about when they get stuck.
comment by FAWS · 2010-04-04T10:12:53.646Z · LW(p) · GW(p)
All else being the same we should prefer a late coin toss over an early one, but we should prefer an early coin toss that definitely came up tails over a late coin toss that might come up either way. Learning that the coin toss is still ahead is bad news in the same way as learning that the coin came up tails is good news. Bad news is not isomorphic to making a bad choice. An agent that maximizes good news behaves quite differently from a rational actor.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T12:29:37.231Z · LW(p) · GW(p)
So, according to your definition of "good news" and "bad news", it might be bad news to find that you've made a good decision, and good news to find that you've made a bad decision? Why would a rational agent want to have such a concept of good and bad news?
↑ comment by Tyrrell_McAllister · 2010-04-04T14:49:39.178Z · LW(p) · GW(p)
So, according to your definition of "good news" and "bad news", it might be bad news to find that you've made a good decision, and good news to find that you've made a bad decision? Why would a rational agent want to have such a concept of good and bad news?
If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.
Let us suppose that, when you were drunk, you were computationally limited in a way that made playing the lottery (seem to be) the best decision given your computational limitations. Now, in your sober state, you are more computationally powerful, and you can see that playing the lottery last night was a bad decision (given your current computational power but minus the knowledge that your numbers would win). Nonetheless, learning that you played and won is good news. After all, maybe you can use your winnings to become even more computationally powerful, so that you don't make such bad decisions in the future.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T23:07:57.422Z · LW(p) · GW(p)
If you wake up in the morning to learn that you got drunk last night, played the lottery, and won, then this is good news.
Why is that good news, when it also implies that in the vast majority of worlds/branches, you lost the lottery? It only makes sense if, after learning that you won, you no longer care about the other copies of you that lost, but I think that kind of mind design is simply irrational, because it leads to time inconsistency.
↑ comment by Tyrrell_McAllister · 2010-04-06T23:59:24.865Z · LW(p) · GW(p)
Why is that good news, when it also implies that in the vast majority of worlds/branches, you lost the lottery? It only makes sense if, after learning that you won, you no longer care about the other copies of you that lost, but I think that kind of mind design is simply irrational, because it leads to time inconsistency.
I want to be careful to distinguish Many-Worlds (MW) branches from theoretical possibilities (with respect to my best theory). Events in MW-branches actually happen. Theoretical possibilities, however, may not. (I say this to clarify my position, which I know differs from yours. I am not here justifying these claims.)
My thought experiment was supposed to be about theoretical possibility, not about what happens in some MW-branches but not others.
But I'll recast the situation in terms of MW-branches, because this is analogous to your scenario in your link. All of the MW-branches very probably exist, and I agree that I ought to care about them without regard to which one "I" am or will be subjectively experiencing.
So, if learning that I played and won the lottery in "my" MW-branch doesn't significantly change my expectation of the measures of MW-branches in which I play or win, then it is neither good news nor bad news.
However, as wnoise points out, some theoretical possibilities may happen in practically no MW-branches.
This brings us to theoretical possibilities. What are my expected measures of MW-branches in which I play and in which I win? If I learn news N that revises my expected measures in the right way, so that the total utility of all branches is greater, then N is good news. This is the kind of news that I was talking about, news that changes my expectations of which of the various theoretical possibilities are in fact realized.
↑ comment by Tyrrell_McAllister · 2010-04-16T20:52:29.373Z · LW(p) · GW(p)
I'm very surprised that this was downvoted. I would appreciate an explanation of the downvote.
↑ comment by prase · 2010-04-05T08:43:23.095Z · LW(p) · GW(p)
This is the point at which believing in many worlds and caring about other branches leads to a very suspicious way of perceiving reality. I know the absurdity heuristic isn't all that reliable, but still - would it make you really sad or angry or desperate if you realised that you had won a billion (in any currency) under the described circumstances? Would you really celebrate if you realised that the great filter, which wipes out a species 90% of the time, and which you previously believed we had already passed, is going to happen in the next 50 years?
I am ready to change my opinion about this style of reasoning, but probably I need some more powerful intuition pump.
↑ comment by Nick_Tarleton · 2010-04-05T17:48:09.389Z · LW(p) · GW(p)
would it make you really sad or angry or desperate if you realised that you had won a billion (in any currency) under the described circumstances? Would you really celebrate if you realised that the great filter, which wipes out a species 90% of the time, and which you previously believed we had already passed, is going to happen in the next 50 years?
Caring about other branches doesn't imply having congruent emotional reactions to beliefs about them. Emotions aren't preferences.
↑ comment by prase · 2010-04-05T18:36:03.001Z · LW(p) · GW(p)
Emotions are not preferences, but I believe they can't be completely disentangled. There is something wrong with a person who feels unhappy after learning that the world has changed towards his/her preferred state.
↑ comment by BrandonReinhart · 2010-04-07T04:06:39.161Z · LW(p) · GW(p)
I don't see how you can effectively apply social standards like "something wrong" to a mind that implements UDT. There are no human minds or non-human minds that I am aware of that perfectly implement UDT. There are no known societies of beings that do. It stands to reason that such a society would seem very other if judged by the social standards of a society composed of standard human minds.
When discussing UDT outcomes you have to work around that part of you that wants to immediately "correct" the outcome by applying non-UDT reasoning.
↑ comment by prase · 2010-04-07T12:24:50.577Z · LW(p) · GW(p)
That "something wrong" was not as much of a social standard, as rather an expression of an intuitive feeling of a contradiction, which I wasn't able to specify more explicitly. I could anticipate general objections such as yours, however, it would help if you can be more concrete here. The question is whether one can say he prefers the state of world where he dies soon with 99% probability, even if he would be in fact disappointed after realising that it was really going to happen. I think we are now at risk of redefining few words (like preference) to mean something quite different from what they used to mean, which I don't find good at all.
And by the way, why is this a question of decision theory? There is no decision in the discussed scenario, only a question whether some news can be considered good or bad.
↑ comment by cupholder · 2010-04-05T14:24:08.574Z · LW(p) · GW(p)
I am ready to change my opinion about this style of reasoning, but probably I need some more powerful intuition pump.
I don't know if this is exactly the kind of thing you're looking for, but you might like this paper arguing for why many-worlds doesn't imply quantum immortality and like-minded conclusions based on jumping between branches. (I saw someone cite this a few days ago somewhere on Less Wrong, and I'd give them props here, but can't remember who they were!)
↑ comment by Cyan · 2010-04-05T17:31:53.169Z · LW(p) · GW(p)
It was Mallah, probably.
↑ comment by cupholder · 2010-04-05T18:59:56.470Z · LW(p) · GW(p)
You're probably right - going through Mallah's comment history, I think it might have been this post of his that turned me on to his paper. Thanks Mallah!
↑ comment by RobinZ · 2010-04-05T00:28:04.199Z · LW(p) · GW(p)
It's good news because you just gained a big pile of utility last night.
Yes, learning that you're not very smart when drunk is bad news, but the money more than makes up for it.
↑ comment by wnoise · 2010-04-05T02:39:03.918Z · LW(p) · GW(p)
Wei_Dai is saying that all the other copies of you that didn't win lost more than enough utility to make up for it. This is far from a universally accepted utility measure, of course.
↑ comment by RobinZ · 2010-04-05T11:33:08.849Z · LW(p) · GW(p)
So Wei_Dai's saying the money doesn't more than make up for it? That's clever, but I'm not sure it actually works.
↑ comment by Tyrrell_McAllister · 2010-04-05T23:42:06.839Z · LW(p) · GW(p)
Had the money more than made up for it, it would have been rational from a normal expected-utility perspective to play the lottery. My scenario was assuming that, with sufficient computational power, you would know that playing the lottery wasn't rational.
↑ comment by RobinZ · 2010-04-06T00:52:16.696Z · LW(p) · GW(p)
We're not disagreeing about the value of the lottery - it was, by stipulation, a losing bet - we are disagreeing about the proper attitude towards the news of having won the lottery.
I don't think I understand the difference in opinion well enough to discover the origin of it.
↑ comment by Tyrrell_McAllister · 2010-04-06T01:36:25.940Z · LW(p) · GW(p)
I must have misunderstood you, then. I think that we agree about having a positive attitude toward having won.
↑ comment by Rain · 2010-04-04T12:48:37.416Z · LW(p) · GW(p)
In the real world, we don't get to make any decision. The filter hits us or it doesn't.
If it hits early, then we shouldn't exist (good news: we do!). If it hits late, then WE'RE ALL GOING TO DIE!
In other words, I agree that it's about subjective anticipation, but would point out that the end of the world is "bad news" even if you got to live in the first place. It's just not as bad as never having existed.
Nick is wondering whether we can stop worrying about the filter (if we're already past it). Any evidence we have that complex life develops before the filter would then cause us to believe in the late filter, leaving it still in our future, and thus still something to worry about and strive against. Not as bad as an early filter, but something far more worrisome, since it is still to come.
↑ comment by FAWS · 2010-04-04T14:16:12.484Z · LW(p) · GW(p)
Depends on what you mean by "find that you've made a good decision", but probably yes. A decision is either rational given the information you had available or it's not. Do you mean finding out you made a rational decision that you forgot about? Or making the right decision for the wrong reasons and later finding out the correct reasons? Or finding additional evidence that increases the difference in expected utility for making the choice you made?
Finding out you have a brain tumor is bad news. Visiting the doctor when you have the characteristic headache is a rational decision, and an even better decision in the third sense when you turn out to actually have a brain tumor. Finding a tumor would retroactively make a visit to the doctor a good decision in the second sense even if it originally was for irrational reasons. And in the first sense, if you somehow forgot about the whole thing in the meantime, I guess being diagnosed would remind you of the original decision.
Bad news is news that reduces your expectation of utility. Why should a rational actor lack that concept? If you don't have a concept for that you might confuse things that change expectation of utility for things that change utility and accidentally end up just maximizing the expectation of utility when you try to maximize expected utility.
↑ comment by Benquo · 2010-04-04T16:13:01.056Z · LW(p) · GW(p)
UPDATE: This comment clearly misses the point. Don't bother reading it.
Well, the worse you turn out to have done within the space of possible choices/outcomes, the more optimistic you should be about your ability to do better in the future, relative to the current trend.
For example, if I find out that I am being underpaid for my time, while this may offend my sense of justice, it is good news about future salary relative to my prior forecast, because it means it should be easier than I thought to be paid more, all else equal.
Generally, if I find that my past decisions have all been perfect given the information available at the time, I can't expect to materially improve my future by better decisionmaking, while if I find errors that were avoidable at the time, then if I fix these errors going forward, I should expect an improvement. This is "good news" insofar as it expands the space of likely outcomes in a utility-positive direction, and so should raise the utility of the expected (average) outcome.
comment by Stuart_Armstrong · 2010-04-07T12:42:08.532Z · LW(p) · GW(p)
First of all, a late great filter may be total, while an early great filter (based on the fact of our existence) was not - a reason to prefer the early one.
Secondly, let's look at the problem from our own perspective. If we knew that there was an Omega simulating us, and that our decision would affect when a great filter happens, even in our past, then this argument could work.
But we have no evidence of that! Omega is an addition to the problem that completely changes the situation. If I had the button in front of me that said "early partial great filter/no late great filter", I would press it immediately, because, updating on the fact of my existence at this time, this button now reads: "(irrelevant unchangeable fact about the past)/no late great filter". Up until the moment I have evidence that I live in a universe where Omega is indeed simulating me, I have no reason not to press the button.
↑ comment by RobinZ · 2010-04-07T15:12:16.886Z · LW(p) · GW(p)
Hah - two hours after you, and without reading your comment, I come to the same conclusion by analogy to your own post.* :)
* That's where your ten karma just came from, by the way.
comment by CronoDAS · 2010-04-04T04:46:33.602Z · LW(p) · GW(p)
We can't do anything today about any filters that we've already passed...
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T06:39:49.808Z · LW(p) · GW(p)
I'm not sure which part of my post you're responding to with that comment, but perhaps there is a misunderstanding. The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter. They are not intended to correspond to any decisions that we actually have to make. The decision mentioned in the last paragraph, about how much resources to spend on existential risk reduction, which we do have to make, is not directly related to those two scenarios.
↑ comment by alyssavance · 2010-04-04T06:44:29.376Z · LW(p) · GW(p)
"The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter."
I honestly think this would have been way, way, way clearer if you had dropped the Omega decision theory stuff, and just pointed out that, given great filters of equal probability, choosing an early great filter over a late great filter would entail wiping out the history of humanity in addition to the galactic civilization that we could build, which most of us would definitely see as worse.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T07:14:15.008Z · LW(p) · GW(p)
Point taken, but I forgot to mention that the Omega scenarios are also meant to explain why we might feel that the great filter being late is worse news than the great filter being early: an actual human, faced with the decision in scenario 2, might be tempted to choose the early filter.
I'll try to revise the post to make all this clearer. Thanks.
↑ comment by CronoDAS · 2010-04-04T09:00:13.359Z · LW(p) · GW(p)
But, in universes with early filters, I don't exist. Therefore anything I do to favor late filters over early filters is irrelevant, because I can't affect universes in which I don't exist.
(And by "I", I mean anything that UDT would consider "me".)
comment by Psychohistorian · 2010-04-04T19:50:47.394Z · LW(p) · GW(p)
This seems centered around a false dichotomy. If you have to choose between an early and a late Great Filter, the latter may well be preferable. But that presupposes it must be one or the other. In reality, there may be no Great Filter, or there may be a great filter of such a nature that it only allows linear expansion, or some other option we simply haven't thought of. Or there may be a really late great filter. Your reasoning presumes an early/late dichotomy that is overly simplistic.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-05T00:49:28.020Z · LW(p) · GW(p)
I made that assumption because I was responding to two articles that both made that assumption, and I wanted to concentrate on a part of their reasoning apart from that assumption.
comment by Ivan_Tishchenko · 2010-04-24T20:21:47.817Z · LW(p) · GW(p)
I don't seem to understand the logic here. As I understand the idea of "Late Great Filter is bad news", it is simply about a Bayesian update of probabilities for the hypotheses A = "Humanity will eventually come to Explosion" versus not-A. Say we have original probabilities for this, p = P(A) and q = 1-p. Now suppose we take the Great Filter hypothesis for granted, and we find on Mars remnants of a great civilization, equal to ours or even more advanced. This means that we must update our probabilities of A/not-A so that P(A) decreases.
And I consider this really bad news. Either that, or Great Filter idea has some huuuuge flaw I overlooked.
So, where am I wrong?
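For what it's worth, here is a minimal numerical version of the update this comment describes; the prior and likelihoods are assumptions chosen only to show the direction of the shift, not estimates of anything.

# Bayesian update sketch: A = "humanity eventually comes to Explosion".
p_A = 0.5                 # assumed prior
# Assumed likelihoods of finding remnants of an advanced civilization on Mars:
p_find_given_A = 0.01     # if the filter is mostly behind us, such finds should be rare
p_find_given_not_A = 0.1  # if the filter is late, independent life (and ruins) should be more common

p_find = p_A * p_find_given_A + (1 - p_A) * p_find_given_not_A
p_A_given_find = p_A * p_find_given_A / p_find
print(p_A_given_find)  # ~0.09, down from 0.5

Under these assumptions P(A) drops after the discovery, which is the sense in which the comment calls such a find bad news; the post's argument is about whether that framing is the right one, not about the direction of the update.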
comment by alyssavance · 2010-04-04T05:10:56.985Z · LW(p) · GW(p)
In the scenario where the coin lands tails, nothing interesting happens, and the coin is logically independent of anything else, so let us assume that the coin lands heads. We are assuming (correct me if I'm wrong) that Omega is a perfect predictor. So, in the second scenario, we already know that the person will press the button even before he makes his decision, even before he does it, or else either a). Omega's prediction is wrong (contradiction) or b). the Earth was destroyed a million years ago (contradiction). The fact that we currently exist gives us information about what people will do in the future, because what people will do in the future is perfectly tied to whether we were destroyed in the past through the logical device of Omega. Hence, in the second scenario, there's not even a "choice", because the outcome is known to us ahead of time. The Universe, including "choices", is deterministic, but we do not call something a "choice" when the outcome is known with complete certainty ahead of time.
I think that, because we have no experience with actual psychics, most of us have no idea what it feels like to have something that's normally a "choice" be pre-determined. If an actual, perfect predictor says that you are the Chosen One, it doesn't matter whether you go on a heroic quest, or lay in bed and read magazines all day, because you're going to wind up saving the Universe regardless. If you are a heroic individual, and a perfect predictor says that you will go on a dangerous quest, either a). there must be zero probability of you deciding to stay home and read magazines, which is fantastically unlikely because zero is a small number and dangerous quests are difficult, or b). you staying at home and reading magazines won't affect the outcome. How many heroes whose salvation of the universe was prophesied in advance carried out that logic?
"But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT."
Doesn't this contradict the title of the post? If I understand correctly, you're saying, in agreement with Robin, that we should put work into preventing late filters. But anything that you want to put work into preventing is, ipso facto, bad news.
↑ comment by Wei Dai (Wei_Dai) · 2010-04-04T07:07:34.109Z · LW(p) · GW(p)
The Universe, including "choices", is deterministic, but we do not call something a "choice" when the outcome is known with complete certainty ahead of time.
I'm not sure how this relates to the main points of my post. Did you intend for it to be related (in which case please explain how), or is it more of a tangent?
Doesn't this contradict the title of the post? If I understand correctly, you're saying, in agreement with Robin, that we should put work into preventing late filters. But anything that you want to put work into preventing is, ipso facto, bad news.
What I meant by the title is that the Great Filter being late is not bad news (compared to it being early). Perhaps I should change the title to make that clearer?
↑ comment by alyssavance · 2010-04-04T16:25:12.866Z · LW(p) · GW(p)
"I'm not sure how this relates to the main points of my post. Did you intend for it to be related (in which case please explain how), or is it more of a tangent?"
You said: "It seems to me that this decision problem is structurally no different from the one faced by the future you in the previous thought experiment, and the correct decision is still to choose the late filter (i.e., press the button)."
This isn't a decision problem because the outcome is already known ahead of time (you will press the button).
↑ comment by Vladimir_Nesov · 2010-04-04T08:54:39.330Z · LW(p) · GW(p)
The Universe, including "choices", is deterministic, but we do not call something a "choice" when the outcome is known with complete certainty ahead of time.
Known to whom?
↑ comment by Mass_Driver · 2010-04-04T06:42:44.106Z · LW(p) · GW(p)
Hang on.
Let W = "The Forces of Good will win an epic battle against the Forces of Evil." Let C = "You will be instrumental in winning an epic battle against the Forces of Evil." Let B = "There will be an epic battle between the Forces of Good and the Forces of Evil."
What "You are the Chosen One" usually means in Western fiction is:
B is true, and; W if and only if C.
Thus, if you are definitely the Chosen One, and you stay home and read magazines, and reading magazines doesn't help you win epic battles, and epic battles are relatively evenly-matched, then you should expect to observe (B ^ !C ^ !W), i.e., you lose an epic battle on behalf of the Forces of Good.
Fate can compel you to the arena, but it can't make you win.
↑ comment by Unknowns · 2010-04-04T05:23:39.488Z · LW(p) · GW(p)
If you are a heroic individual and a perfect predictor says that you will go on a dangerous quest, you will go on a dangerous quest even if there is a significant probability that you will not go. After all many things happen that had low probabilities.
↑ comment by alyssavance · 2010-04-04T05:27:55.879Z · LW(p) · GW(p)
Contradiction. If a perfect predictor predicts that you will go on a dangerous quest, then the probability of you not going on a dangerous quest is 0%, which is not "significant".
↑ comment by Unknowns · 2010-04-04T05:30:49.389Z · LW(p) · GW(p)
There may be a significant probability apart from the fact that a perfect predictor predicted it. You might as well say that either you will go or you will not, so the probability is either 100% or 0%.
↑ comment by alyssavance · 2010-04-04T05:33:21.097Z · LW(p) · GW(p)
"There may be a significant probability apart from the fact that a perfect predictor predicted it. "
I do not understand your sentence.
"You might as well say that either you will go or you will not, so the probability is either 100% or 0%."
Exactly. Given omniscience about event X, the probability of event X is always either 100% or 0%. If we got a perfect psychic to predict whether I would win the lottery tomorrow, the probability of me winning the lottery would be either 100% or 0% after the psychic made his prediction.
comment by casebash · 2016-04-16T09:27:57.810Z · LW(p) · GW(p)
I downvoted this because it seems to be missing a very obvious point - that the reason why an early filter would be good is because we've already passed it. If we hadn't passed it, then of course we want the filter as late as possible.
On the other hand, I notice that this post has 15 upvotes. So I am wondering whether I have missed anything - generally posts that are this flawed do not get upvoted this much. I read through the comments and thought about this post a bit more, but I still came to the conclusion that this post is incredibly flawed.
comment by steven0461 · 2010-04-05T16:47:30.296Z · LW(p) · GW(p)
But since copies of us occur more frequently in universes with late filters than in universes with early filters, such a decision (which Robin arrives at via SIA) can be justified on utilitarian grounds under UDT.
But it seems like our copies in early-filter universes can eventually affect a proportionally greater share of the universe's resources.
Also, Robin's example of shelters seems mistaken: if shelters worked, some civilizations would already have tried them and colonized the universe. Whatever we try has to stand a chance of working against some unknown filter that almost nobody escapes. Which suggests the question of why learning that filter reduction doesn't work is a reason to invest more in filter reduction. I'm not sure how to think about this in a way that doesn't double-count things.
Finally, I wish everyone would remember that filters and existential risks are different (though overlapping) things.
comment by Roko · 2010-04-04T09:56:08.306Z · LW(p) · GW(p)
according to this line of thought, we're acting in both kinds of universes: those with early filters, and those with late filters.
Suppose that the reason for a late filter is a (complex) logical fact about the nature of technological progress; that there is some technological accident that it is almost impossible for an intelligent species to avoid, and that the lack of an early filter is a logical fact about the nature of life-formation and evolution. For the purposes of clarity, we might even think of an early filter as impossible and a late filter as certain.
Then, in what sense do we "exist in both kinds of universe"?
comment by Vladimir_Nesov · 2010-04-04T08:52:37.275Z · LW(p) · GW(p)
I agree. Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining this news one way or another. Thus CronoDAS's comment actually exemplifies another reason for the error: if the hypothetical decision is only able to vary the extent of a late great filter, as opposed to shifting the timing of a filter, it's clear that discovering a powerful great filter is "bad news" according to such a metric (because it's powerful, as opposed to because it's late).
↑ comment by Tyrrell_McAllister · 2010-04-04T14:36:04.216Z · LW(p) · GW(p)
Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining this news one way or another.
I don't think that that's the concept of "bad news" that Hanson and Bostrom are using. If you have background knowledge X, then a piece of information N is "bad news" if your expected utility conditioned on N & X is less than your expected utility conditioned on X alone.
Let our background knowledge X include the fact that we have secured all the utility that we received up till now. Suppose also that, when we condition only on X, the Great Filter is significantly less than certain to be in our future. Let N be the news that a Great Filter lies ahead of us. If we were to learn N, then, as Wei Dai pointed out, we would be obliged to devote more resources to mitigating the Great Filter. Therefore, our expected utility over our entire history would be less than it is when we condition only on X. That is why N is bad news.
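In symbols, the definition in the first paragraph above is just (restated, nothing added): given background knowledge X, news N is bad news iff

E[U | N ∧ X] < E[U | X].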
comment by RobinZ · 2010-04-07T14:42:53.830Z · LW(p) · GW(p)
I think this is a misleading problem, like Omega's subcontracting problem. Our actions now do not affect early filters, even acausally, so we cannot force the filter to be late by being suicidal and creating a new filter now.
comment by Christian_Szegedy · 2010-04-06T23:40:40.505Z · LW(p) · GW(p)
Let us construct an analogous situation: let D be a disease that people contract with 99% probability, and from which they die at the latest when they are n years old (let us say n = 20).
Assume that you are 25 years old and there exists no diagnosis for the disease, but some scientist discovers that people can die of the disease even up to age 30. I don't know about you, but in your place I'd call that bad news for you personally, since you will have to live in fear for five additional years.
On the other hand, it is good news in an abstract sense, since it means 99% of people have a good chance of living ten years longer.
Now the prize question: if you are 25 years old and don't know whether you have the disease (p = 99%) or not (p = 1%), but you are presented with the option of being in one of the following two alternative universes: in one of them, people with the disease can live only until they are 20; in the other, they die for sure before they turn 30. Which one would you choose?
I must admit, it would not be an easy choice for me... :)
comment by CannibalSmith · 2010-04-04T20:43:11.700Z · LW(p) · GW(p)
A tangent: if we found extinct life on Mars, it would provide precious extra motivation to go there, which is a good thing.
comment by A1987dM (army1987) · 2011-12-22T15:12:39.166Z · LW(p) · GW(p)
Scenario 1 and Scenario 2 are not isomorphic: the former is Newcomb-like and the latter is Solomon-like (see Eliezer's paper on TDT for the difference), i.e. in the former you can pre-commit to choose the late filter four years from now if you survive, whereas in the latter there's no such possibility. I'm still trying to work out what the implications of this are, though...
comment by timtyler · 2010-04-04T08:30:25.927Z · LW(p) · GW(p)
Re: "believing that the great filter is more likely to lie in the future means you have to anticipate a higher probability of experiencing doom."
Maybe - if you also believe the great filter is likely to result in THE END OF THE WORLD.
If it is merely a roadblock - similar to the many roadblocks we have seen so far - DOOM doesn't necessarily follow - at least not for a loooong time.