Don’t Fear The Filter
by Scott Alexander (Yvain) · 2014-05-29
The Great Filter, remember, is the horror-genre-adaptation of Fermi’s Paradox. All of our calculations say that, in the infinite vastness of time and space, intelligent aliens should be very common. But we don’t see any of them. We haven’t seen their colossal astro-engineering projects in the night sky. We haven’t heard their messages through SETI. And most important, we haven’t been visited or colonized by them.
This is very strange. Consider that if humankind makes it another thousand years, we’ll probably have started to colonize other star systems. Those star systems will colonize other star systems and so on until we start expanding at nearly the speed of light, colonizing literally everything in sight. After a hundred thousand years or so we’ll have settled a big chunk of the galaxy, assuming we haven’t killed ourselves first or encountered someone else already living there.
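The timescale here is easy to sanity-check with round numbers: the Milky Way is roughly 100,000 light-years across, so even a modest average expansion speed crosses it in a few hundred thousand years, a blink next to a billion-year head start. A back-of-envelope sketch (the diameter is a round approximation and the wave speed is an assumption, not a measurement):

```python
# Back-of-envelope timing for a galactic colonization wave.
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way in light-years
WAVE_SPEED_FRACTION_C = 0.5    # assumed average expansion speed: half of c

# Time for a wave starting at one edge to sweep the whole disk:
crossing_time_years = GALAXY_DIAMETER_LY / WAVE_SPEED_FRACTION_C
print(crossing_time_years)  # 200000.0 -- a few hundred thousand years
```

Even cutting the assumed speed by a factor of ten only pushes the answer into the low millions of years, still tiny compared to the age of the galaxy.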
But there should be alien civilizations that are a billion years old. Anything that could conceivably be colonized, they should have gotten to back when trilobites still seemed like superadvanced mutants. But here we are, a perfectly nice solar system, plenty of every resource you could desire, and they’ve never visited. Why not?
Well, the Great Filter. No one knows specifically what the Great Filter is, but generally it’s “that thing that blocks planets from growing spacefaring civilizations”. The planet goes some of the way towards a spacefaring civilization, and then stops. The most important thing to remember about the Great Filter is that it is very good at what it does. If even one planet in a billion light-year radius had passed through the Great Filter, we would expect to see its inhabitants everywhere. Since we don’t, we know that whatever it is, it’s very thorough.
Various candidates have been proposed, including “it’s really hard for life to come into existence”, “it’s really hard for complex cells to form”, “it’s really hard for animals to evolve intelligence”, and “actually space is full of aliens but they are hiding their existence from us for some reason”.
The articles I linked at the top, especially the first, will go through most of the possibilities. This essay isn’t about proposing new ones. It’s about saying why the old ones won’t work.
The Great Filter is not garden-variety x-risk. A lot of people have seized upon the Great Filter to say that we’re going to destroy ourselves through global warming or nuclear war or destroying the rainforests. This seems wrong to me. Even if human civilization does destroy itself due to global warming – which is a lot further than even very pessimistic environmentalists expect the problem to go – it seems clear we had a chance not to do that. A few politicians voting the other way, and we could have passed the Kyoto Protocol. A lot of politicians voting the other way, and we could have come up with a really stable and long-lasting plan to put it off indefinitely. If the gas-powered car had never won out over electric vehicles back in the early 20th century, or nuclear-phobia hadn’t sunk the plan to move away from polluting coal plants, then the problem might never have come up, or at least been much less severe. And we’re pretty close to being able to colonize Mars right now; if our solar system had a slightly bigger, slightly closer version of Mars, then we could restart human civilization anew there once we destroyed the Earth and maybe go a little easy on the carbon dioxide the next time around.
In other words, there’s no way global warming kills 999,999,999 in every billion civilizations. Maybe it kills 100,000,000. Maybe it kills 900,000,000. But occasionally one manages to make it to space before frying their home planet. That means it can’t be the Great Filter, or else we would have run into the aliens who passed their Kyoto Protocols.
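The arithmetic behind this argument is worth making explicit: even a very lethal hazard leaves many survivors out of a galaxy’s worth of attempts, unless its kill rate is implausibly close to perfect. A sketch with invented round numbers (the one-billion attempt count is an illustrative assumption, not an estimate):

```python
# Filter-strength arithmetic with invented numbers: a hazard that kills
# "only" 90% of civilizations is nowhere near strong enough to be the Filter.
candidate_civilizations = 1_000_000_000  # assumed number of attempts
survival_rate = 0.1                      # assumed: hazard kills 90%

survivors = candidate_civilizations * survival_rate
print(survivors)  # 100000000.0 -- far more than the zero we observe

# To be the Great Filter, a hazard would need fewer than one
# survivor per billion attempts:
max_survival_rate = 1 / candidate_civilizations
print(max_survival_rate)  # 1e-09
```

The point is not the specific numbers but the gap between them: ordinary catastrophes have kill rates like 0.9, while the Filter needs something like 0.999999999.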
And the same is true of nuclear war or destroying the rainforests.
Unfortunately, almost all the popular articles about the Great Filter miss this point and make their lead-in “DOES THIS SCIENTIFIC PHENOMENON PROVE HUMANITY IS DOOMED?” No. No it doesn’t.
The Great Filter is not Unfriendly AI. Unlike global warming, it may be that we never really had a chance against Unfriendly AI. Even if we do everything right and give MIRI more money than they could ever want and get all of our smartest geniuses working on the problem, maybe the mathematical problems involved are insurmountable. Maybe the most pessimistic of MIRI’s models is true, and AIs are very easy to accidentally bootstrap to unstoppable superintelligence and near-impossible to give a stable value system that makes them compatible with human life. So unlike global warming and nuclear war, this theory meshes well with the low probability of filter escape.
But as this article points out, Unfriendly AI would if anything be even more visible than normal aliens. The best-studied class of Unfriendly AIs are the ones whimsically called “paperclip maximizers” which try to convert the entire universe to a certain state (in the example, paperclips). These would be easily detectable as a sphere of optimized territory expanding at some appreciable fraction of the speed of light. Given that Hubble hasn’t spotted a Paperclip Nebula (or been consumed by one) it looks like no one has created any of this sort of AI either. And while other Unfriendly AIs might be less aggressive than this, it’s hard to imagine an Unfriendly AI that destroys its parent civilization, then sits very quietly doing nothing. It’s even harder to imagine that 999,999,999 out of a billion Unfriendly AIs end up this way.
The Great Filter is not transcendence. Lots of people more enthusiastically propose that the problem isn’t alien species killing themselves, it’s alien species transcending this mortal plane. Once they become sufficiently advanced, they stop being interested in expansion for expansion’s sake. Some of them hang out on their home planet, peacefully cultivating their alien gardens. Others upload themselves to computronium internets, living in virtual reality. Still others become beings of pure energy, doing whatever it is that beings of pure energy do. In any case, they don’t conquer the galaxy or build obvious visible structures.
Which is all well and good, except what about the Amish aliens? What about the ones who have weird religions telling them that it’s not right to upload their bodies, they have to live in the real world? What about the ones who have crusader religions telling them they have to conquer the galaxy to convert everyone else to their superior way of life? I’m not saying this has to be common. And I know there’s this argument that advanced species would be beyond this kind of thing. But man, it only takes one. I can’t believe that not even one in a billion alien civilizations would have some instinctual preference for galactic conquest for galactic conquest’s own sake. I mean, even if most humans upload themselves, there will be a couple who don’t and who want to go exploring. You’re trying to tell me this model applies to 999,999,999 out of one billion civilizations, and then the very first civilization we test it on, it fails?
The Great Filter is not alien exterminators. It sort of makes sense, from a human point of view. Maybe the first alien species to attain superintelligence was jealous, or just plain jerks, and decided to kill other species before they got the chance to catch up. Knowledgeable people such as Carl Sagan and Stephen Hawking have condemned our reverse-SETI practice of sending messages into space to see who’s out there, because everyone out there may be terrible. On this view, the dominant alien civilization is the Great Filter, killing off everyone else while not leaving a visible footprint themselves.
Although I get the precautionary principle, Sagan et al.’s warnings against sending messages seem kind of silly to me. This isn’t a failure to recognize how strong the Great Filter has to be; it’s a failure to recognize how powerful a civilization that gets through it can become.
It doesn’t matter one way or the other if we broadcast we’re here. If there are alien superintelligences out there, they know. “Oh, my billion-year-old universe-spanning superintelligence wants to destroy fledgling civilizations, but we just can’t find them! If only they would send very powerful radio broadcasts into space so we could figure out where they are!” No. Just no. If there are alien superintelligences out there, they tagged Earth as potential troublemakers sometime in the Cambrian Era and have been watching us very closely ever since. They know what you had for breakfast this morning and they know what Jesus had for breakfast the morning of the Crucifixion. People worried about accidentally “revealing themselves” to an intergalactic supercivilization are like Sentinel Islanders reluctant to send a message in a bottle lest modern civilization discover their existence – unaware that modern civilization has spy satellites orbiting the planet that can pick out whether or not they shaved that morning.
What about alien exterminators who are okay with weak civilizations, but kill them when they show the first sign of becoming a threat (like inventing fusion power or leaving their home solar system)? Again, you are underestimating billion-year-old universe-spanning superintelligences. Don’t flatter yourself here. You cannot threaten them.
What about alien exterminators who are okay with weak civilizations, but destroy strong civilizations not because they feel threatened, but just for aesthetic reasons? I can’t be certain that’s false, but it seems to me that if they have let us continue existing this long, even though we are made of matter that can be used for something else, that has to be a conscious decision made out of something like morality. And because they’re omnipotent, they have the ability to satisfy all of their (not logically contradictory) goals at once without worrying about tradeoffs. That makes me think that whatever moral impulse has driven them to allow us to survive will probably continue to allow us to survive even if we start annoying them for some reason. When you’re omnipotent, the option of stopping the annoyance without harming anyone is just as easy as stopping the annoyance by making everyone involved suddenly vanish.
Three of these four options – x-risk, Unfriendly AI, and alien exterminators – are very very bad for humanity. I think worry about this badness has been a lot of what’s driven interest in the Great Filter. I also think these are some of the least likely possible explanations, which means we should be less afraid of the Great Filter than is generally believed.