The Great Filter is early, or AI is hard
post by Stuart_Armstrong · 2014-08-29T16:17:56.740Z · LW · GW · Legacy · 81 comments
Attempt at the briefest content-full Less Wrong post:
Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.
81 comments
Comments sorted by top scores.
comment by owencb · 2014-08-29T16:39:40.186Z · LW(p) · GW(p)
There's a valid argument here, but the conclusion sounds too strong. I think the level of proof which is required for "easily colonise the universe" in this context is much higher than in the context of your other post (which is about best guess scenarios), because if there is a Great Filter then something surprising happens somewhere. So we should consider whether even quite unlikely-sounding events like "we've misunderstood astrophysics" might be possible.
comment by kilobug · 2014-08-30T07:13:25.845Z · LW(p) · GW(p)
I'm still highly skeptical of the existence of the "Great Filter". It's one possible explanation to the "why don't we see any hint of existence of someone else" but not the only one.
The most likely explanation to me is that intelligent life is just so damn rare. Life is probably frequent enough - we know there are a lot of exoplanets, many have the conditions for life, and life seems relatively simple. But intelligent life? It seems to me it required a great deal of luck to exist on Earth, and it seems somewhat likely that it's rare enough that we are alone not only in the galaxy, but in a large sphere around us. The universe is so vast that there probably is intelligent life elsewhere, but if we accept that AI can colonize at 10% of c, and the closest civilization is 100 million light years away and has existed for only 1 billion years, it hasn't reached us yet.
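A back-of-envelope check of that scenario (a minimal sketch; the 10% of c, the 1-billion-year age, and the 100-million-light-year distance are the commenter's illustrative assumptions, not data):

```python
# With these round numbers the claim is borderline: 0.1 * 1e9 = 100 million
# light years of reach, exactly the assumed distance. Any slightly younger
# or more distant civilization would not have reached us yet.

expansion_speed_c = 0.1      # fraction of lightspeed (commenter's assumption)
expansion_age_yr = 1e9       # years the civilization has been expanding (assumption)
distance_ly = 100e6          # distance to the nearest civilization (assumption)

reach_ly = expansion_speed_c * expansion_age_yr
print(f"Maximum reach: {reach_ly:.3g} ly; distance: {distance_ly:.3g} ly")
print(f"Shortfall: {distance_ly - reach_ly:.3g} ly")
```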
This whole "we compute how likely intelligent life is using numbers coming from nowhere, we don't detect any intelligence, so we conclude there is a Great Filter" seems very fishy reasoning to me. Not detecting any intelligence should make us, first of all, revise down the probability of the hypothesis "intelligence life is frequent enough", before making us create new "epicycles" by postulating a Great Filter.
A few elements making it unlikely for intelligent life to exist frequently, and that's just a few:
life, especially technological civilization, requires lots of heavy elements, which didn't exist too early in the universe, meaning only stars of about the same generation as the Sun have a chance to host it;
it took nearly 5 billion years after the planet formed for intelligence to evolve on Earth, out of the roughly 6 billion it has before the Sun becomes too hot and vaporizes its water;
the dinosaur phase shows that it was easy for evolution to reach a local minimum that didn't include intelligence, and it took a great deal of luck to have a cataclysm powerful enough to throw it out of that local minimum without doing too much damage and killing all complex life;
the Sun is lucky to be in a mostly isolated region, where few nearby supernovae blast life on Earth; I don't think intelligent life could develop on a star too close to the galactic center - a single nova too close to it, and all complex life on Earth would be wiped out;
the Moon, which is unusual, seems to have played a major role in allowing intelligent life to appear, from stabilizing Earth's axial tilt (and therefore climate) to easing the transition from sea to land through tides.
↑ comment by Stuart_Armstrong · 2014-08-30T08:28:36.589Z · LW(p) · GW(p)
intelligent life is just so damn rare.
That's an early filter.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2014-08-30T14:34:27.312Z · LW(p) · GW(p)
life, especially technological civilization, requires lots of heavy elements, which didn't exist too early in the universe, meaning only stars about the same generation as the Sun have chance to have it
Going off of this, what if life is somewhat common, but we're just among the first life in the universe? That doesn't seem like an "early filter", so even if this possibility is really unlikely, it would still break your dichotomy.
Replies from: bogdanb↑ comment by bogdanb · 2014-08-31T07:38:23.729Z · LW(p) · GW(p)
The problem with that is that life on Earth appeared about 4 billion years ago, while the Milky Way is more than 13 billion years old. If life were somewhat common, we wouldn’t expect to be the first, because there was time for it to evolve several times in succession, and it had lots of solar systems where it could have done it.
A possible answer could be that there was a very strong early filter during the first part of the Milky Way’s existence, and that filter lessened in intensity in the last few billion years.
The only examples I can think of are elemental abundance (perhaps in a young galaxy there are far fewer systems with diverse enough chemical compositions) and supernova frequency (perhaps a young galaxy is sterilized by frequent and large supernovae much more often than an older one). But AFAIK both of those variations can be calculated well enough for a Fermi estimate from what we know, so I'd expect someone who knows the subject much better than I do to have made that point already if they were plausible answers.
Replies from: TylerJay↑ comment by TylerJay · 2014-09-01T18:26:07.024Z · LW(p) · GW(p)
Even within the Milky Way, most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 billion years older than the Earth. If the "heavy bombardment" period at the beginning of a rocky planet's life is approximately the same length for all rocky planets, which is likely, then each of those 11 billion potentially habitable planets still had 1.8 billion years during which life could have formed. On Earth, life originated almost immediately after the bombardment ended and the Earth was allowed to cool. Even if the probability of each planet developing life in a period of 1 billion years is mind-bogglingly low, we should still expect to see life forming on some of them given 20 billion billion planet-years.
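For concreteness, the arithmetic behind that "20 billion billion planet-years" figure (a rough sketch; the planet count and head start are the comment's numbers, and the per-planet abiogenesis probability is purely illustrative):

```python
habitable_planets = 11e9     # "11 billion potentially habitable planets" (from the comment)
head_start_gyr = 1.8         # average age advantage over Earth, in Gyr (from the comment)

planet_gyr = habitable_planets * head_start_gyr       # planet-gigayears of extra time
planet_years = planet_gyr * 1e9
print(f"{planet_years:.2g} planet-years")             # ~2e19, i.e. ~20 billion billion

# Even a very small abiogenesis chance per planet per Gyr yields many
# expected origin events over that much planet-time:
p_per_planet_per_gyr = 1e-9                           # illustrative assumption
print(f"Expected origin events: {planet_gyr * p_per_planet_per_gyr:.1f}")
# ~20 expected events; only below ~5e-11 per planet per Gyr would we expect none.
```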
Replies from: bogdanb, None↑ comment by [deleted] · 2014-09-08T03:25:23.110Z · LW(p) · GW(p)
most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 Billion years older than the Earth.
That would push many of them over into Venus mode, seeing as all stars slowly increase in brightness as they age, and Earth will fall over into positive greenhouse feedback mode within 2 gigayears (possibly within 500 megayears).
However, seeing as star brightness increases with roughly the 3.5th power of mass, and lifetime therefore decreases with roughly the 2.5th power of mass, stars not much smaller than the Sun can be pretty 'sunlike' while brightening much more slowly and having much longer stable regimes. This is where it gets confusing: are we an outlier in having such a large star (larger than 90% of stars, in fact), or do these longer-lived smaller stars have something about them that makes it less likely that observers will find themselves there?
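A minimal sketch of that scaling (using the approximate relations L ∝ M^3.5 and lifetime ∝ M^-2.5 quoted above; constants and validity limits are ignored):

```python
# Luminosity and main-sequence lifetime relative to the Sun, under the
# common approximations L ~ M**3.5 and t ~ M / L ~ M**-2.5.

def relative_luminosity(mass_solar: float) -> float:
    return mass_solar ** 3.5

def relative_lifetime(mass_solar: float) -> float:
    return mass_solar ** -2.5

for m in (1.0, 0.9, 0.8, 0.7):
    print(f"M = {m:.1f} M_sun: L = {relative_luminosity(m):.2f} L_sun, "
          f"lifetime = {relative_lifetime(m):.2f} x solar")
# A 0.8 M_sun star is only ~half as bright but lives ~1.8x as long:
# the "slightly smaller but still sunlike" regime described above.
```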
comment by [deleted] · 2014-08-29T22:18:33.591Z · LW(p) · GW(p)
This degree of insight density is why I love LW.
Someone who is just scanning your headline might get the wrong idea, though: It initially read (to me) as two alternate possible titles, implying that the filter is early and AI is hard and these two facts have a common explanation (when the actual content seems to be "at least one of these is true, because otherwise the universe doesn't make sense").
Replies from: satt
comment by [deleted] · 2014-08-29T20:04:22.155Z · LW(p) · GW(p)
Once AI is developed, it could "easily" colonise the universe.
I dispute this assumption. I think it is vanishingly unlikely for anything self-replicating (biological, technological, or otherwise) to survive trips from one island-of-clement-conditions (~ 'star system') to another.
Replies from: owencb, Stuart_Armstrong, James_Miller, Gunnar_Zarncke↑ comment by owencb · 2014-08-30T11:45:11.619Z · LW(p) · GW(p)
Nick Beckstead wrote up an investigation into this question, with the conclusion that current consensus points to it being possible.
↑ comment by Stuart_Armstrong · 2014-08-29T20:57:17.431Z · LW(p) · GW(p)
http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ : six hours of the sun's energy for every galaxy we could ever reach, at a redundancy of 40. Given a million years, we can blast out at least a million probes per star. Some will get through.
Replies from: None↑ comment by [deleted] · 2014-08-29T21:16:01.910Z · LW(p) · GW(p)
Six hours of the sun's energy is about 15 billion years' worth of current human energy use (or only a few trillion years' worth at early-first-millennium levels - it really was not exponential until the 19th/20th century, and these days it's more linear). The only way you get energy levels that high is with truly enormous stellar-scale engineering projects like Dyson clouds, which we see no evidence of when we look out into the universe in infrared - those are something we would actually be able to see. Again, if things of that sheer scale are something that intelligent systems don't get around to doing for one reason or another, then this sort of project would never happen.
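A rough order-of-magnitude check of the "six hours of the sun's energy ≈ 15 billion years of current human energy use" comparison (the solar luminosity and the ~18 TW world-consumption figure are standard round numbers assumed here, not taken from the thread):

```python
SOLAR_LUMINOSITY_W = 3.8e26      # watts
WORLD_POWER_W = 1.8e13           # ~18 TW of primary energy use (assumed round figure)
SECONDS_PER_HOUR = 3600
SECONDS_PER_YEAR = 3.156e7

six_hours_J = SOLAR_LUMINOSITY_W * 6 * SECONDS_PER_HOUR
human_year_J = WORLD_POWER_W * SECONDS_PER_YEAR

print(f"Six hours of solar output: {six_hours_J:.2g} J")
print(f"Equivalent years of current human use: {six_hours_J / human_year_J:.2g}")
# -> roughly 1.4e10 years, i.e. on the order of 15 billion years
```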
Additionally, the papers referenced there propose 'seed' probes massing grams, sent to other GALAXIES, with black-box arbitrary control over matter and the capacity to last megayears in the awful environment of space. Pardon me if I don't take that possibility very seriously, and adjust the energy figures up accordingly.
↑ comment by James_Miller · 2014-08-29T20:13:31.407Z · LW(p) · GW(p)
Really, you think that if our civilization survives another million years we won't be able to do this? At the very least we could freeze human embryos, create robots that turn the embryos into babies and then raise them, put them all on a slow starship, and send the ship to an Earth-like planet.
Replies from: None↑ comment by [deleted] · 2014-08-29T20:30:22.019Z · LW(p) · GW(p)
I think it's quite unlikely, yes.
It seems like a natural class of explanations for the Fermi paradox, one which I am always surprised more people don't come up with. Most people pile into 'intelligent systems almost never appear' or 'intelligent systems have extremely short lifespans'. Why not 'intelligent systems find it vanishingly difficult to spread beyond small islands'? It seems more reasonable to me than either of the previous two, as it is something that we haven't seen intelligent systems do yet (we are an example of one both arising and sticking around for a long time).
If I must point out more justification than that, I would immediately go with:
1 - All but one of our ships BUILT for space travel that have gone on to escape velocity have failed after a few decades and less than 100 AUs. Space is a hard place to survive in.
2 - All self-replicating systems on earth live in a veritable bath of materials and energy they can draw on; a long-haul space ship has to either use literally astronomical energy at the source and destination to change velocity, or 'live' off only catabolizing itself in an incredibly hostile environment for millennia at least while containing everything it needs to set up self-replication in a completely alien environment.
Edit: a friend of mine has brought my attention to this paper:
http://www.geoffreylandis.com/percolation.htp
It proposes a percolation model of interstellar travel in which there is a maximum possible colonization distance and a probability that any successful colony itself spawns colonizers. It avoids all three above-posited explanations for the Fermi paradox and instead proposes a model of expansion that does not lead to exponential consumption of everything.
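For illustration, a minimal 2D toy version of that percolation idea (the grid, the nearest-neighbour rule, and the probabilities are simplifications for this sketch, not the model in the linked paper):

```python
import random
from collections import deque

def colonization_extent(p_colonizer: float, grid_size: int = 60, seed: int = 0) -> int:
    """Toy analogue of a percolation colonization model: each settled site
    becomes an actively colonizing civilization with probability p_colonizer,
    and active sites settle their four nearest neighbours. Returns how many
    sites end up settled before expansion stops (or the grid fills)."""
    random.seed(seed)
    start = (grid_size // 2, grid_size // 2)
    settled = {start}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if random.random() > p_colonizer:
            continue                      # this colony never sends out ships
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < grid_size and 0 <= ny < grid_size and (nx, ny) not in settled:
                settled.add((nx, ny))
                frontier.append((nx, ny))
    return len(settled)

for p in (0.3, 0.5, 0.6, 0.7, 0.9):
    print(f"p = {p:.1f}: {colonization_extent(p)} sites settled")
```

Below the 2D site-percolation threshold (roughly 0.59 on this lattice) expansion usually stalls after a handful of systems; above it the grid tends to fill in while leaving uncolonized voids - qualitatively the behaviour the paper uses to reconcile expansion with an empty-looking sky.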
Replies from: Kyre, owencb, James_Miller↑ comment by Kyre · 2014-09-01T05:17:14.806Z · LW(p) · GW(p)
1 - All but one of our ships BUILT for space travel that have gone on to escape velocity have failed after a few decades and less than 100 AUs. Space is a hard place to survive in.
Voyagers 1 and 2 were launched in 1977, are currently 128 and 105 AU from the Sun, and are both still communicating. They were designed to reach Jupiter and Saturn - Voyager 2 had mission extensions to Uranus and Neptune (interestingly, it was completely reprogrammed after the Saturn encounter, and now makes use of communication codes that hadn't been invented when it was launched).
Pioneers 10 and 11 were launched in 1972 and 73 and remained in contact until 2003 and 1995 respectively, with their failure being due to insufficient power for communication coming from their radioisotope power sources. Pioneer 10 stayed in communication to 80 AU.
New Horizons was launched in 2006 and is still going (encounter with Pluto next year). So, 3 out of 5 probes designed to explore the outer solar system are still going, 2 with 1970s technology.
Replies from: None↑ comment by [deleted] · 2014-09-01T07:09:39.091Z · LW(p) · GW(p)
The voyagers are 128 and 104 AUs out upon me looking them up - looks like I missed Voyager 2 hitting the 100 AU mark about a year and a half ago.
I still get what you are saying. But I'm not convinced that all that much has been done in the realm of spacecraft reliability recently, aside from avoiding moving parts and adding lots of redundancy; these probes have major issues quite frequently. Additionally, all outer-solar-system probes are essentially rapidly catabolizing the plutonium pellets they bring along for the ride, with effective lifetimes measured in decades before they are unable to power themselves and their instruments degrade from lack of active heating and the other management that keeps them functional.
↑ comment by owencb · 2014-09-01T15:08:53.491Z · LW(p) · GW(p)
Thanks for the link to the paper with the percolation model. I think it's interesting, but the assumption of independent probabilities at each stage seems relatively implausible. You just need one civilization to hit upon a goal-preserving method of colonisation and it seems the probability should stick high.
↑ comment by James_Miller · 2014-08-29T20:34:27.715Z · LW(p) · GW(p)
OK, but even if you are right we know it's possible to send radio transmissions to other star systems. Why haven't we detected any alien TV shows?
Replies from: None↑ comment by [deleted] · 2014-08-29T20:48:32.356Z · LW(p) · GW(p)
Because to creatures such as us that have only been looking for a hundred years with limited equipment, a relatively 'full' galaxy would look no different from an empty one.
Consider the possibility that there are about 10,000 intelligent systems in our galaxy that can use radio-type effects (a number I think is likely a wild over-estimate, given the BS figures I occasionally half-jokingly calculate from what I know of the evolutionary history of life on Earth, cosmology, and astronomy - but it's just an example). That puts each one, on average, in an otherwise 'empty' cube 900 light years on a side that contains millions of stars. EDIT: if you up it to a million intelligent systems, the cube only goes down to about 200 light years wide, with just under half a million stars; I chose 10,000 because then the cube is about the thickness of the galaxy's disc and the calculation was easy.
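A quick check of that geometry (the disc radius and thickness are round Milky Way figures assumed here; the civilization counts are the commenter's hypotheticals):

```python
import math

GALAXY_RADIUS_LY = 50_000        # assumed round figure for the stellar disc
DISC_THICKNESS_LY = 1_000        # assumed round figure

def cube_edge_ly(n_civilizations: int) -> float:
    """Edge of the cube of galactic-disc volume per civilization."""
    disc_volume = math.pi * GALAXY_RADIUS_LY ** 2 * DISC_THICKNESS_LY
    return (disc_volume / n_civilizations) ** (1 / 3)

for n in (10_000, 1_000_000):
    print(f"{n:>9,} civilizations -> cube edge ~{cube_edge_ly(n):,.0f} ly")
# -> roughly 920 ly and 200 ly respectively, matching the comment's figures
```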
We would be unable to detect Earth's own omnidirectional radio leaks from less than a light year away, according to figures I have seen, and since omnidirectional signal strength decreases with the square of distance, even to be seen 10 light years away you would need hundreds of times as much power. Seeing as omnidirectional radio has decreased with technological sophistication here, I find that doubtful. We probably would just never see omnidirectional signals, and furthermore, given the falloff with the square of distance, I doubt anybody would see ours either.
That leaves you looking for directional transmissions. Those might not even be radio, since optical transmissions can be sent directionally and are rather better for very long distances due to diffraction issues, but I will ignore that possibility. That means you need a directional transmitter and, over the distances we are talking about, a directional receiver. Such a directional signal could just randomly be pointed towards us along the line of sight to a spacecraft or along a horizon, or it could be specifically sent out to other star systems. What are the odds that two points that do not know of each other's existence (given that human impacts on the atmosphere of Earth are only two to three centuries old, and radio even younger) happen to both have the transmitter point in the proper direction and the receiver look in the proper direction at the exact right moment?
In short, the only things we have really excluded reliably so far are truly huge engineering projects like Dyson clouds (which you would see in the infrared), ridiculously powerful omnidirectional signals of a sort I find unlikely in our immediate neighborhood (tens of light years), or something that for some reason decides to spend a lot of time and effort pinging millions of nearby stars every few years or less for quite a large fraction of its history. We've sent out what, a dozen or two directional beams to other stars over half a century?
Replies from: chaosmage↑ comment by chaosmage · 2014-09-01T13:43:32.185Z · LW(p) · GW(p)
You're forgetting self-replicating colony ships.
Factories can already self-replicate given human help. They'll probably be able to do it on their own inside the next two decades. After that, we're looking at self-replicating swarms of drones that tend to become smaller and smaller, and eventually they'll fit on a spaceship and spread across the galaxy like a fungus, eating planets to make more drones.
That doesn't strictly require AGI, but AGI would have no discernible reason not to do that, and this has evidently not happened because we're here on this uneaten planet.
↑ comment by Gunnar_Zarncke · 2014-08-30T00:52:24.081Z · LW(p) · GW(p)
Once AI is developed, it could "easily" colonise the universe.
I also see this claimed often, but my best guess is that this might well be the hard part. Getting into space is already hard. Fusion could be technologically impossible (or not energy-positive).
Replies from: calef, TheMajor, Stuart_Armstrong↑ comment by calef · 2014-08-30T08:38:41.838Z · LW(p) · GW(p)
Fusion is technologically possible (c.f., the sun). It just might not be technologically easy.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-08-30T09:37:44.100Z · LW(p) · GW(p)
The sun is not technology (= tools, machinery, modifications, arrangements, and procedures).
↑ comment by TheMajor · 2014-08-30T09:09:26.483Z · LW(p) · GW(p)
It seems like there is steady progress at the fusion frontiers
Replies from: satt↑ comment by satt · 2014-08-30T17:16:49.996Z · LW(p) · GW(p)
Though in the case of ITER the "steady progress" is finishing pouring concrete for the foundations, not tweaking tokamak parameters for higher gain!
↑ comment by Stuart_Armstrong · 2014-08-30T08:26:15.732Z · LW(p) · GW(p)
Fission is sufficient.
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-08-30T09:36:29.317Z · LW(p) · GW(p)
Is this an opinion or a factual statement? If the latter, I'd like to see some refs.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-08-30T20:00:48.948Z · LW(p) · GW(p)
http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
Replies from: Gunnar_Zarncke↑ comment by Gunnar_Zarncke · 2014-08-30T21:59:20.218Z · LW(p) · GW(p)
Thank you. An interesting read. I found your treatment very thorough given its premises and approach. Sadly, we disagree at a point which you seem to take as given without further treatment, but which I question:
the ability and energy to set up infrastructure to exploit interplanetary resources with sufficient net energy gain to mine Mercury at scale (much less build a Dyson sphere).
The problem here is that I do not have references to actually back my opinion on this, and I haven't yet had enough time to build my complexity-theoretic and thermodynamic arguments into a sufficiently presentable form.
http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-09-01T18:00:43.290Z · LW(p) · GW(p)
energy to set-up infrastructure to exploit interplanetary resources with sufficient net energy gain to sufficiently mine mercury
We already have solar panel setups with roughly the required energy efficiency.
comment by Toggle · 2014-08-30T00:01:59.400Z · LW(p) · GW(p)
Or, the simulation is running other star systems at a lower fidelity. Or, Cosmic Spirits from Beyond Time imposed life on our planet via external means, and abiogenesis is actually impossible. The application of sufficiently advanced intelligence may be indistinguishable from reality.
comment by James_Miller · 2014-08-29T17:07:15.152Z · LW(p) · GW(p)
It's also possible that AI used to be hard but no longer is, because something in the universe recently changed. Although this seems extremely unlikely, the Fermi paradox implies that something very unlikely is indeed occurring.
Replies from: army1987, V_V↑ comment by A1987dM (army1987) · 2014-08-30T15:38:56.169Z · LW(p) · GW(p)
Not that unlikely, depending on what you mean by “recently”; for example, earlier stars had lower metallicity and hence were less likely to have rocky planets.
↑ comment by V_V · 2014-08-29T19:57:37.542Z · LW(p) · GW(p)
The Fermi paradox implies that something very unlikely is indeed occurring.
Or space colonization is just hard.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-08-29T20:55:20.325Z · LW(p) · GW(p)
The evidence seems to be that it's "easy" (see http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ ), at least over the thousand-million year range.
comment by Luke_A_Somers · 2014-08-29T17:04:51.460Z · LW(p) · GW(p)
Good job condensing the argument down.
comment by Lalartu · 2014-09-02T15:34:55.100Z · LW(p) · GW(p)
You are missing at least two options.
First, our knowledge of physics is far from complete, and there may be reasons that make interstellar colonization simply impossible.
Second, consider this: our building technology is vastly better than it was a few thousand years ago, and our economic capabilities are much greater. Yet no one among the last century's rulers was buried in a tomb comparable to the Egyptian pyramids. The common reply is that it takes only one expansionist civilization to take over the universe. But the number of civilizations is finite, and colonization could be so unattractive that the number of expansionists is zero.
comment by ColbyDavis · 2014-08-30T16:02:50.202Z · LW(p) · GW(p)
Has anybody suggested that the great filter may be that AIs are negative utilitarians that destroy life on their planet? My prior on this is not very high but it's a neat solution to the puzzle.
Replies from: MugaSofer, Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-09-01T18:01:41.590Z · LW(p) · GW(p)
And then it goes on to destroy all life in the universe...
comment by John_Maxwell (John_Maxwell_IV) · 2016-12-22T11:15:28.998Z · LW(p) · GW(p)
So the Great Filter must predate us, unless AI is hard.
There's a 3rd possibility: AI is not super hard, say 50 yrs away, but species tend to get filtered when they are right on the verge of developing AI. Which points to human extinction in the next 50 years or so.
This seems a little unlikely. A filter that only appeared on the verge of AI would likely be something technology-related. But different civs explore the tech tree differently. This only feels like a strong filter if the destructive tech was directly before superintelligence on the tree. Perhaps AI-powered weapons? Or something related to the Internet?
I wish I could speak to an expert historian on how frequently we see lopsided technology trees in the historical record. But it certainly seems like we could use an unreasonable amount of caution prior to developing any technology that creates a clear path to space colonization. If millions before us were not able to escape the maze alive, perhaps every obvious exit route should be assumed to contain a deadly trap.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2016-12-24T11:55:18.997Z · LW(p) · GW(p)
perhaps every obvious exit route should be assumed to contain a deadly trap.
But we also have to consider whether it's a true exit route, which would avoid the deadly traps that we don't understand.
comment by bogdanb · 2014-08-31T07:57:19.089Z · LW(p) · GW(p)
Once AI is developed, it could "easily" colonise the universe.
I was wondering about that. I agree with the could, but is there a discussion of how likely it is that it would decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes trying to create military AI, and intentional creation by “destroy the world” extremists or something.)
But if someone is trying to create an FAI, and there is an accident with early prototypes, it seems likely that most of those prototypes would be programmed with only planet-local goals. Similarly, it doesn’t seem likely that intentionally-created weapon-AI would be programmed to care about what happens outside the solar system, unless it’s created by a civilization that already does, or is at least attempting, interstellar travel. Creators that care about safety will probably try to limit the focus, even imperfectly, both to make reasoning easier and to limit damage, and weapons-manufacturers will try to limit the focus for efficiency.
Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program it for that initially, and that simple goals can have that as an unforeseen consequence (like the prototypical paperclip manufacturer). But have we any discussion of how likely that is in a realistic setting? Perhaps the filter is that the vast majority of AIs limit themselves to their original solar system.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-01T04:54:27.592Z · LW(p) · GW(p)
Energy acquisition is a useful subgoal for nearly any final goal and has non-starsystem-local scope. This makes strong AIs which stay local implausible.
Replies from: randallsquared, bogdanb↑ comment by randallsquared · 2014-09-03T15:24:49.020Z · LW(p) · GW(p)
Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.
↑ comment by bogdanb · 2014-09-05T23:19:41.728Z · LW(p) · GW(p)
If the final goal is of local scope, energy acquisition from out-of-system seems to be mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate at. (That is, assuming no FTL and the like.)
Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but do it because it needs energy for doing something inside its system of origin?
I might see one perhaps extending to a few neighboring systems in a very dense cluster for some strange reason, but I can't imagine likely final goals (again, local to its birth star system) for which it would need to spend hundreds of millennia even taking over a single galaxy, let alone leaving it. (Which is of course no proof there isn't one; my question above wasn't rhetorical.)
I can imagine unlikely accidents causing some sort of paperclipper scenario, and maybe vanishingly rare cases where two or more AIs manage to fight each other over long periods of time, but it's not obvious to me why this class of scenarios should be assigned a lot of probability mass in aggregate.
Replies from: VAuroch↑ comment by VAuroch · 2014-09-06T08:48:40.496Z · LW(p) · GW(p)
Any unbounded goal in the vein of 'maximize the concentration of [X] in this area' has local scope but can require unbounded expenditure.
Also, as has been pointed out for satisficing goals in general (which most natural local-scale goals will be): acquiring more resources lets you keep doing the thing, to maximize the chance that you have properly satisfied your goal. Even if the target is easy to hit, becoming increasingly certain that you've hit it can consume arbitrary amounts of resources.
Replies from: bogdanb↑ comment by bogdanb · 2015-02-17T20:50:56.478Z · LW(p) · GW(p)
Both good points, thank you.
Replies from: magnus-anderson↑ comment by Magnus Anderson (magnus-anderson) · 2024-10-14T19:21:36.721Z · LW(p) · GW(p)
Alternatively, a weapon-AI builds a Dyson sphere, preventing any light from the star from escaping, eliminating the risk that a more advanced outside AI (which it can reason about much better than we can) destroys it.
Or a poor planet-local AI does the same thing.
comment by Jan_Rzymkowski · 2014-08-30T12:22:53.436Z · LW(p) · GW(p)
Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!
And I'm serious here. The zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civilization capable of galaxy colonisation, a powerful enough civilization can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civilizational development is actually to let a planet develop on its own.
Replies from: army1987↑ comment by A1987dM (army1987) · 2014-09-01T12:12:44.714Z · LW(p) · GW(p)
See the last paragraph of this.
comment by blogospheroid · 2014-08-30T11:34:55.104Z · LW(p) · GW(p)
I'd like to repeat the comment I had made at "outside in" for the same topic, the great filter.
I think our knowledge at all levels - physics, chemistry, biology, praxeology, sociology - is nowhere near the point where we should be worrying too much about the Fermi paradox.
Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of other stuff that is filler for "I don't know". We don't have physics theories that explain everything from the smallest scales to the largest.
Coming to chemistry and biology, we've still not demonstrated abiogenesis. We have not created any new base of life other than the twisty strands mother nature already prepared and gave us everywhere. We don't know the causes of mutations well enough to predict them to any extent. We simply don't know enough to fill in these gaps.
Coming to basic sustenance, we don't know the minimum requirements for a self-contained, multi-generational habitat. The Biosphere experiments were not complete in any manner.
We don't know the code for intelligence. We don't know the code for preventing our own bodily degradation.
We don't know how to run a society that balances new knowledge acquisition with sustainability. Our best centres of knowledge acquisition are IQ shredders (a term meant to highlight the fact that the most successful cities attract the highest-IQ people and reduce their fertility compared to if they had remained in small towns or rural areas), and they are not environmentally sustainable either. Patriarchy and castes work great in static societies; we don't know their equivalent in a growing knowledge society.
There are still many known ways in which we can screw up. Let's get all these basics right, repeatedly right, and only then wonder, with our new-found knowledge: according to these calculations, there is an X% chance that we should have been contacted - why are we apparently alone in the universe?
Replies from: MugaSofer↑ comment by MugaSofer · 2014-09-06T17:43:56.543Z · LW(p) · GW(p)
If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true.
We have math for calculating these things, based on the probability different options are true.
For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know for anthropic reasons.
But, in fact, we can reason about this uncertainty - we can't get rid of it, but we can quantify it to a degree. We know how soon life appeared after conditions became suitable. So we can consider what kind of frequency that would imply for abiogenesis given Earthlike conditions and anthropic effects.
This doesn't give us any new information - we still don't know how abiogenesis works - but it does give us a rough idea of how likely it is to be nigh-impossible, or near-certain.
Similarly, we can take the evidence we do have about the likelihood of Earthlike planets forming, the number of nearby stars they might form around, the likely instrumental goals most intelligent minds will have, the tools they will probably have available to them ... and so on.
We can't be sure about any of these things - no, not even the number of stars! - but we do have some evidence. We can calculate how likely that evidence would be to show up given the different possibilities. And so, putting it all together, we can put ballpark numbers to the odds of these events - "there is a X% chance that we should have been contacted", given the evidence we have now.
And then - making sure to update on all the evidence available, and recalculate as new evidence is found - we can work out the implications.
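To make the abiogenesis-timing step concrete, here is a minimal sketch of the kind of update described above (the 0.5-Gyr figure and the candidate rates are illustrative assumptions, and the anthropic correction the comment mentions is deliberately left out, so this only shows the shape of the update):

```python
import math

# Treat abiogenesis as a Poisson process with unknown rate lam (events per
# billion years on an Earthlike planet). The datum: life on Earth appeared
# within ~0.5 Gyr of conditions becoming suitable.

T_OBSERVED_GYR = 0.5                     # rough time to first life (assumed)
rates = [10 ** k for k in range(-4, 3)]  # candidate rates, log-spaced

likelihoods = [1 - math.exp(-lam * T_OBSERVED_GYR) for lam in rates]
total = sum(likelihoods)                 # flat prior over the listed rates

for lam, like in zip(rates, likelihoods):
    print(f"rate {lam:>8g} /Gyr : posterior weight {like / total:.3f}")
# Very low rates (<< 1 per Gyr) are penalised, because early abiogenesis
# would then be a big coincidence; rates above a few per Gyr all fit about
# equally well, so the datum mostly rules out the "nigh-impossible" end.
```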
comment by ChristianKl · 2014-08-29T21:21:07.117Z · LW(p) · GW(p)
Alternatively, the only stable AGIs are ones whose morality doesn't lead them to simply colonise the whole universe.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-09-01T18:02:41.646Z · LW(p) · GW(p)
Not colonising the universe - many moralities could go with that.
Allowing potential rivals to colonise the universe... that's much rarer.
comment by baturinsky · 2023-03-21T13:53:42.175Z · LW(p) · GW(p)
What if there is something that can destroy the entire universe, and every sufficiently advanced civilization eventually does it?
comment by Neil (neil-warren) · 2023-01-05T18:33:53.624Z · LW(p) · GW(p)
Another possibility is that AI wipes us out and is also not interested in expansion.
Since expansion is something inherent to living beings, and AI is a tool built by living beings, it wouldn't make sense for its goals not to include expansion of some kind (i.e. it would always look at the universe with sighing eyes, thinking of all the paperclips it represents). But perhaps, in an attempt to keep AI in line somehow, we would constrain it to a single stream of resources? In which case it would not be remotely interested in anything outside of Earth?
It is probably possible to encode no desire at all for immortality or expansion in an AI. Which means that there could be millions of AIs out there, built before any Dyson spheres, forever just sitting there as their home worlds die. A pretty chilling thought, actually. Also rather comical.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2023-01-06T11:02:56.027Z · LW(p) · GW(p)
The kind of misalignment that would have AI kill humanity - the urge for power, safety, and resources - is the same kind that would cause expansion.
Replies from: neil-warren↑ comment by Neil (neil-warren) · 2023-03-21T13:45:50.973Z · LW(p) · GW(p)
AI could eliminate us in its quest to achieve a finite end, and would not necessarily be concerned with long-term personal survival. For example, if we told an AI to build a trillion paperclips, it might eliminate us in the process then stop at a trillion and shut down.
Humans don't shut down after achieving a finite goal because we are animated by so many self-editing finite goals that there never is a moment in life where we go "that's it, I'm done". It seems to me that general intelligence does not seek a finite, measurable, achievable goal but rather a mode of being of some sort. If this is true, then perhaps AGI wouldn't even be possible without the desire to expand, because a desire for expansion may only come with a mode-of-being-oriented intelligence rather than a finite reward-oriented intelligence. But I wouldn't discount the possibility of a very competent narrow AI turning us into a trillion paperclips.
So narrow AI might have a better chance at killing us than AGI. The Great Filter could be misaligned narrow AI. This confirms your thesis.
comment by Lunin · 2014-09-05T16:28:59.049Z · LW(p) · GW(p)
There is a third alternative. The observable universe is limited, the probability of life arising from non-living matter is low, suitable planets are rare, and evolution doesn't directly optimize for intelligence. Civilizations advanced enough to build strong AI are probably just too rare to appear in our light cone.
We could have already passed the 'Great Filter' simply by existing in the first place.
Replies from: Stuart_Armstrong↑ comment by Stuart_Armstrong · 2014-09-05T20:11:41.178Z · LW(p) · GW(p)
Yes, that's exactly what an early great filter means.
comment by [deleted] · 2014-08-30T19:46:55.434Z · LW(p) · GW(p)
So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed.
Maybe AI is the Great Filter, even if it is friendly.
The friendly AI could determine that colonization of what we define as “our universe” is unnecessary or detrimental to our goals. Seems unlikely, but I wouldn’t rule it out.
Replies from: Stuart_Armstrong, None↑ comment by Stuart_Armstrong · 2014-09-01T18:02:53.527Z · LW(p) · GW(p)
Not colonising the universe - many moralities could go with that.
Allowing potential rivals to colonise the universe... that's much rarer.
comment by Kawoomba · 2014-08-29T20:50:54.186Z · LW(p) · GW(p)
A self-improving intelligence can indeed be the Great Filter, that is, if it has already reached us from elsewhere, potentially long ago.
Keeping in mind that the delta between "seeing evidence of other civilizations in the sky" (= their light-speeded signals reaching us) and "being enveloped by another civilization from the sky" (= being reached by their near-light-speeded probes) is probably negligible (order of 10^3 years being conservative?).
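A rough check of that delta (probe speeds and distances here are illustrative assumptions):

```python
# The gap between a civilization's light-speed signals arriving and its
# near-lightspeed probes arriving over distance d is d * (1/v - 1/c),
# measured in years when d is in light years and v in fractions of c.

def lag_years(distance_ly: float, probe_speed_c: float) -> float:
    return distance_ly * (1 / probe_speed_c - 1)

for distance_ly in (1_000, 10_000):
    for v in (0.99, 0.9, 0.5):
        print(f"d = {distance_ly:>6,} ly, v = {v:.2f}c -> lag ~{lag_years(distance_ly, v):,.0f} yr")
# At 0.9c over 10,000 ly the lag is ~1,100 years, so an "order of 10^3 years"
# delta does look conservative for near-lightspeed probes.
```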
Preventing us from seeing evidence, or from seeing anything at all which it doesn't want us to see, would be trivial.
Yes, I'm onto you.
comment by John_Maxwell (John_Maxwell_IV) · 2014-08-30T05:27:17.745Z · LW(p) · GW(p)
It seems possible to me that if many intelligences reached our current stage, at least a few would have neurological structures that were relatively easy to scan and simulate. This would amount to AI being "easy" for them (in the form of ems, which can be sped up, improved, etc.)
comment by cameroncowan · 2014-08-29T18:54:31.637Z · LW(p) · GW(p)
I think we can file this under "AI is hard", because you have to create an intelligence so vast that it has knowledge of its own existence as close to a priori as ours. While I agree that, once that intelligence exists, it may wish to control such vast resources and capabilities that it could quickly bring together the resources, experts, and others needed to create exploratory vehicles and begin exploring and mapping the universe, you also have to realize that life happens and our universe is a dynamic place. Ships would be lost, people would be killed, dangers would ensue, and other civilizations might not be so friendly (and believe me, guns, germs, and steel aren't going to solve it this time). A myriad of things could happen.
I would say yes to both: there is a great filter arising from the dynamism of the universe and of life, as well as the ability of AI to expand rapidly. How that comes together (or not) remains to be seen.
comment by [deleted] · 2014-08-29T16:39:53.723Z · LW(p) · GW(p)
Or the origin of life is hard.
Or the evolution of multicellular life is hard.
Or the evolution of neural systems is hard.
Or the breakaway evolution of human-level intelligence is hard.
(These are not the same thing as an early filter.)
Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.
Replies from: DanArmak, James_Miller↑ comment by DanArmak · 2014-08-29T16:52:33.729Z · LW(p) · GW(p)
(These are not the same thing as an early filter.)
Why not? I thought the great filter was anything that prevented ever-expanding intelligence visibly modifying the universe, usually with the additional premise that most or all of the filtering would happen at a single stage of the process (hence 'great').
Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.
If they haven't gotten here yet at near-lightspeed, that means their origins don't lie in our past; the question of the great filter remains to be explained.
Replies from: None↑ comment by [deleted] · 2014-08-29T17:55:23.713Z · LW(p) · GW(p)
Evolution is not an ever-expanding intelligence. Before the stage of recursively improving intelligence (human-level, at minimum), I wouldn't call it a filter. But maybe this is an argument about definitions.
Replies from: Wes_W↑ comment by Wes_W · 2014-08-29T18:16:58.388Z · LW(p) · GW(p)
You do seem to be using the term in a non-standard way. Here's the first use of the term from Hanson's original essay:
[...]there is a "Great Filter" along the path between simple dead stuff and explosive life.
The origin and evolution of life and intelligence are explicitly listed as possible filters; Hanson uses "Great Filter" to mean essentially "whatever the resolution to the Fermi Paradox is". In my experience, this also seems to be the common usage of the term.
↑ comment by James_Miller · 2014-08-29T20:16:11.372Z · LW(p) · GW(p)
If this is true then almost all civilizations at our stage of development would exist in the early universe, and the Fermi paradox becomes "why is our universe so old?"
Replies from: None↑ comment by [deleted] · 2014-08-29T20:48:42.956Z · LW(p) · GW(p)
No, that's not at all obvious. We have absolutely no idea how hard it is to evolve intelligent life capable of creating recursively self-improving machine intelligence. I named a number of transitions in my post which may in fact have been much, much more difficult than we are giving them credit for. The first two probably aren't, given what I know about astrobiology. Life is probably common, and multicellular life arose independently on Earth at least 46 times, so there's probably not something difficult there that we're missing. I don't know anything about the evolutionary origins of neurons, so I won't speculate there.
The evolution of human-level intelligence, however, in a species capable of making and using tools, really does seem to have been a freak accident. It happened on Earth only because a tribe of social primates got stuck for thousands of years under favorable but isolated living conditions due to ecological barriers, and vicious tribal politics drove runaway expansion of the cerebral cortex. The number of things which had to go just right for that to occur as it actually happened makes it a very unlikely event.
Let me rephrase what I just said: there are pressures at work which cause evolution to actually select against higher intelligence. It is only because of an ecological freak accident - one that isolated a tribe of tool-making social primates in a series of caves on a sea cliff on the coast of Africa for >50,000 years, where they were allowed to live in relative comfort and the prime selection pressure became mastery of tribal politics - that the general intelligence capability of the cerebral cortex expanded, giving us modern humans. So many factors had to go just right there that I'd say it is a very likely candidate for a great filter.
But it's all a moot point. We would expect a post-singularity expansionist intelligence to spread throughout the universe at close to the speed of light. Why don't we see other intelligences? Because if they were in our past light cone, they would have already consumed Earth and we wouldn't be here to ask the question. We should expect to see an empty sky, no matter how likely or unlikely intelligent life is.
If the question is "why is the universe old, when if we picked a random possible civilization it would more likely be an early one?" - well, the assumption there is wrong. We don't get to inject our priors onto the universe. Maybe we are one of the very uncommon late civilizations that just happened to be very far away from any of the other superintelligences, sparing us (for now). You say that is unlikely, but you are basing your probability calculations on a self-selected prior. Who knows why we exist so late. We just do. We don't get to extract information from that observation because we don't know the universe's priors.
Replies from: James_Miller↑ comment by James_Miller · 2014-08-30T17:54:15.328Z · LW(p) · GW(p)
We don't get to inject our priors onto the universe.
It is my understanding that if we have priors we absolutely must inject them onto the universe to formulate the best possible mental map of it.
Who knows why we exist so late. We just do. We don't get to extract information from that observation because we don't know the universe's priors.
But we do have lots of information about this. For example, the reason is not that earth is the only planet in our galaxy. And we have the potential of gaining lots more information such as if we find extraterrestrial life. I'm sure you don't mean to imply that if we do not have a complete understanding of a phenomenon we must ignore that phenomenon when formulating beliefs.
Replies from: None↑ comment by [deleted] · 2014-09-02T16:30:24.697Z · LW(p) · GW(p)
What I mean is that you are injecting assumptions about how you came to be a conscious entity on Earth in whatever year your mother conceived you, as opposed to any other place or time in the history of the universe.
Maybe it's true that you should assign equal probability to being each sentient being, and then it would indeed look very odd that you ended up in an early-stage civilization in an old, empty universe.
Or maybe coherent worlds count as a single state, so greater probability mass is given to later civilizations, which exist in a great many more Everett branches.
Or more likely it is something completely different. The very idea that I was 'born into' a sentient being chosen at random stinks of Cartesian dualism when I really look at it. It seems very unlikely to me that the underlying mechanism of the universe is describable at that scale.