UFAI cannot be the Great Filter

post by Thrasymachus · 2012-12-22T11:26:41.010Z · LW · GW · Legacy · 93 comments

Contents

  Introduction 
  'Where is everybody?'
93 comments

[Summary: The fact that we do not observe (and have not been wiped out by) a UFAI suggests that the main component of the 'Great Filter' cannot be civilizations like ours being wiped out by UFAI. Gentle introduction (assuming no knowledge) and links to much better discussion below.]

Introduction 

The Great Filter is the idea that although there is lots of matter, we observe no "expanding, lasting life", like space-faring intelligences. So there must be some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already 'passed' the bulk of the filter, or whether it still lies ahead. For example, perhaps it is very unlikely that matter will be able to form self-replicating units, but once it clears that hurdle, becoming intelligent and going across the stars is highly likely; or perhaps getting to a humankind level of development is not that unlikely, but very few of those civilizations progress to expanding across the stars. If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.

One concern is that advancing technology gives civilizations the ability to wipe themselves out, and that this is the main component of the Great Filter - one we are going to be approaching soon. There are several candidates for which technology will be the existential threat (nanotechnology/'grey goo', nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I take it Luke, Eliezer (and lots of folks on LW) consider AI the main existential threat.

The concern with AI is something like this:

  1. AI will soon greatly surpass us in intelligence in all domains. 
  2. If this happens, AI will rapidly supplant humans as the dominant force on planet earth.
  3. Almost all AIs, even ones we create with the intent to be benevolent, will probably be unfriendly to human flourishing.

Or, as summarized by Luke:

... AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good. (More on this option in the next post.) 

So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are 'friendly' (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.

 

'Where is everybody?'

So, topic. I read this post by Robin Hanson which had a really good parenthetical remark (emphasis mine):

Yes, it is possible that the extreme difficulty was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)

This made me realize that a UFAI should also be counted as 'expanding, lasting life', and so should be deemed unlikely by the Great Filter.

Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not be around to see anything at all, because they would have wiped us out to optimize for their unfriendly ends). That we observe neither disconfirms this conjunction.

[Edit/Elaboration: It also gives a stronger argument - since UFAI would itself be the 'expanding life' we do not see, the beliefs 'the Great Filter lies ahead' and 'UFAI is a major existential risk' stand opposed to one another: the higher your credence in the filter being ahead, the lower your credence should be in UFAI being a major existential risk (as the many civilizations like ours that go on to get caught in the filter do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk); conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us (as we don't see any UFAIs, there cannot be many civilizations like ours in the first place, since we are quite likely to produce an expanding UFAI).]

A much more in-depth article and comments (both highly recommended) were posted by Katja Grace a couple of years ago. I can't seem to find a similar discussion on here (feel free to downvote and link in the comments if I missed it), which surprises me: I'm not bright enough to figure out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1 in a googol chance of getting to intelligent life, but intelligent life 'merely' has a 1 in 10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.

What do you guys think?

93 comments

Comments sorted by top scores.

comment by [deleted] · 2012-12-22T18:39:17.234Z · LW(p) · GW(p)

What if post-singularity civilizations expand at the speed of light? Then we should not expect to see anything:

It looks like we are going to have less than 200 years between interstellar detectability and singularity. So the chance of us being around at the same time (adjusted for distance) as another civilization to a resolution of a few hundred years seems quite low.

Life will only get to the point of asking questions like these on worlds that haven't been ground up for resources, so we can only be outside the "expansion cone" of any post-singularity civilization. If the expansion cone and the light cone are close (within a few hundred years), then, given that we are outside of the expansion cone, we are probably outside the light cone as well. So the AI-as-filter hypothesis doesn't get falsified by observing no AIs.
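A toy way to put numbers on this: if a civilization has been expanding for time T at a fraction f of light speed, it is detectable out to radius cT but has already consumed everything within radius fcT, so only a fraction 1 - f³ of the detectable volume still contains observers who can see it coming. A rough sketch:

```python
# Toy model for illustration: a civilization expanding for time T at speed f*c is
# detectable out to radius c*T but has consumed everything within f*c*T, so the
# fraction of the detectable sphere with surviving observers is 1 - f^3.
for f in (0.5, 0.9, 0.99, 0.999):
    surviving_fraction = 1 - f**3
    print(f"expansion at {f:>5}c -> {surviving_fraction:.4%} of the detectable "
          f"volume can still see it coming")
```

At 0.5c most of the detectable volume can still watch the civilization approaching; at 0.999c only about 0.3% can, which is the "we can only be outside the expansion cone" point in numbers.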

It doesn't even have to be a filter, though it probably is; 100% of civilizations could successfully navigate the intelligence explosion and we would see nothing, because we can only exist in the last corner of the universe that hasn't been ground up by them.

This is all assuming lightspeed expansion. Here are a few ideas: a single nanoseed catapulted at 99.9% the speed of light, followed by a laser encoding the instructions. Cross-galaxy light-lag would be only 100 years. How to slow it down on arrival is unknown... Another possibility is actually creating spaceships out of light; some kind of super laser that would excite whatever it hit in just the right way to create a nanoseed. This one seems much less plausible, but I wouldn't bet against the engineering skill of a superintelligence.

Replies from: timtyler, Kawoomba, DanielVarga, Oligopsony, JoshuaZ
comment by timtyler · 2012-12-23T02:03:12.014Z · LW(p) · GW(p)

What if post-singularity civilizations expand at the speed of light? Then we should not expect to see anything. [...]

Then it spreads throughout the galaxy in 100,000 years. The Fermi paradox is not really affected - there are still no alien superintelligent machines visible here.

Replies from: None
comment by [deleted] · 2012-12-23T07:18:23.870Z · LW(p) · GW(p)

Why is it not affected? If we assume they expand at a negligible fraction of the speed of light, we expect them to be visible from the outside for their entire lifetime (which may be very long). On the other hand if we expect them to expand at nearly the speed of light, we expect them to be detectable from outside for only a few hundred years.

The other side of the galaxy could very well be already consumed by an alien civilization.

Replies from: timtyler
comment by timtyler · 2012-12-23T13:19:19.446Z · LW(p) · GW(p)

E.g. check with http://en.wikipedia.org/wiki/Fermi_paradox

The rate of expansion makes very little difference, and a high rate of expansion is not listed as a possible resolution.

Replies from: None
comment by [deleted] · 2012-12-23T18:13:40.712Z · LW(p) · GW(p)

That article has little about the effect of expansion. Why does it not affect it? What is wrong with my argument that it should matter?

A near-c rate of expansion drastically reduces the volume of space that a given civilization is observable from. What specifically is wrong with this?

Replies from: CarlShulman
comment by CarlShulman · 2012-12-23T18:38:00.013Z · LW(p) · GW(p)

If intelligent life was common and underwent such expansion, then there would be very few new-arising lonely civilizations later in the history of the universe (the real estate for their evolution already occupied or filled with signs of intelligence). The overwhelming majority of civilizations evolving with empty skies would be much younger.

So, whether you attend to the number of observers with our observations, or the proportion of all observers with such observations, near-c expansion doesn't help resolve the Fermi paradox.

Another way of thinking about it is: we are a civilization that has developed without seeing any sign of aliens, developed on a planet that had not been colonized by aliens. Colonization would have prevented our observations just as surely as alien transmissions or a visibly approaching wave of colonization.

Replies from: None
comment by [deleted] · 2012-12-23T21:31:54.549Z · LW(p) · GW(p)

I still don't get it.

Assume life is rare/filtered: we straightforwardly expect to see what we see (empty sky).

Assume life is common and the singularity comes quickly and reliably, and colonization proceeds at the speed of light, then condition on the fact that we are pre-singularity. As far as I can tell, a random young civilization still expects empty skies, possibly slightly less because of the relatively small volume of spacetime where we would observe an approaching colonization wave.

So the observation of empty skies is only very weak evidence against life being common, given that this singularity stuff is sound.

The latter hypothesis is more specific, but I already believe all those assumptions (quick, reliable, and near-c).

Given that I take those singularity assumptions seriously (not just hypothetically), and given that we are where we are in the history of the universe, the Fermi paradox seems resolved for me; I find it unlikely that a given young civilization would observe any other civilization, no matter the actual rate of life. If we did observe another isolated civilization, it would pretty much falsify my "quick, reliable, and lightspeed" singularity belief.

However, as you say, that "given that we are where we are in the history of the universe" is worrying. I predict most young civilizations to be early (because the universe gets burned up quickly), and I predict most civilizations to not be young, given that life is common. When we observe ourselves to be young and late (are we actually late?), Fermi's paradox results. I guess in this case Fermi's paradox is that we observed something that is a priori unlikely, and we wonder what unlikely alternate hypotheses this digs up (the above, for one). However, anthropics is very confusing...

Replies from: timtyler, CarlShulman
comment by timtyler · 2012-12-23T23:25:40.409Z · LW(p) · GW(p)

I predict most young civilizations to be early (because the universe gets burned up quickly), and I predict most civilizations to not be young, given that life is common. When we observe ourselves to be young and late (are we actually late?), Fermi's paradox results.

Fermi's paradox also makes mention of the fact that there are billions of stars in the galaxy that are billions of years older than ours, many of them having habitable planets. Some reasons have prevented any of these from spawning a galactic colonization wave - and those reasons are of interest to us.

comment by CarlShulman · 2012-12-23T21:38:17.589Z · LW(p) · GW(p)

When we observe ourselves to be young and late (are we actually late?), Fermi's paradox results.

Yes and yes.

comment by Kawoomba · 2012-12-22T22:09:42.237Z · LW(p) · GW(p)

Here are a few ideas: a single nanoseed catapulted at 99.9% the speed of light, followed by a laser encoding the instructions. Cross-galaxy light-lag would be only 100 years. How to slow it down on arrival is unknown... Another possibility is actually creating spaceships out of light; some kind of super laser that would excite whatever it hit in just the right way to create a nanoseed.

As envisioned in Olaf Stapledon's classic Last and First Men [free here]:

(Read, then guess the year of publication!)

In respect of the future, we are now setting about the forlorn task of disseminating among the stars the seeds of a new humanity. For this purpose we shall make use of the pressure of radiation from the sun, and chiefly the extravagantly potent radiation that will later be available. We are hoping to devise extremely minute electro-magnetic "wave-systems," akin to normal protons and electrons, which will be individually capable of sailing forward upon the hurricane of solar radiation at a speed not wholly incomparable with the speed of light itself. This is a difficult task. But, further, these units must be so cunningly inter-related that, in favourable conditions, they may tend to combine to form spores of life, and to develop, not indeed into human beings, but into lowly organisms with a definite evolutionary bias toward the essentials of human nature. These objects we shall project from beyond our atmosphere in immense quantities at certain points of our planet's orbit, so that solar radiation may carry them toward the most promising regions of the galaxy. The chance that any of them will survive to reach their destination is small, and still smaller the chance that any of them will find a suitable environment. But if any of this human seed should fall upon good ground, it will embark, we hope, upon a somewhat rapid biological evolution, and produce in due season whatever complex organic forms are possible in its environment. It will have a very real physiological bias toward the evolution of intelligence. Indeed it will have a much greater bias in that direction than occurred on the Earth in those sub-vital atomic groupings from which terrestrial life eventually sprang.

Year of publication? Rot13: avargrra guvegl!

comment by DanielVarga · 2012-12-24T01:23:47.338Z · LW(p) · GW(p)

Here are a couple of scattered short LW comments where I discussed this possibility and considered counterarguments and implementations.

Replies from: None
comment by [deleted] · 2012-12-24T01:44:27.544Z · LW(p) · GW(p)

Interesting. You seem to have exactly the same thoughts as me.

How do you think one might slow down a 0.999c von Neumann probe at the destination?

Replies from: DanielVarga, Pentashagon
comment by DanielVarga · 2012-12-25T17:11:36.075Z · LW(p) · GW(p)

I am not a physicist, so I didn't and couldn't do the calculations, but I don't really believe that classic probes can reach .999c. They would be pulverised by intergalactic material. Even worse, literal .999c would not be fast enough for this fancy "hits us before we know it" filter idea to work. As I explained in some of the above-quoted threads, my bet would definitely be on the things you called "spaceships out of light". A sufficiently advanced civilisation might switch from atoms to photons as their substrate. The only resource they would extract from the volume of space they consume would be negentropy, so they wouldn't need any slowing down or seeds. Again, I am not a physicist. I discussed this with some physicists, and they were sceptical, but their objections seemed to be of the engineering kind, not the theoretical kind, and I'm not sure they sufficiently internalized "don't bet against the engineering skill of a superintelligence".

For me, one source of inspiration for this light-speed expansion idea was Stanisław Lem's "His Master's Voice", where precisely tuned radio waves are used to catalyse the formation of DNA-based life on distant planets. (Obviously that's way too slow for the purposes we discuss here.)

Replies from: Oscar_Cunningham, None
comment by Oscar_Cunningham · 2013-01-12T14:30:48.865Z · LW(p) · GW(p)

photons as their substrate

Photons can't interact with each other (by the linearity of Maxwell's equations) and so can't form a computational substrate on their own. This doesn't rule out "no atoms" computing in general though.

EDIT: I'm wrong. When you do the calculations in full quantum field theory there is an (extremely) slight interaction (due to creation and destruction of electron-positron pairs, which in some sense destroys the linearity). I don't know if this is enough to support computers.

comment by [deleted] · 2012-12-26T19:12:03.983Z · LW(p) · GW(p)

They would be pulverised by intergalactic material.

That's actually concerning. Maybe it isn't possible to shoot matter intact across the galaxy... Would have to do the calculations with interstellar particle density.

Also, surely you mean "interstellar"? I was only thinking of interstellar travel for now; assuming intergalactic is impossible or whatever.

Even worse, literal .999c would not be fast enough for this fancy "hits us before we know it" filter idea to work.

Not for intergalactic, but the galaxy is 100k lightyears across. 0.999c would get you a lag behind the light of 100 years, which is on the same order of magnitude as the time between detectability and singularity (looks like < 200 years for us).

A sufficiently advanced civilisation might switch from atoms to photons as their substrate. The only resource they would extract from the volume of space they consume would be negentropy, so they wouldn't need any slowing down or seeds.

How would one eat a star without slowing down, even in principle?

precisely tuned radio waves are used to catalyse the formation of DNA-based life on distant planets. (Obviously that's way too slow for the purposes we discuss here.)

This is closer to what I was thinking, but of course if you can catalyze DNA, you can catalyze arbitrary nanomachines. Exactly how this would work is a mystery to me... (also, doing it with radio waves is needlessly difficult; surely you'd use something precise and ionizing like UV, X-rays, or gamma)

Replies from: DanielVarga
comment by DanielVarga · 2012-12-30T11:10:12.848Z · LW(p) · GW(p)

Also, surely you mean "interstellar"? I was only thinking of interstellar travel for now; assuming intergalactic is impossible or whatever.

When you look at it from a Fermi paradox perspective, you have to be able to account for many hundred million years of expansion, because there can be many civilizations that are that much older than us. We are talking about some crazy thing that is supposed to be able to consume a galaxy with almost-optimal speed. I don't expect galaxy boundaries to stop it completely, neither by intention nor by necessity. I am not even sure that it has to treat intergalactic space as the long boring travel between the rare interesting parts. Maybe all it really needs is empty space.

0.999c would get you a lag behind the light of 100 years, which is on the same order of magnitude as the time between detectability and singularity (looks like < 200 years for us).

Interesting point.

How would one eat a star without slowing down, even in principle?

Note that I speculated about photons as a substrate. Maybe major reorganization of atoms is unnecessary, and it can just fill the space around the star, and utilize the star as a photon source.

comment by Pentashagon · 2012-12-24T06:55:29.793Z · LW(p) · GW(p)

Fire a particle accelerator that can fire a smaller von Neumann probe at -.999c. The particle accelerator could be built and assembled during the trip if it's too unwieldy to fire directly.

comment by Oligopsony · 2012-12-26T22:47:41.647Z · LW(p) · GW(p)

An implicit assumption here is that alien civilizations have an observation weight of zero.

If complex space-faring civilizations have spread across the galaxy to produce lots of observers capable of anthropic reasoning, why aren't we in one?

If they don't, doesn't that just reframe the Filter? Technological evolution into Blindsight-style Scramblers sure sounds like extinction to me.

comment by JoshuaZ · 2012-12-23T02:31:22.011Z · LW(p) · GW(p)

What if post-singularity civilizations expand at the speed of light?

This is sort of valid, but it is extremely unlikely. Even if expansion occurs at, say, 99% of the speed of light, the problem will still exist. One needs to be expanding extremely close (implausibly close?) to the speed of light for this explanation to work.

Replies from: None
comment by [deleted] · 2012-12-23T07:30:58.599Z · LW(p) · GW(p)

(implausibly close?)

We have particle accelerators that achieve Lorentz factors of 7,500. I proposed a Lorentz factor of 22. Never mind a superintelligence, we are on the brink of being able to accelerate nanomachines to that speed (assuming we had nanomachines).

The only implausible thing is being able to decelerate non-destructively at the target, and none of us have given that even 5 whole minutes of serious thought, never mind a couple trillion superintelligent FLOPS.
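For what it's worth, both numbers in this exchange check out; here is a quick sketch (the 100,000 light-year galactic diameter is the round figure used earlier in the thread):

```python
import math

v = 0.999                      # probe speed as a fraction of c
gamma = 1 / math.sqrt(1 - v**2)
print(f"Lorentz factor at {v}c: {gamma:.1f}")    # ~22.4, the 'factor of 22' above

diameter_ly = 100_000          # rough galactic diameter in light years
lag_years = diameter_ly / v - diameter_ly        # probe arrival time minus light arrival time
print(f"Probe lags its own light by ~{lag_years:.0f} years across the galaxy")
```

So reaching a Lorentz factor of 22 is, for individual particles at least, far below what existing accelerators already achieve; as noted, the hard parts are the payload and the deceleration.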

Replies from: Armok_GoB, CannibalSmith
comment by Armok_GoB · 2012-12-31T19:14:07.457Z · LW(p) · GW(p)

A nanobot is hard to decelerate, but a robust femtobot might do better.

Hmm, using the femtobot: would it being charged and entering a conductive material slow it down due to that induction thingy, like a magnet dropped down a copper tube? Or maybe having a conductive, right-shaped bot, and launching it into the ludicrously strong magnetic field of a neutron star or something?

Another option is to launch a black hole in front of it, and give both the probe and black hole extremely strong negative charge; the black hole will absorb impacting matter (also solving the problem of interstellar dust) slowing it down by averaging, simultaneously clearing a safe path for the probe and gently pushing it back as it gets closer and the charges repel.

Replies from: None
comment by [deleted] · 2012-12-31T22:45:08.550Z · LW(p) · GW(p)

Femto? Explain.

The black hole idea is interesting. Does it even have to be a black hole? Any big non-functional absorbent mass at the front would do, right? Maybe only a black hole would be reliable...

Maybe not even a mass. If the probe had a magnetic field, you might be able to do things with the bussard ramjet idea to slow you down and control (charged) collisions.

Replies from: Armok_GoB
comment by Armok_GoB · 2013-01-01T17:47:27.197Z · LW(p) · GW(p)

not very good but good enough: http://en.wikipedia.org/wiki/Femtotech

And I was just brainstorming, your guess is as good as mine. But yeah, a tiny neutron star might work.

comment by CannibalSmith · 2012-12-25T13:32:39.516Z · LW(p) · GW(p)

Here are my five minutes: nanomachines need to carry a charge to be accelerable, right? Well, it works the other way too - they will decelerate on their own in the destination's Van Allen belts.

Replies from: None
comment by [deleted] · 2012-12-26T18:58:25.808Z · LW(p) · GW(p)

They don't actually decelerate in the Van Allen belts, though. Magnetic fields apply a force to a charged particle perpendicular to its direction of motion, so F·v = deceleration power = 0. Also worth noting that a charged nanomachine has a much higher mass/charge ratio than the usual charged particles (He2+, H+, and e-), so it would be much less affected.

I was actually thinking of neutralizing the seed at the muzzle to avoid troublesome charge effects.

comment by Izeinwinter · 2013-01-05T14:21:07.203Z · LW(p) · GW(p)

Hmm.

Okay, filters that would produce results consistent with observation.

1: Politics: Aka: "The Berserker's Garden". The first enduring civilization in our galaxy rose many millions of years before the second, and happened to be both highly preservationist and highly expansionist. They have outposts buried in the deep crust of every planet in the galaxy, including Earth. Whenever a civilization arises that is both inclined and able to turn the entire galaxy into fast food joints/smiley faces/etc., said civilization very suddenly disappears. The Berserkers cannot be fought, and cannot be fooled, because they have been watching the entirety of history, and their contingency plans for us predate the discovery of fire. If we are really lucky, they will issue a warning before annihilating us.

2: Physics is booby-trapped: One of the experiments every technological civilization inevitably conducts while exploring the laws of the universe has an unforeseeable, planet-wrecking result. We are screwed.

3: Economics: The minimal mass of a technological "ecology" capable of sustaining itself outside of a compatible biosphere is just too large to fit into a starship. The interlocking chains of expertise, material extraction and recycling, energy production and so on, and so forth, flat out cannot be compacted down enough to be moved. No such thing as a von Neumann probe or a colony ship can be built. Civilizations expand to the limit of how far spare parts and help can be sent, and then halt.

4: Diversion: Advancing tech opens "frontiers" much, much more attractive than star flight before starflight becomes possible. Alternate timeline gates, uploads into the underlying computational substrate of the universe, etc., etc.

5: Anthropic engineering: Advanced civilizations have proof of the many-worlds and anthropic principles - and use them. Which from outside any given circle of coordination looks like collective suicide. So the universe is full of empty planets, and every civilization has all of it to themselves.

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-12-29T19:07:58.314Z · LW(p) · GW(p)

I'm from the future & I just want to thank you for these unusual solutions.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-23T00:02:13.996Z · LW(p) · GW(p)

AFAIK the idea that "UFAI exacerbates, and certainly does not explain, the question of the Great Filter" is standard belief among SIAI rationalists (and always has been, i.e., none of us ever thought otherwise that I can recall).

Replies from: None
comment by [deleted] · 2012-12-23T13:45:27.481Z · LW(p) · GW(p)

I was just going to quote your comment on Overcoming Bias to emphasise this.

AFAIK, all SIAI personnel think and AFAIK have always thought that UFAI cannot possibly explain the Great Filter; the possibility of an intelligence explosion, Friendly or unFriendly or global-economic-based or what-have-you, resembles the prospect of molecular nanotechnology in that it makes the Great Filter more puzzling, not less. I don't view this as a particularly strong critique of UFAI or intelligence explosion, because even without that the Great Filter is still very puzzling - it's already very mysterious.

I think some people may be misinterpreting you as believing this because many people understand your advocacy as implying "UFAI is the biggest baddest existential risk we need to deal with". Assuming a late filter not explained by UFAI suggests there is an unidentified risk in our future that is much likelier than an uncontrolled intelligence explosion.

Replies from: CarlShulman, Eliezer_Yudkowsky, timtyler
comment by CarlShulman · 2012-12-23T19:36:12.372Z · LW(p) · GW(p)

Assuming a late filter

That's a big assumption, both uncertain and decisive if made.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-23T21:53:24.524Z · LW(p) · GW(p)

I think some people may be misinterpreting you as believing this because many people understand your advocacy as implying "UFAI is the biggest baddest existential risk we need to deal with".

It is; I don't particularly think the answer to the Great Filter is a Bigger Threat that comes after this. There's a possibility that most species like ours happen to be inside the volume of some earlier species's "F"AI's enforced Prime Directive with a restriction threshold (species are allowed to get as far as ours, but are not allowed to colonize galaxies) but if so I'm not sure what our own civilization ought to do about that. I suspect, and certainly hope, that there's actually a hidden rarity factor.

But I do think some fallacy of the form, "This argument would make UFAI more threatening - therefore UFAI-fearers must endorse it - but the argument is wrong, ha ha!" might have occurred.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-24T02:59:13.347Z · LW(p) · GW(p)

But I do think some fallacy of the form, "This argument would make UFAI more threatening - therefore UFAI-fearers must endorse it - but the argument is wrong, ha ha!" might have occurred.

I think this is it. However, there are at least a few enthusiasts, even if they are relatively peripheral, who do tend to engage in such indiscriminate argument. Sort of like internet skeptics who confabulate wrong arguments for true skeptical conclusions in the course of comment thread combat that the scientists they are citing would not endorse.

comment by timtyler · 2012-12-23T20:33:12.056Z · LW(p) · GW(p)

Assuming a late filter not explained by UFAI suggests there is an unidentified risk in our future that is much likelier than an uncontrolled intelligence explosion.

What has prevented local living systems from colonising the universe so far has been delays - not risks.

comment by falenas108 · 2012-12-22T20:13:50.407Z · LW(p) · GW(p)

And this disaster can’t be an unfriendly super-AI, because that should be visible

This is not necessarily true. If the goals of the AI do not involve a rapid acquisition of resources even outside its solar system, then we would not see evidence for it (e.g., wireheading that does not involve creating as many sentient organisms as possible).

However, because there would be many instances of this, AI being the filter is probably still not likely. If it's very likely for UFAI to be screwed up in a self-contained way, we would not expect to see evidence of life. If UFAI has a non-negligible chance to gobble up everything it sees for energy, then we would expect to see it.

Replies from: fubarobfusco, None
comment by fubarobfusco · 2012-12-23T19:27:06.732Z · LW(p) · GW(p)

wireheading that does not involve creating as many sentient organisms as possible

Sure. For instance, consider the directive "Make everyone on Earth live as happily as possible for as long as the sun lasts." Solution: Wirehead everyone, then figure out how to blow up the sun, then shut down — mission accomplished.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-24T02:55:00.701Z · LW(p) · GW(p)

Not if the system is optimizing for the probability of success and can cheaply send out probes to eat the universe and use it to make sure the job is finished lest something go wrong (e.g. the sun-destroyer [???] failed, or aliens resuscitate the Sun under whatever criterion of stellar life is used).

Replies from: fubarobfusco
comment by fubarobfusco · 2012-12-24T09:04:58.818Z · LW(p) · GW(p)

"Our analysis of the alien probe shows that its intended function is to ... um ... go back to a star that blew up ten thousand years ago and make damn sure that it's blown up and not just fooling."

comment by [deleted] · 2014-05-30T09:34:58.766Z · LW(p) · GW(p)

A UFAI that doesn't go around eating stars to make paper-clips is probably already someone's attempted FAI. Bringing arbitrarily large sums of mass-energy and negentropy under one's control is a Basic AI Drive, so you have to program the utility function to actually penalize it.

Replies from: falenas108
comment by falenas108 · 2014-05-30T13:55:38.865Z · LW(p) · GW(p)

Only if the AI has goals that both require additional energy, and don't have a small, bounded success condition.

For example, if a UFAI for humans has a goal that requires humans to be there, but is not allowed to create (or lead to the creation of) more, then if all humans are already dead it won't do anything.

comment by Oscar_Cunningham · 2012-12-22T11:50:32.476Z · LW(p) · GW(p)

For galactic civilisations I'd guess that there would be a strong first-mover advantage. If one civilisation (perhaps controlled by an AI) started expanding 1000 years before another, then any conflict between them would likely be won by the civilisation that started capturing resources first.

But what if none of them know which of them expanded first? There might be several forces colonising the galaxy, and all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn't observe it colonising the galaxy.

Replies from: timtyler, None
comment by timtyler · 2012-12-22T14:15:02.281Z · LW(p) · GW(p)

The same way different species hid from each other to avoid being wiped out? You can't expand and hide. And you must expand - so you can't realistically hide.

Replies from: benelliott
comment by benelliott · 2012-12-22T14:57:19.261Z · LW(p) · GW(p)

The fairly obvious distinction is that species do not have central planning, or even sophisticated communication between individual members. Civilisations can and do have such things, and AIs also seem likely to.

Replies from: timtyler
comment by timtyler · 2012-12-22T15:39:38.427Z · LW(p) · GW(p)

Civilisations could hide - if they were stupid - or if they didn't care about the future. I wasn't suggesting hiding was a strategy that was not available at all - just that it would not be an effective survival strategy.

Replies from: benelliott, Kawoomba
comment by benelliott · 2012-12-22T16:28:25.018Z · LW(p) · GW(p)

That may be the case, but the fact that species did not hide is no evidence for it, as species could not realistically have hidden even if it would have been optimal.

Also, I would like to point out that where the hiding strategy is feasible, on the individual level, it is very common among animals.

In fact, arguably we do have a few cases of species 'hiding', such as the coelacanth, which has gone over a hundred million years without much significant evolution, vastly longer than most species survive.

Basically, I do not see any reason to believe either of the following assertions.

You can't expand and hide.

you must expand

Replies from: timtyler
comment by timtyler · 2012-12-23T23:52:15.836Z · LW(p) · GW(p)

Perhaps I overstated my case. Hiding and expanding are different optimisation targets that pull in pretty different directions. It is challenging to expand while hiding effectively - since farming suns tends to leave a thermodynamic signature which is visible from far away and is expensive to eliminate. I expect civilizations will typically strongly prioritize expanding over hiding.

Replies from: benelliott
comment by benelliott · 2012-12-24T01:15:27.071Z · LW(p) · GW(p)

Okay, in that case I now agree with the first part of your claim, I will accept that there is certainly a trade-off, perfect expanding does not involve much hiding and perfect hiding does not involve much expanding.

So, to move on to the other side, why do you expect expanding to be more prevalent than hiding? It seems to me that Oscar_Cunningham's argument for why hiding might be preferable is quite convincing, or at any rate reduces it to a non-trivial problem of game theory and risk aversion.

Replies from: timtyler
comment by timtyler · 2012-12-24T01:36:14.972Z · LW(p) · GW(p)

Camouflage is pretty unlikely to be an effective defense against an oncoming colonisation wave. I figure the defense budget will be spent more on growth and weapons than camouflage.

Replies from: benelliott
comment by benelliott · 2012-12-24T11:19:25.979Z · LW(p) · GW(p)

For reasons of technological progress, I suspect a hiding civilisation could destroy a younger expanding civilisation before it was hit by the colonisation wave. If this is the case, it becomes a matter of how likely you are to be the oldest civilisation, how likely the oldest civilisation is to expand or hide, and how much you value survival relative to growth. If the first is low and the last is high, then hiding seems like quite a good strategy.

Replies from: timtyler
comment by timtyler · 2012-12-24T12:04:16.169Z · LW(p) · GW(p)

For reasons of technological progress, I suspect a hiding civilisation could destroy a younger expanding civilisation before it was hit by the colonisation wave.

That sounds like the wave's leading edge to me.

If this is the case, it becomes a matter of how likely you are to be the oldest civilisation, how likely the oldest civilisation is to expand or hide, and how much you value survival relative to growth. If the first is low and the last is high, then hiding seems like quite a good strategy.

The issues as I see them are different. Much depends on whether progress "maxes out". If it doesn't, the most mature civilization probably just wins - in which case, hiding is irrelevant. If the adversaries are well matched they may attempt to find some other resolution besides a big fight which could weaken both of them. Again, hiding won't help.

IMO, assuming the oldest civilization is in hiding is not a good way to start analysing this issue.

Replies from: benelliott
comment by benelliott · 2012-12-24T12:51:23.300Z · LW(p) · GW(p)

That sounds like the wave's leading edge to me.

I am confused by this sentence, and cannot parse what it means.

Much depends on whether progress "maxes out". If it doesn't, the most mature civilization probably just wins

If it knows that it's the oldest, then yes, it wins. The whole point of Oscar_Cunningham's comment is that it might not know this.

To model it as a simple game, 100 people are all put in separate rooms. One of them is designated as the 'big player'; nobody knows who that is, including them. Each has two choices: expand or hide. If the big player expands then they receive a large pay-off and everyone else gets nothing. If the big player hides, then everyone who hides gets a small pay-off, and everyone who expands gets nothing.

Obviously much depends on the relative size of the large and small pay-offs, but it is not trivially obvious to me that expanding is the optimal strategy here.
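A minimal sketch of that toy game under linear utility (the payoffs B and s below are arbitrary placeholders): a lone deviator who expands while everyone else hides only collects the big payoff if they happen to be the big player, so all-hide is a Nash equilibrium exactly when the sure small payoff s is at least B/N, while all-expand is always an equilibrium because a lone hider gets at most s/N in expectation.

```python
# Toy expand-vs-hide game: N players, one uniformly random (unknown) 'big player'.
# B and s are illustrative placeholder payoffs, with B the large one.
def equilibrium_check(N, B, s):
    deviate_expand = B / N    # expand while all others hide: win only if you are the big player
    stay_hidden = s           # all hide: the big player hides too, so every hider gets s
    deviate_hide = s / N      # hide while all others expand: pays only if you are the big player
    stay_expand = B / N
    return stay_hidden >= deviate_expand, stay_expand >= deviate_hide

N, B, s = 100, 1000.0, 5.0
all_hide_ne, all_expand_ne = equilibrium_check(N, B, s)
print(f"all-hide is a Nash equilibrium:   {all_hide_ne}  (s={s} vs B/N={B/N})")
print(f"all-expand is a Nash equilibrium: {all_expand_ne}  (B/N={B/N} vs s/N={s/N})")
```

With these numbers only all-expand is an equilibrium; raise s above B/N (say s = 20) and both are, which is where the relative size of the pay-offs does the work.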

If the adversaries are well matched they may attempt to find some other resolution besides a big fight which could weaken both of them. Again, hiding won't help.

Against an equally matched foe, attempts to negotiate are inherently highly risky: if negotiations break down, then one of them may well destroy the other. Given the possibility of a first-mover advantage, one civilisation may decide to attack if negotiations merely look likely to break down; applying the game theory backwards, we get an extremely volatile situation where as soon as anything ceases to go absolutely perfectly, both sides attack. Hiding from an equally matched civilisation may well be much safer than trying to talk to them.

Furthermore, if you become aware of an equally matched civilisation hiding from you, it may be better to continue to pretend you are not aware, rather than opening negotiations straight away. This may go to rather high levels of I-know-you-know-I-know, and as long as mutual knowledge isn't attained both can survive.

Replies from: Armok_GoB
comment by Armok_GoB · 2012-12-31T19:47:00.068Z · LW(p) · GW(p)

The more realistic version of that game always favours expanding, since we know the total payout on expanding is greater than the total on hiding, and the big player is allowed to share the resources equally if she wants to.

Replies from: benelliott
comment by benelliott · 2013-01-01T01:57:32.114Z · LW(p) · GW(p)

I fail to see that this carries.

We modify the game to make sure that the pay-off for successful expanding exceeds all the hiding pay-offs put together, and we also allow the big player, after the fact, to share their expanding pay-out if they want to.

Clearly, not sharing produces a higher pay-off than sharing, so the big player will not do this. Negotiating in advance doesn't work, as it requires revealing yourself, so once you've done that you've made your move and it's no longer "in advance".

If a civilisation's utility is linear in its size, then it is always wise for it to expand. If it is risk-averse, which seems plausible (most of us would not accept a plan which had a 50% chance of colonising Mars and a 50% chance of wiping out humanity), then it may still be wise to hide. If all civilisations are risk-averse, hiding is a Nash equilibrium.
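A quick extension of the toy game sketch above with a concave utility function (sqrt, purely as an illustration; same placeholder payoffs): deviating from all-hide is a 1-in-N gamble on the big payoff, and risk aversion punishes that gamble heavily.

```python
import math

N, B, s = 100, 1000.0, 5.0    # same illustrative placeholder payoffs as before

def all_hide_is_equilibrium(u):
    # All-hide is stable if the sure small payoff beats the expected utility of
    # gambling on being the big player by expanding.
    return u(s) >= (1 / N) * u(B) + (1 - 1 / N) * u(0.0)

print("linear utility:    ", all_hide_is_equilibrium(lambda x: x))   # False: 5 < 1000/100
print("risk-averse (sqrt):", all_hide_is_equilibrium(math.sqrt))     # True: 2.24 > 0.32
```

So even with payoffs where a risk-neutral civilisation would expand, a mildly risk-averse one prefers to stay hidden, which is the sense in which all-risk-averse hiding can be a Nash equilibrium.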

Replies from: Armok_GoB
comment by Armok_GoB · 2013-01-01T17:36:42.266Z · LW(p) · GW(p)

The total payout must be higher because in the hiding scenario a lot of negentropy is wasted into nowhere, in the natural lifecycle of stars and the like. The universe is a pie of a fixed size, but one that gradually rots away if you take too long deciding who gets to eat it.

And it might be the case that nobody in fact chose to share, but due to game theory it still matters that they had the option.

Also, things like TDT allow for coordination even while hiding, and in fact that seems to be one of the assumptions behind this thing in the first place.

Replies from: benelliott
comment by benelliott · 2013-01-01T21:50:48.393Z · LW(p) · GW(p)

The total payout must be higher because in the hiding scenario a lot of negentropy is wasted into nowhere, in the natural lifecycle of stars and the like. The universe is a pie of a fixed size, but one that gradually rots away if you take too long deciding who gets to eat it.

I wasn't disputing this.

And it might be the case that nobody in fact chose to share, but due to game theory it still matters that they had the option.

Game theory is not magic. If there is an option that nobody intends to take, and everyone knows that no-one intends to take it, and everyone knows that, etc, then this option has no effect on the game.

Also, things like TDT allow for coordination even while hiding, and in fact seems to be one of the assumptions behind this thing in the first place.

This is more promising, but I would be a lot more convinced if I saw the logic actually worked through, rather than just using "TDT, therefore everyone is nice" as a magic wand.

comment by Kawoomba · 2012-12-22T19:15:24.772Z · LW(p) · GW(p)

Obligatory "extraterrestrial superintelligence has put a planetarium-like illusion around the Earth that appears and behaves exactly like real spacetime would in the absence of extraterrestrials."

Replies from: timtyler
comment by timtyler · 2012-12-22T22:15:42.678Z · LW(p) · GW(p)

That's quite a different situation from the one in the context, which was:

There might be several forces colonising the galaxy, and all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation.

Replies from: Kawoomba
comment by Kawoomba · 2012-12-22T22:20:13.560Z · LW(p) · GW(p)

Yes, I did not link to that to either refute or support your point; I was merely mentioning an interesting article on the "civilizations in hiding" tangent. For the public good, you know.

comment by [deleted] · 2012-12-22T20:24:35.713Z · LW(p) · GW(p)

But what if none of them know which of them expanded first? There might be several forces colonising the galaxy, and all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn't observe it colonising the galaxy.

This requires either:

  • Interstellar travel is much slower than seems to be possible (a non-trivial fraction of the speed of light)
  • "Colonizing" or rather fully exploiting the resources of a star system or other object takes a long time and is also for a long time more economical than just expanding again to grab the low hanging fruit a few light years away.
  • That no civilization in our galaxy has a head start long enough to win. My best estimate is that a few hundred thousand years before any other is more than enough.

It seems much likelier that we are alone in the galaxy. Either civilizations are pretty rare or we are the oldest one. If the latter is true, this seems like anthropic evidence in favour of the simulation hypothesis.

Your argument works much better on a much larger scale, for example it does take millions of years for light to travel between galaxies.

But what if none of them know which of them expanded first? There might be several forces colonising the Virgo Supercluster, and all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn't observe it colonising the Virgo Supercluster.

A ~110 or ~200 million year head start for intelligent-civilization-building life on an Earth-like planet still doesn't seem obviously unlikely.

But what if none of them know which of them expanded first? There might be several forces colonising the visible universe, and all keeping extremely quiet so that they don't get noticed and destroyed by an older civilisation. Thus no need for a great filter, and even if UFAI were common we wouldn't observe it colonising the visible universe.

This is almost certainly true, but at these scales the speed limit of the universe is a potent ally. By the time anyone notices you are doing anything many hundreds of millions of years of you already doing whatever you wanted to do with the local matter have passed.

Also see metric expansion of space. The farther away an object is, the faster it recedes from us.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-24T06:19:15.654Z · LW(p) · GW(p)

It seems much likelier that we are alone in the galaxy. Either civilizations are pretty rare or we are the oldest one. If the latter is true this seems anthropic evidence in favour of the simulation hypothesis.

Or it could be anthropic evidence that the first mover advantage is so large that the first civilization to expand prevents all others from even developing.

comment by Academian · 2012-12-27T07:01:25.744Z · LW(p) · GW(p)

The relevant notion of intelligence for a singularity is optimization power, and it's not obvious that we aren't already witnessing the expansion of such an intelligence. You may have already had these thoughts, but you didn't mention them, and I think they're important to evaluating the strength of evidence we have against UFAI explosions:

What do agents with extreme optimization power look like? One way for them to look is a rapidly-expanding, space-and-resource-consuming process which at some point during our existence engulfs our region of space and destroys us - which we haven't seen happen yet. Another way for them to look is such an optimization process that has already engulfed our region of space. And if that were the case, we would necessarily be a byproduct of or noise term in that process, and the laws of physics actually meet that description.

(I had this thought when I first asked myself to look for naturally-occurring examples of extremely powerful optimization processes, and the laws of physics were the best examples I could think of. E.g., an agent wishing to minimize the L^2 norm of the differential of energy with respect to time would produce an environment that conserves energy.)

It's difficult and conjunction-fallacious to come up with an agent whose utility function would exactly be to have an environment that follows our particular laws of physics with our particular initial conditions, so one should not update heavily in favor of this idea based on observing the laws of physics to be true. What I'm saying is that we have only seen evidence against UFAIs that could have expanded from regions of nearby space and interrupted human progress, not UFAIs that could have expanded before and perhaps caused the existence of the observable universe.

Replies from: scarcegreengrass
comment by scarcegreengrass · 2016-12-29T19:13:07.405Z · LW(p) · GW(p)

This is similar to the Burning the Cosmic Commons paper by Robin Hanson, which considers whether the astronomical environment we observe might be the leftovers of a migrating extraterrestrial ecosystem that left a long time ago.

comment by [deleted] · 2012-12-23T01:03:44.086Z · LW(p) · GW(p)

I take issue with the assumption that the only two options are perpetual expansion of systems derived from an origin of life out into the universe, and destruction via some filter. The universe just might not be clement to expansion of life or life's products beyond small islands of habitability, like our world.

You cannot assume that the last few hundred years of our history is typical, or that you can expect similar exponentiation into the future. I would argue that it is a fantastic anomaly and regression to the mean is far more likely.

Replies from: RomeoStevens, timtyler
comment by RomeoStevens · 2012-12-23T19:00:36.442Z · LW(p) · GW(p)

it doesn't need to be for us, only for things we build.

comment by timtyler · 2012-12-23T01:52:20.363Z · LW(p) · GW(p)

I take issue with the assumption that the only two options are perpetual expansion of systems derived from an origin of life out into the universe, and destruction via some filter. The universe just might not be clement to expansion of life or life's products beyond small islands of habitability, like our world.

Right - but we can see how far apart the stars in our galaxy are, and roughly what it would take to travel between them. Intragalactic travel looks as though it will be relatively trivial - for advanced living systems.

Replies from: None
comment by [deleted] · 2012-12-24T04:40:16.171Z · LW(p) · GW(p)

I fail to see how your second sentence follows from your first. What do you mean by relatively trivial?

Replies from: orthonormal
comment by orthonormal · 2012-12-26T06:00:12.531Z · LW(p) · GW(p)

Human beings in the 1970s were able to throw a hefty chunk of metal (eventually) out of the Solar System, so interstellar travel doesn't require any future breakthroughs (though such breakthroughs would make it easier). See Project Orion for an early look at the feasibility of interstellar travel without space elevators, uploading, nanotechnology, or AI.

comment by Kawoomba · 2012-12-22T11:36:22.153Z · LW(p) · GW(p)

Good post, good explanation. I agree. I saw the recent comment on OB that probably sparked you into making this topic; I fleetingly thought of posting about it before akrasia kicked in. So, thanks.

A throwaway parenthesized remark from RH that nevertheless should be of major importance, because it lowers the credence we should assign to the argument that "UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the verisimilitude of UFAI occurring."

Replies from: CarlShulman, timtyler
comment by CarlShulman · 2012-12-22T16:20:15.091Z · LW(p) · GW(p)

"because it lowers the credence we should assign to the argument that "UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the the verisimilitude of UFAI occurring.""

Can you identify some people who ever held or promoted this view? I don't know of any writers who have actually made this argument. It's pretty absurd on its face, basically saying that instead of there being super-convergence among biological civilizations not to colonize the galaxy, there is super-convergence among autonomous robotic civilizations not to colonize.

Replies from: Kawoomba, timtyler
comment by Kawoomba · 2012-12-22T19:05:16.858Z · LW(p) · GW(p)

You are correct; I cannot.

I did, however, find plenty of refutations of precisely that argument, from the SL4 mailing list to various blogs. Related, Robin Hanson wrote this 2 years ago:

Let us call an AI unambitious if its values have no use for the rest of the universe. Then if the great filter is the main reason to think existential risks are likely, we should worry much more about unambitious unfriendly AI than just an unfriendly AI. Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.

I suppose that having seen some of those refutations, I falsely overestimated the importance of the argument that was being refuted:

I thought that to merit public refutations, there must be a certain number of people believing in it. If there are, I couldn't identify any.

Maybe the association occurs from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-22T19:46:15.061Z · LW(p) · GW(p)

Maybe the association occurs from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

I think this may cause confusion for some casual observers, so it's worth reiterating the refutation, but it's also worth noting that no one has seriously pressed the refuted argument.

comment by timtyler · 2012-12-26T00:15:11.041Z · LW(p) · GW(p)

There are certainly some who think machine intelligence may account for the Fermi paradox. For instance, here's George Dvorsky on the topic. Also, the Wikipedia article on the Fermi paradox lists "a badly programmed super-intelligence" as a possible cause.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-26T02:37:44.884Z · LW(p) · GW(p)

Thanks for the links Tim. Yes, it certainly gets included in exhaustive laundry lists of Fermi Paradox explanations (Dvorsky has covered many proposed Fermi Paradox solutions, including very dubious ones). The Fermi Paradox wiki page also includes the following weird explanation:

technological singularity...Theoretical civilizations of this sort may have advanced drastically enough to render communication impossible. The intelligences of a post-singularity civilization might require more information exchange than is possible through interstellar communication, for example.

comment by timtyler · 2012-12-22T14:52:30.963Z · LW(p) · GW(p)

A throwaway parenthesized remark from RH that nevertheless should be of major importance [...]

Hang on, we've known this for years, right? This is not new information.

comment by [deleted] · 2012-12-23T13:59:13.497Z · LW(p) · GW(p)

Early or late great filter?

I'm currently leaning strongly towards a late filter, because many of the proposed early filters seem to not be such big barriers. We've, for example, found a bunch of exoplanets in the last decade or so, and several of those seem plausibly in the habitable zone. Life on Earth arose very early in its history, so if life arising were the hard and rare step, I would expect there to have been many more hundreds of millions or even billions of years of conditions on Earth seemingly ripe for it arising without it doing so.

" Abiogenesis likely occurred between 3.9 and 3.5 billion years ago, in the Eoarchean era (the time after the Hadean era in which the Earth was essentially molten)."

Obviously, maybe the existence of conditions ripe for life at all is the tricky part. Another possible hard step is multicellular life.

Correction: Multicellular life seems easy; eukaryotic life, not so much.

The Cambrian explosion seems to have happened like a billion or more years after it "could" have happened.

Replies from: None, CarlShulman, Tripitaka
comment by [deleted] · 2012-12-23T20:05:53.784Z · LW(p) · GW(p)

Not necessarily - there are those who argue that the Cambrian explosion might have had more to do with the increase in atmospheric oxygen over geological time than with evolution, and we find vague evidence of multicellular creatures (worm-track type impressions in the seafloor, some strange radially-symmetrical things buried in the sediment) up to a billion and a half years ago. Oxygen lets you have big energy-gobbling multicellular creatures more easily, and blocks the destructive ultraviolet radiation that would have previously sterilized the above-water land when it turns to ozone. We also might just see an explosion because that is when hard body parts that fossilize more easily appeared.

If the explosion was evolution-driven it could have been due to some kind of runaway arms race between predators and prey, or due to the final establishment of the developmental plans of the various animal phyla that could then be modularly tweaked to enable diversification and rapid evolution.

comment by CarlShulman · 2012-12-23T19:52:50.353Z · LW(p) · GW(p)

Another possible hard step is multicellular life.

It's the move to eukaryotic life, or "complex cells", that's unique. Multicellularity given eukaryotic status seems easy, but eukaryote status happened only once, and about halfway through the habitable lifetime of the Earth.

Replies from: None
comment by [deleted] · 2012-12-23T21:44:18.902Z · LW(p) · GW(p)

Thank you for correcting me on this, it has been some time since I thought of this.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-23T22:33:57.214Z · LW(p) · GW(p)

Life on Earth arose very early in its history, so if the origin of life were the hard and rare step, I would expect many more hundreds of millions or even billions of years during which conditions on Earth seemed ripe for life to arise without it doing so.

This argument is faulty if there is more than one hard step (and the update is not that strong, although significant, even with one step). See Robin's paper for details.
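
As a toy illustration of the multiple-hard-steps point (an editorial sketch with made-up step counts and timescales, not numbers from Robin's paper): once you condition on every hard step finishing inside the habitable window, the first step tends to complete early on any successful planet, even when its unconditional expected waiting time dwarfs the window. So an early origin of life is weaker evidence that abiogenesis is easy than it first looks.

```python
import random

# Hard-steps sketch: N_STEPS sequential steps, each with an exponential
# waiting time whose mean is several times the habitable window, so an
# unconditioned planet almost never completes them all. We keep only the
# rare runs where every step finishes inside the window and look at when
# the FIRST step (abiogenesis) completes in those runs.

random.seed(0)

WINDOW = 1.0        # habitable lifetime, arbitrary units
N_STEPS = 3         # hypothetical number of hard steps
MEAN_WAIT = 3.0     # mean waiting time per step; >> WINDOW makes each step "hard"
TRIALS = 1_000_000

first_step_times = []
for _ in range(TRIALS):
    waits = [random.expovariate(1.0 / MEAN_WAIT) for _ in range(N_STEPS)]
    if sum(waits) <= WINDOW:                  # condition on success in time
        first_step_times.append(waits[0])

# In the hard-step limit the conditional completion times approach evenly
# spaced order statistics, so the first step finishes around WINDOW/(N_STEPS+1).
avg_first = sum(first_step_times) / len(first_step_times)
print(f"successful runs: {len(first_step_times)} of {TRIALS}")
print(f"mean completion time of step 1: {avg_first:.3f} "
      f"(compare WINDOW/(N_STEPS+1) = {WINDOW / (N_STEPS + 1):.3f})")
```

With a single hard step the conditional completion time is spread over the whole window, which is why an early origin still gives some update toward "life is easy"; with several hard steps the conditional times bunch toward even spacing and the update weakens.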

comment by Tripitaka · 2012-12-23T14:43:37.381Z · LW(p) · GW(p)

The jump to multicellular life seems to be pretty easy, actually. To quote Wikipedia:

Multicellularity has evolved independently dozens of times in the history of Earth, for example once for plants, once for animals, once for brown algae, but perhaps several times for fungi, slime molds, and red algae. (Wikipedia source link)

Replies from: None
comment by [deleted] · 2012-12-23T19:57:54.951Z · LW(p) · GW(p)

It seems to be remarkably easy for eukaryotes, with their excessive number of genes (probably accumulated via drift and non-adaptive processes) which can be co-opted for cell-to-cell communication. Some argue that prokaryotes are too heavily optimized for efficient, fast reproduction to form huge multicellular complexes, though it turns out that they actually do specialize themselves a bit when growing in colonies or biofilms, to provide for the colony as a whole.

comment by John_Maxwell (John_Maxwell_IV) · 2012-12-23T01:15:58.087Z · LW(p) · GW(p)

Really, it seems like any kind of superintelligent AI, friendly or unfriendly, would result in expanding intelligence throughout the universe. So perhaps a good statement would be: "If you believe the Great Filter is ahead of us, that implies that most civilizations get wiped out before achieving any kind of superintelligent AI, meaning that either superintelligent AI is very hard, or wiping out generally comes relatively early." (It seems possible that we already got lucky with the Cold War... http://www.guardian.co.uk/commentisfree/2012/oct/27/vasili-arkhipov-stopped-nuclear-war)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-23T04:48:06.050Z · LW(p) · GW(p)

Unless intelligent life is already almost-extremely rare, that's not nearly enough 'luck' to explain why everyone else is dead, including aliens who happen to be better at solving coordination problems (imagine SF insectoid aliens).

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-12-23T05:52:22.392Z · LW(p) · GW(p)

Yeah, of course.

comment by timtyler · 2012-12-22T14:26:47.069Z · LW(p) · GW(p)

Katja says:

The large minimum total filter strength contained in the Great Filter is evidence for larger filters in the past and in the future.

That's true - but anthropic evidence seems kind-of trumped by the direct observational evidence that we have already invented advanced technology and space travel, which took many billions of years. From here, expansion shouldn't be too difficult - unless, of course, we meet more-advanced aliens.

Other civilizations may possibly be expanding too by now - SETI is still too small and young to say much about that directly. Probably not within our galaxy, but only because us and them becoming civilised at the same time would be quite a coincidence.

Replies from: CarlShulman, CarlShulman
comment by CarlShulman · 2012-12-22T16:38:44.231Z · LW(p) · GW(p)

Robin's use of the Great Filter argument relies on the SIA, which (if one buys it) allows one to rule out a priori the possibility that the development of beings like us is very rare. Absent that, if one's prior for the development of life is much flatter than for things like nuclear war (it would be much less surprising for fewer than one in 10^100 planets to evolve intelligent life than for fewer than one in 10^100 civilizations like ours to avoid self-destruction with advanced technology), then you get much less update in favor of future filters.
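
To make the mechanics concrete, here is a toy simulation (an editorial sketch, not Carl's or Robin's model; the filter bound and prior ranges are made-up illustrative figures). When the prior over the early filter spans vastly more orders of magnitude than the prior over the late filter, conditioning on a strong total filter leaves the late-filter posterior almost where it started.

```python
import math
import random

# p_early: per-planet chance of producing a civilization like ours
# p_late:  chance such a civilization goes on to visibly colonize
# Require p_early * p_late <= TOTAL_FILTER (the silent sky), then see how
# much the posterior over p_late shifts relative to its prior.

random.seed(0)

TOTAL_FILTER = 1e-22          # hypothetical bound implied by seeing no one
SAMPLES = 200_000

prior_late, posterior_late = [], []
for _ in range(SAMPLES):
    log_p_early = random.uniform(-100, 0)   # flat over 100 orders of magnitude
    log_p_late = random.uniform(-4, 0)      # concentrated: we'd be shocked if
                                            # fewer than 1 in 10^4 survived
    prior_late.append(log_p_late)
    if log_p_early + log_p_late <= math.log10(TOTAL_FILTER):
        posterior_late.append(log_p_late)

print("prior     mean log10(p_late):", sum(prior_late) / len(prior_late))
print("posterior mean log10(p_late):", sum(posterior_late) / len(posterior_late))
# The two means come out nearly identical: the early filter's wide prior
# absorbs almost all of the required filter strength, so the silent sky
# barely raises the estimated strength of a future filter.
```

SIA changes this by favoring hypotheses on which observers like us are common, which is what rules out the very small values of p_early up front and pushes the required filter into the future.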

OTOH, the SIA also strongly supports the possibility that we're a simulation (if we assign a 1 in 1 million probability to sims being billions of times more numerous, then we should assign more credence to that than to being in the basement), which warps the Great Filter argument into something almost unrecognizable. See this paper for a discussion of the interactions with SIA.

comment by CarlShulman · 2012-12-24T03:04:24.640Z · LW(p) · GW(p)

Probably not within our galaxy, but only because us and them becoming civilised at the same time would be quite a coincidence.

On a log plot, the difference between the number of stars in the Solar System and the number of stars in the Milky Way is smaller than the difference between the number of stars in the Milky Way and the number of stars that we can reach before the expansion of the universe removes them from our grasp. How much probability mass would you place on alien civilizations within reachable space? Within our galaxy?

comment by [deleted] · 2023-05-31T14:49:26.358Z · LW(p) · GW(p)

Interesting. However, I'd like to propose an alternative: the probability of another alien civilisation existing inside our universe shard, that is, the region of the universe that we humans can possibly explore below the speed of light, is very low. So there might be a predatory superintelligence out there that has wiped out the civilisation that made it, but we're just not in its universe shard.

comment by tim · 2012-12-27T06:40:00.984Z · LW(p) · GW(p)

When you (and Robin) say "because [UFAI] should be visible," that seems to imply that there are a significant number of potential observer moments in which we can see evidence of a UFAI but the UFAI is not yet able to break us down into spare parts. I've always assumed that if a UFAI were created in our lightcone, we would be extinct in a very short amount of time. Thus, the assertion "UFAI is not the great filter because we don't see any" is similar to saying "giant asteroids aren't the great filter because we don't see any smashing into earth and extinctifying us." That is, of course we don't observe these things, because if they occurred we wouldn't be around to observe them. Is the assumption that, by the time a UFAI is advanced enough to be observable by us, it would already be traveling to devour the rest of its observable universe at near-light-speed obviously silly for some reason I'm missing?

comment by turchin · 2012-12-24T14:18:13.396Z · LW(p) · GW(p)

An alien UFAI could be dangerous to us if we found its radio signal as a result of a SETI search. Its messages could contain a bait designed to lure us into building a copy of the alien AI, based on schematics that it would send us in those messages.

D. Carrigan wrote about it: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm

A simple natural-selection argument implies that UFAI radio signals should dominate among all SETI signals, if any exist. The goal of such a UFAI would be to convert the Earth into another radio beacon that broadcasts its own code further.

My article on the topic: Is SETI dangerous? http://ru.scribd.com/doc/7428586/Risks-of-SETI-Is-SETI-Dangerous

comment by John_Maxwell (John_Maxwell_IV) · 2012-12-24T13:24:38.226Z · LW(p) · GW(p)

I have a hard time imagining a filter that could've wiped out all of a large number of civilizations that got to our current point or further. That's not to say that future x-risks aren't an issue--it just feels implausible that no civilization would've been able to successfully coordinate with regard to them or avoid developing them. (E.g. bonobos seem substantially more altruistic than humans and are one of the most intelligent non-human species.)

Also, I thought of an interesting response to the Great Filter, assuming that we're pretty sure it actually is ahead of us: Halt all technological development ASAP and stay on this planet chilling out. It's possible that other civilizations already have done this (having realized that the Great Filter was an issue and there was likely deadly tech in their future)--if they had, we wouldn't know about it.

comment by Yosarian2 · 2013-01-07T22:59:18.864Z · LW(p) · GW(p)

Of course, that might just mean that 99.9% of all civilizations destroy themselves in the roughly 100 years between the invention of the nuclear bomb and the invention of AGI.

comment by Mestroyer · 2012-12-24T02:25:21.462Z · LW(p) · GW(p)

This should be "UFAI can't be the only great filter." Nothing says that once you get past a great filter, you are home free. Maybe we already passed a filter on life originating in the first place, or a technology-using species evolving, but UFAI is another filter that still has an overwhelming probability of killing us if nothing else does first.

The fact that UFAI can't be the only great filter certainly screens off the presence of a great filter as evidence for UFAI being a great filter, but there are good arguments directly from how UFAI would work that indicate it is a pretty big danger.

Replies from: CarlShulman
comment by CarlShulman · 2012-12-24T03:13:01.170Z · LW(p) · GW(p)

You're talking about something being a disaster by our lights, but not a filter that prevents something from Earth colonizing the galaxy. It's confusing to keep using the term 'filter' for concerns about the composition of a future colonizing civilization, or the fate of humanity, rather than for discussion of the Fermi paradox.