If there IS an alien superintelligence in our own galaxy, what could it be like?

post by Coacher · 2016-02-26T11:55:46.779Z · LW · GW · Legacy · 59 comments


For a moment, let's assume there is some intelligent alien life in our galaxy which is older than us and which has succeeded in creating a super-intelligent, self-modifying AI.

Then what set of values and/or goals is it plausible for it to have, given our current observations (i.e. that there is no evidence of its existence)?

Some examples:

It values non-interference with nature (some kind of hippie AI).

It values camouflage/stealth for its own defense/security purposes.

It just cares about exterminating its creators and nothing else.

 

Other thoughts?

59 comments

Comments sorted by top scores.

comment by James_Miller · 2016-02-26T16:13:28.290Z · LW(p) · GW(p)

It has a belief that capturing the resources of the galaxy would neither increase its security nor further its objectives. It doesn't mind interfering with us, which it is implicitly doing by hiding its presence and giving us the Fermi paradox, which in turn influences our beliefs about the Great Filter.

Replies from: AABoyles
comment by AABoyles · 2016-02-26T18:43:43.240Z · LW(p) · GW(p)

For example, maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips), it achieved its terminal goal with a tiny amount of energy (less than one star's worth), and halted.

comment by DanArmak · 2016-02-26T17:35:07.422Z · LW(p) · GW(p)

We should also consider what beliefs or knowledge it could have that would cause it to stay home. For instance:

  1. The Universe is a simulation, and making parts of it more complex or expensive to simulate would shorten the life of other complex parts, would crash the simulation, would fail because the simulation wouldn't allow it, would draw undesired attention from the simulators, etc.
  2. If their civilization spread to other stars, lightspeed limits would make the colonies effectively independent, and eventually value drift or selfishness would cause conflict or would harm their existing goals.
  3. There is some danger out there, perhaps a relic of an even older civilization, and the best course is to hide from it; humans have not yet been attacked because we only started beaming out radio signals less than a hundred years ago.
Replies from: James_Miller
comment by James_Miller · 2016-02-26T17:49:20.801Z · LW(p) · GW(p)

If (1) or (2) is correct they should destroy us.

Replies from: DanArmak
comment by DanArmak · 2016-02-26T18:06:35.364Z · LW(p) · GW(p)

If (1) is true, it would prevent them from spreading all over the galaxy, so they can't find us (to destroy us) this early in our evolution, a hundred years after we started using radio. They might still destroy us relatively soon, but they wouldn't be present in every star system. (Also, the fact that we evolved in the first place might be evidence against this scenario.)

If (2) is true, they would have to somehow destroy all other life in the galaxy without risking that the destroying mechanism would value-drift or be used by a rival faction. This might be hard. Also, their values might not endorse destroying aliens without even contacting them.

Replies from: James_Miller
comment by James_Miller · 2016-02-26T18:18:49.623Z · LW(p) · GW(p)

If (1) is true, the aliens should fear any type of life developing on other planets, because that life would greatly increase the complexity of the galaxy. My guess is that life on Earth has, for a very long time, done things to our atmosphere that would allow an advanced civilization to be aware that our planet harbors life.

Replies from: AABoyles, MakoYass
comment by AABoyles · 2016-02-26T18:42:11.280Z · LW(p) · GW(p)

This is actually a fairly healthy field of study. See, for example, Nonphotosynthetic Pigments as Potential Biosignatures.

comment by mako yass (MakoYass) · 2016-03-08T10:15:02.392Z · LW(p) · GW(p)

Note that sending probes out any distance may increase computational requirements. Approximations are no longer sufficient when an agent's eye comes up very close to them. Unless we can expect the superintelligence to detect these signs from a great distance, from the home star, it might not be able to afford to see them.

Also worth considering: probes that close their eyes to everything but life-supporting planets, so that they won't notice the low grain of the approximations, and approximations can continue to be used in their presence.

comment by AABoyles · 2016-02-26T18:27:03.183Z · LW(p) · GW(p)

It may have discovered some property of physics which enabled it to expand more efficiently across alternate universes, rather than across space in any given universe. Thus it would be unlikely to colonize much of any universe (specifically, ours).

Replies from: James_Miller, MakoYass
comment by James_Miller · 2016-02-26T18:50:32.804Z · LW(p) · GW(p)

If physics allows for the spreading across alternative universes at a rate greater than that at which you can spread across our universe, the Fermi paradox becomes even more paradoxical.

Replies from: None
comment by [deleted] · 2016-02-26T20:11:29.250Z · LW(p) · GW(p)

Not necessarily; see Hilbert's paradox.

https://en.wikipedia.org/wiki/Hilbert%27s_paradox_of_the_Grand_Hotel

Replies from: TimS, James_Miller, Raiden
comment by TimS · 2016-02-26T22:12:56.779Z · LW(p) · GW(p)

This is only a paradox under naive definitions of infinity. Once one starts talking about cardinality, the "paradoxical" nature of the thought experiment fades away.

In other words, this is not really responsive to James_Miller's comment.

comment by James_Miller · 2016-02-26T21:03:24.965Z · LW(p) · GW(p)

If an infinite number of aliens have the potential to make contact with us (which I realize isn't necessarily implied by your comment) then some powerful subset must be shielding us from contact.

comment by Raiden · 2016-02-28T13:46:12.525Z · LW(p) · GW(p)

Infinity is really confusing.

comment by mako yass (MakoYass) · 2016-03-08T10:18:26.042Z · LW(p) · GW(p)

I am not a cosmologist, so forgive me if this theory is deranged, but what about dark matter? Is it possible there are vast banks of usable energy there, and that the ability to transition one's body to dark matter would make it easier for a civ to agree to turn away from the resources available in light matter?

comment by Gavin · 2016-02-27T19:34:28.218Z · LW(p) · GW(p)

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in one of the colonized areas. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around them become colonized while they cannot develop fast enough to catch up.

  2. A Dyson-sphere-level intelligence knows basically everything. There is a limit to knowledge and power that can be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and it has other values or goals that counterbalance any tiny desire to continue expanding.

Replies from: James_Miller, RedErin
comment by James_Miller · 2016-02-27T20:47:52.059Z · LW(p) · GW(p)

Evolution should favor species that have expansion as a terminal value.

Replies from: None, Dagon
comment by [deleted] · 2016-02-29T10:39:54.317Z · LW(p) · GW(p)

Why terminal?

comment by Dagon · 2016-02-28T20:07:34.363Z · LW(p) · GW(p)

Care to show the path for that? Evolution favors individual outcomes, and species are a categorization we apply after the fact.

Survival of a genotype is more likely for chains of individuals that value some diversity of environment and don't all get killed by a single local catastrophe, but it's not at all clear that this extends beyond subplanetary habitat diversity.

Replies from: James_Miller
comment by James_Miller · 2016-02-28T21:04:09.064Z · LW(p) · GW(p)

Care to show the path for that?

The Amish.

If a species is not subject to the Malthusian trap, evolution favors subgroups that want to have lots of offspring. Given variation within such a population in how many children each person wants to have, and given that one's preferences concerning children are in part genetically determined, the number of children the average member of such a species wants to have should steadily increase.
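
A minimal toy simulation of this selection dynamic, purely illustrative (the starting distribution, heritability noise, and growth rates are my assumptions, not the comment's):

```python
import random

def simulate(generations=8, init_pop=1000, cap=50_000):
    # Each individual carries a heritable "desired family size" trait.
    pop = [random.uniform(0, 4) for _ in range(init_pop)]
    for g in range(1, generations + 1):
        nxt = []
        for pref in pop:
            kids = max(0, round(random.gauss(pref, 0.5)))  # realized fertility tracks the preference
            nxt.extend(pref + random.gauss(0, 0.1) for _ in range(kids))  # children inherit the trait with noise
        if len(nxt) > cap:               # unbiased downsample just to keep runtime small
            nxt = random.sample(nxt, cap)
        pop = nxt
        print(f"generation {g}: mean desired family size = {sum(pop)/len(pop):.2f}")

simulate()
# High-preference lineages outgrow the rest, so the population-average preference drifts upward,
# even with no upward mutation pressure on any individual's preference.
```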

Replies from: drethelin
comment by drethelin · 2016-02-29T19:18:27.044Z · LW(p) · GW(p)

Aren't the Amish (and other fast-spawning tribes) a perfect example of how this doesn't lead to universal domination? They're all groups that either embrace primitivity or are stuck in it, and to a large extent couldn't maintain their high reproductive rate without parasitism on surrounding cultures.

Replies from: James_Miller
comment by James_Miller · 2016-02-29T19:29:15.583Z · LW(p) · GW(p)

Depends on how you define domination. Over the long run, if trends continue, the Amish will dominate through demography. I don't think the Amish are parasites, since they don't take resources from the rest of us.

Replies from: drethelin
comment by drethelin · 2016-03-02T07:31:16.468Z · LW(p) · GW(p)

They are parasitic on our infrastructure, healthcare system, and military. Amish reap the benefits of modern day road construction methods to transport their trade goods, but could not themselves construct modern day roads. Depending on the branch of Amish, a significant number of their babies are born in modern-day hospitals, something they could not build themselves and which contributes to their successful birth rate.

Replies from: James_Miller
comment by James_Miller · 2016-03-02T16:10:37.117Z · LW(p) · GW(p)

Everything you wrote is also true of my family, but because of specialization and trade we are not parasites.

Replies from: drethelin
comment by drethelin · 2016-03-03T00:28:06.048Z · LW(p) · GW(p)

Last time I checked you weren't arguing that your family was going to dominate the world through breeding.

comment by RedErin · 2016-03-01T15:26:20.809Z · LW(p) · GW(p)

But it is unethical to allow all the suffering that occurs on our planet.

Replies from: Coacher, Lumifer
comment by Coacher · 2016-03-01T18:06:55.035Z · LW(p) · GW(p)

Compared to what alternative?

comment by Lumifer · 2016-03-01T15:59:19.183Z · LW(p) · GW(p)

That depends on your ethical system, doesn't it?

comment by turchin · 2016-02-27T19:01:23.580Z · LW(p) · GW(p)

  1. An alien AI is using a SETI-attack strategy, but to convince us to bite, and also to be sure that we have very strong computers able to run its code, it makes its signal very subtle and complex, so it is not easy to find. We haven't found it yet, but we will soon. I wrote about SETI attack here: http://lesswrong.com/lw/gzv/risks_of_downloading_alien_ai_via_seti_search/

  2. An alien AI exists in the form of alien nanobots everywhere (including my room and body), but they do not interact with us and they try to hide from microscopy.

  3. They are berserkers and will be triggered to kill us if we reach some unknown threshold, most likely the creation of AI or nanotech.

2 may include 3.

Replies from: Coacher
comment by Coacher · 2016-03-01T18:03:13.695Z · LW(p) · GW(p)

  1. This looks far-fetched, but it's an interesting strategy. Does it perhaps ever occur in nature? I.e., do any predators wait for their prey to become stronger/smarter before luring them into a trap?

  2. I guess they could, but to what end?

  3. Why wait?

Replies from: Lumifer, turchin
comment by Lumifer · 2016-03-01T18:13:44.173Z · LW(p) · GW(p)

Does it perhaps ever occur in nature?

Notice how, say, the Andamanese are entirely safe from online phishing scams or identity theft.

comment by turchin · 2016-03-01T20:19:57.697Z · LW(p) · GW(p)

  1. Andamanese ))
  2. Maybe alien nanobots control the part of the galaxy that is conquered by the host civilization and prevent any other civilization from invading or appearing there.
  3. Observational selection: we could only find ourselves in a civilization whose berserkers have a high attack threshold (or do not exist).
comment by AABoyles · 2016-02-26T18:23:04.007Z · LW(p) · GW(p)

The superintelligence could have been written to value-load based on its calculations about an alien (to its creators) superintelligence (what Bostrom refers to as the "Hail Mary" approach). This could cause it to value the natural development of alien biology enough to actively hide its activities from us.

Replies from: James_Miller, AABoyles
comment by James_Miller · 2016-02-26T23:15:05.448Z · LW(p) · GW(p)

Then it should also have avoided presenting us with a Fermi paradox, and hence with a reason to believe in a Great Filter. It could have done this in numerous ways, including by causing us to think that planets rarely form.

comment by AABoyles · 2016-02-26T18:28:15.053Z · LW(p) · GW(p)

...Think of the Federation's "Prime Directive" in Star Trek.

Replies from: _rpd, Coacher
comment by _rpd · 2016-02-27T21:40:47.653Z · LW(p) · GW(p)

Or we are an experiment (natural or artificial) that yields optimal information when unmanipulated or manipulated imperceptibly (from our point of view).

comment by Coacher · 2016-02-27T09:48:49.715Z · LW(p) · GW(p)

Or the way we try to keep isolated people isolated (https://en.wikipedia.org/wiki/Uncontacted_peoples)

Replies from: James_Miller
comment by James_Miller · 2016-02-27T15:03:34.997Z · LW(p) · GW(p)

Crazy idea: what if we are an isolated people, and the solution to the Fermi paradox is that aliens have made contact with Earth, but our fellow humans have decided to keep this information from us? Yes, this seems extremely unlikely, but so do all other solutions to the Fermi paradox.

Replies from: g_pepper, Coacher
comment by g_pepper · 2016-03-01T18:31:38.820Z · LW(p) · GW(p)

That is the basic idea behind the X-Files TV series and various UFO conspiracy theories, isn't it?

comment by Coacher · 2016-03-01T18:18:06.823Z · LW(p) · GW(p)

Then why would they even contact those few people?

Replies from: James_Miller
comment by James_Miller · 2016-03-01T19:43:04.442Z · LW(p) · GW(p)

It might not be direct contact; rather, our astronomers may have long since detected signs of alien life, but this has been kept from us.

comment by HungryHobo · 2016-02-29T17:58:15.322Z · LW(p) · GW(p)

To throw one out there: perhaps the first superintelligence was created by a people very concerned about AI risk and friendliness, and one of its goals is simply to subtly suppress (by a very broad definition) unfriendly AIs in the rest of the universe while minimizing disruption otherwise.

comment by NancyLebovitz · 2016-02-27T16:04:35.980Z · LW(p) · GW(p)

They place a high value on social unity, so spreading out over distances which would make it hard to keep a group-- or a mind-- together doesn't happen.

comment by Bound_up · 2016-03-01T19:13:45.547Z · LW(p) · GW(p)

Maybe it uploaded all the minds it was programmed to help, and then ran them all on a series of small, incredibly efficient computers, only sending duplicates across space for the security of redundancy.

A few hundred parallel copies around as many stars would be pretty darn safe, and they wouldn't have any noticeable effect on the environment. We could have one around our sun right now without noticing.

And maybe the potential for outside destruction is better met by stealth than by salient power.

If it doesn't value just making more people to have around, why should it ever go on beyond that?

comment by turchin · 2016-03-01T11:28:00.067Z · LW(p) · GW(p)

Maybe the AI existed in the galaxy and halted for some internal reason, but left behind self-replicating remnants which are only partly intelligent and so unable to fall into the same trap. Their behavior would look absurd to us, and that is why we can't find them.

Replies from: Coacher
comment by Coacher · 2016-03-01T18:23:49.792Z · LW(p) · GW(p)

Adding unneeded assumptions does not make a hypothesis more likely. Just halting, without leaving any half-intelligent offspring behind, explains the observations just as well, if not better.

comment by buybuydandavis · 2016-02-27T02:16:53.455Z · LW(p) · GW(p)

given our current observations (i.e. that there is no evidence of its existence)?

Our current observation is that we haven't detected and identified any evidence of their existence.

Another option: maybe they're not hiding, they're just doing their thing, and don't leak energy in a way that is obvious to us.

Replies from: Coacher
comment by Coacher · 2016-02-27T09:24:01.595Z · LW(p) · GW(p)

Usually lack of evidence is evidence of lacking. But given their existence AND the lack of evidence, I think the probability that they are purposely hiding (or at least being careful not to show off too much) is greater than the probability that they are just doing their thing and we simply don't see it, even though we are looking really hard.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-03-01T05:49:11.607Z · LW(p) · GW(p)

Usually lack of evidence is evidence of lacking.

Big difference between there being a lack of evidence, and a lack of an ability to detect and identify evidence which exists.

I think people are rather cheeky to assume that we necessarily have the ability to detect an SI.

Replies from: Coacher
comment by Coacher · 2016-03-01T09:17:24.626Z · LW(p) · GW(p)

There is no difference between saying that there is no evidence and saying that there might be evidence but we lack the ability to detect it. Does God exist? Well, maybe there is plenty of evidence that he does; we just don't have the ability to see it?

Replies from: buybuydandavis
comment by buybuydandavis · 2016-03-01T10:51:21.562Z · LW(p) · GW(p)

Big difference.

You don't know how much money is in my wallet. I do. You have no evidence, and you don't have a means to detect it, but it doesn't mean there is no evidence to be had.

That third little star off the end of the Milky Way may be a gigantic alien beacon transmitting a spread-spectrum welcome message, but we just haven't identified it as such, or spent time trying to reconstruct the message from the spread-spectrum signal.

We see it. We record it at observatories every night. But we haven't identified it as a signal, nor decoded it.

Replies from: gjm, Coacher
comment by gjm · 2016-03-01T15:50:09.636Z · LW(p) · GW(p)

There is indeed a difference between "we have observed good evidence of X" and "there is something out there that, had we observed it, would be good evidence of X".

Even so, absence of observed evidence is evidence of absence.

How strong it is depends, of course, on how likely it is that there would be observed evidence if the thing were real. (I don't see anyone ignoring that fact here.)
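
A small illustration of that last point (the priors and likelihoods below are made-up numbers, purely to show how the strength of the update depends on how detectable a real SI would be):

```python
def p_real_given_silence(prior, p_detectable_if_real):
    """P(SI is real | we observed no evidence), by Bayes' rule.
    Assumes we never see (false) evidence when no SI exists."""
    p_silence_if_real = 1 - p_detectable_if_real
    return (prior * p_silence_if_real) / (prior * p_silence_if_real + (1 - prior) * 1.0)

# If a real galactic SI would almost surely be detectable, silence is strong evidence of absence:
print(p_real_given_silence(prior=0.5, p_detectable_if_real=0.99))  # ~0.01
# If a real SI would probably leave nothing we could detect, silence barely moves the needle:
print(p_real_given_silence(prior=0.5, p_detectable_if_real=0.05))  # ~0.49
```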

comment by Coacher · 2016-03-01T15:00:24.643Z · LW(p) · GW(p)

It seems you have some uncommon understanding of what the word evidence means. Evidence is peace of information, not some physical thing.

Replies from: Lumifer
comment by Lumifer · 2016-03-01T15:29:54.043Z · LW(p) · GW(p)

Evidence is peace of information

I like this :-)

comment by AABoyles · 2016-02-26T18:35:24.184Z · LW(p) · GW(p)

Maybe they figured out how to convince it to accept some threshold of certainty (so it doesn't eat the universe to prove that it produced exactly 1,000,000 paperclips), it achieved its terminal goal with a tiny amount of energy (less than one star's worth), and halted.

comment by AABoyles · 2016-02-26T18:08:01.267Z · LW(p) · GW(p)

The most obvious and least likely possibility is that the superintelligence hasn't had enough time to colonize the galaxy (i.e. it was created very recently).

Replies from: Houshalter
comment by Houshalter · 2016-02-29T01:26:58.349Z · LW(p) · GW(p)

That's very unlikely. The universe is billions of years old, yet it would take only on the order of a million years to colonize the galaxy (perhaps tens of millions if they aren't optimally efficient), still a short time in the history of Earth.
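
A rough back-of-the-envelope check of that timescale (my own illustrative numbers, not the commenter's):

```python
galaxy_diameter_ly = 100_000        # rough diameter of the Milky Way in light-years
probe_speed_fraction_of_c = 0.1     # assume self-replicating probes average 10% of light speed
crossing_time_years = galaxy_diameter_ly / probe_speed_fraction_of_c
print(f"{crossing_time_years:,.0f} years to cross the galaxy")  # ~1,000,000 years, before stopovers
```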

Replies from: AABoyles
comment by AABoyles · 2016-03-01T16:09:22.151Z · LW(p) · GW(p)

I'm aware. Note that I did call it the "least likely possibility."

comment by TRIZ-Ingenieur · 2016-02-27T12:07:34.961Z · LW(p) · GW(p)

Obviously, singleton AIs have a high risk of going extinct due to low-probability events before they initiate the cosmic endowment; otherwise we would have found evidence. Given foom-level development speed, a singleton AI might decide after a few decades that it does not need human assistance any more, and exterminate humankind to maximize its resources. Biological life had billions of years to optimize even against the rarest events; a gamma-ray burst or any other stellar event could have killed such a singleton AI. The way we are currently designing AI will definitely not lead to a singleton AI that will mangle its mind for 10 million years until it decides about the future of humankind.