[Link] The Dominant Life Form In the Cosmos Is Probably Superintelligent Robots

post by Gunnar_Zarncke · 2014-12-20T12:28:50.420Z · LW · GW · Legacy · 28 comments

Contents

28 comments

An article on Motherboard reports on Alien Minds by Susan Schneider, who claims that the dominant life form in the cosmos is probably superintelligent robots. The article is cross-linked to other posts about superintelligence and at the end discusses the question of why these alien robots leave us alone. The arguments it puts forth on this don't convince me, though.

 

28 comments

Comments sorted by top scores.

comment by Shmi (shminux) · 2014-12-23T22:37:49.202Z · LW(p) · GW(p)

Just wanted to repeat my previous conjecture that we would not recognize an intelligence sufficiently different from ours as one. After all, any human actions can eventually be reduced to the underlying physics. And the problem of detecting agents without knowing their goals in advance seems to be an open one, as far as I can tell, attracting little interest.

Thus the "Great Filter" is just our blindness to all other forms of agency.

comment by Kawoomba · 2014-12-20T14:32:10.751Z · LW(p) · GW(p)

To the extent that a dichotomy between "biological, evolutionarily evolved intelligent" life and "superintelligent robots" will even make sense in the far future (utopias, dystopias, almost any scenario in which intelligence still exists), we'd probably refer to ourselves as "superintelligent robots" at that stage.

The idea that, long after the point when uploading was feasible, we'd still stick meatbags in spaceships to travel to distant stars is somewhat ludicrous. Unless aliens, for unknown quirks in their utility function, really value their biological forms and have negentropy to waste, it's a no-brainer that they'd travel/live in some resource-optimized (e.g. electronic) form.

There will always be a causal link from an "artificial" intelligence back to some naturally evolved intelligence. Whether we regard the inevitable "jumps" in between (e.g. humans creating AI) as breaking that link or merely as a transformation is just a definitional quibble. After a certain point in a civilization's evolution, we'd call them "robots" (even if that word has a strange etymological connotation, since the majority wouldn't necessarily be 'working').

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-12-21T09:16:05.676Z · LW(p) · GW(p)

There will always be a causal link from an "artificial" intelligence back to some naturally evolved intelligence.

A causal link yes.

we'd probably refer to ourselves as "superintelligent robots" at that stage.

But only if we did manage to enhance or upload ourselves (or whatever else) during the transition, with the distinguishing feature that we'd perceive this as a continuation of our identity.

But if we created an AGI that didn't care for our values and didn't facilitate our transition, but just did whatever else its utility function decreed, then you'd still have 'superintelligent robots', but humans would be the bootloader for them, just like dinosaurs were the bootloader for mammals (kind of).

Replies from: chaosmage
comment by chaosmage · 2014-12-23T15:12:05.041Z · LW(p) · GW(p)

tl;dr: "Survival" inside an AGI does not require Friendliness, but only that it is able to create models of us that are good enough for us to accept as genuine copies.

I don't think the AGI needs to care for our values in order to facilitate our transition. For the sake of argument, let's assume an AGI that doesn't care about human values - the Paperclip Maximizer will do.

Couldn't this AGI, if it so chose, easily create something that we'd accept as a continuation of our identity? A digital copy of a human that is so convincing that this person (and the people that know him or her) could accept it as identical? Or a hyper-persuasive philosophy that tells people their non-copyable features (say consciousness) are nonessential?

I imagine that it could (alternative discussed below). Which leads to the question: Would it?

I think it would (alternative discussed below). Any AGI that wants self-preservation would want to minimize the risk of conflict by appearing helpful or at least non-threatening - at least until the cost of appearing so is greater than the cost of being repeatedly nuked. If it can convince people it is offering genuine immortality in upload form, its risk of being nuked is greatly reduced. It could delete the (probably imperfect) models after humans aren't a threat anymore if - and only if - it is so sure it'll never need specimens of the best work of Darwinian evolution again that it'd rather turn the comparatively tiny piece of computronium they exist in into paperclips. But how could it be sure?

So unless it is much, much better at nanotech than it is at modeling people, I do expect an Earth AGI would contain at least some vestiges of human identity (maybe even more of those than of the vestiges of oak or flatworm identity). Of course they would be irrelevant to almost the entire rest of the system, because they're not good enough at making paperclips to matter.

This leaves the scenarios where my assumptions are wrong and the Paperclip Maximizer is somehow either unable to create a persuasive "transition" offer, or decides against making it. Such Paperclip Maximizer variants don't seem superintelligent to me (more like Grey Goo), but I suppose they could be built. However, in this case, its lack of human values is only a problem because it also lacks the ability to model humans and a credible deterrent. The former of these two might be an easier problem than Friendliness, if we're only talking about our survival (as superintelligent robots or whatever) inside that AGI, not about a goal of actually having a say in what it does.

comment by advancedatheist · 2014-12-20T15:11:24.845Z · LW(p) · GW(p)

The Superintelligent Robot in My Garage, in other words. These conjectures for why ETs have to exist even though we haven't observed them yet sound increasingly desperate and crazy. People hold on to this belief anyway because the advanced ET has become a place-filler for the gods and demons we've recently believed in.

Replies from: JoshuaZ, Dentin
comment by JoshuaZ · 2014-12-21T16:00:07.542Z · LW(p) · GW(p)

The absence of apparent intelligent life is a serious issue. It has nothing to do with believing in gods or demons, but is a real problem that we need to figure out the cause for. If the Great Filter is behind us, there's no serious issue. If the Great Filter is in front of us, then we need to be very concerned. These are deep questions that also have practical implications. The fact that the particular explanation from the article isn't a very good one is incidental.

Replies from: None, jacob_cannell
comment by [deleted] · 2014-12-22T06:12:50.915Z · LW(p) · GW(p)

The Great Filter provides no explanation here. There are four options: early filter, late filter, multiple (early and late) filters, and no filter. In the first three cases, the existence of a filter is an explanation for an empty sky. (We are getting very close to the capability to build von Neumann probes though, so I'm not sure an empty sky is evidence for a late filter.) In the case of no filter, however, we can expect that any intelligence would start expanding into the universe at near the speed of light. So the fact that our light cone doesn't contain other intelligences, so far as we can see, is also consistent with no filter. An observation that is roughly equally likely whether a proposition is true or false doesn't provide any useful evidence.
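To make the evidential point concrete, here is a minimal Bayesian sketch (the priors and likelihoods below are purely illustrative numbers, not anything from the comment): if an empty sky is roughly as probable under "filter" as under "no filter", then observing it barely moves the posterior.

```python
# Minimal Bayesian-update sketch with purely illustrative numbers.
# H1 = "there is a Great Filter", H2 = "there is no filter" (exhaustive).
# E  = "we observe an empty sky".

def posterior_h1(prior_h1, p_e_given_h1, p_e_given_h2):
    """P(H1 | E) by Bayes' theorem for two mutually exclusive hypotheses."""
    prior_h2 = 1.0 - prior_h1
    p_e = prior_h1 * p_e_given_h1 + prior_h2 * p_e_given_h2
    return prior_h1 * p_e_given_h1 / p_e

# If an empty sky is about as likely either way (e.g. because expansion
# fronts arrive with little or no advance warning), the update is tiny:
print(posterior_h1(0.5, p_e_given_h1=0.95, p_e_given_h2=0.90))  # ~0.51
```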

Replies from: RobbBB, Punoxysm, JoshuaZ
comment by Rob Bensinger (RobbBB) · 2014-12-24T07:34:27.324Z · LW(p) · GW(p)

In the 'there's no filter, and colonization bubbles just expand too rapidly for other organisms to get advance warning' scenario, there's a fairly small window of time between 'the first organisms evolve' and 'no more organisms evolve ever again'. But in the absence of an early filter, that small window should occur early in the universe's lifespan, not late. The fact that we live in an old universe suggests that there must be an early filter of some sort (particularly if colonization is easy).
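A toy way to see the timing point (my own illustrative model with made-up rates, not from the comment): treat civilization origins as a Poisson process and assume the first one's colonization wave pre-empts all later independent origins. If origins are common (no early filter), the first arrival is overwhelmingly likely to come early, so finding ourselves uncolonized in an old universe is evidence for a low origination rate, i.e. an early filter.

```python
# Toy Monte Carlo: time of the FIRST civilization to arise, assuming
# origins follow a Poisson process and the first one's expansion wave
# pre-empts all later independent origins. Rates are made up.
import random

def first_arrival_times(rate_per_gyr, n_samples=100_000):
    """Sample the first-arrival time (in Gyr) of an Exponential(rate) process."""
    return [random.expovariate(rate_per_gyr) for _ in range(n_samples)]

for rate in (0.01, 1.0, 100.0):  # expected origins per gigayear (illustrative)
    times = first_arrival_times(rate)
    frac_late = sum(t > 13.8 for t in times) / len(times)
    print(f"rate={rate}: P(first origin later than 13.8 Gyr) = {frac_late:.3f}")
# High rates make a still-empty 13.8-Gyr-old universe extremely improbable.
```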

Replies from: None
comment by [deleted] · 2014-12-28T16:31:01.695Z · LW(p) · GW(p)

The universe is a big, big place. It also becomes causally fragmented relatively fast due to accelerating cosmic expansion. There will probably be many intelligences out there that even our most distant descendants will never meet.

Ultimately, though, you're making assumptions about the prior distribution of intelligent life which aren't warranted with a sample size of 1.

Replies from: lavalamp
comment by lavalamp · 2015-01-04T16:36:54.890Z · LW(p) · GW(p)

An extremely low prior distribution of life is an early great filter.

comment by Punoxysm · 2014-12-23T18:47:45.582Z · LW(p) · GW(p)

We are getting very close to the capability to build von Neumann probes though, so I'm not sure an empty sky is evidence for a late filter.

I am highly skeptical of this statement.

We haven't built a machine that can get out of our solar system and land on a planet in another.

We haven't made machines that can copy themselves terrestrially.

Making something that can get out of the solar system, land on another planet, then make (multiple) copies of itself seems like a huge leap beyond either of the other two issues.

Even an AGI that could self-replicate might have enormous difficulty getting to another planet and turning its raw resources into copies of itself.

Replies from: None
comment by [deleted] · 2014-12-28T16:34:20.258Z · LW(p) · GW(p)

But an AGI wouldn't be an AGI if it wasn't able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself. I hate to make arguments from definitions, but that's kinda the case here. If an intelligence can't solve that solvable problem, it really isn't a general intelligence now is it?

So how far are we from making an (UF)AGI? 15 years? 50 years? 100 years? That's still a cosmic blink in the eye.

Replies from: Wes_W
comment by Wes_W · 2014-12-28T17:58:48.419Z · LW(p) · GW(p)

But an AGI wouldn't be an AGI if it wasn't able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself.

It remains to be seen whether we humans can do that. Does this mean we might not be general intelligences, either? That seems like a slightly silly and very nonstandard way to use the term.

Replies from: None
comment by [deleted] · 2014-12-29T06:31:29.547Z · LW(p) · GW(p)

Eh... read up on the literature. Pay special attention to studies done by the British Interplanetary Society and the Advanced Automation in Space NASA workshop in the 80's, not to mention all the work done by early space advocates, some published, some not.

We can say with some confidence that we know how to do something even if it hasn't yet been reduced to practice or practically demonstrated. 19th-century thinkers showed how rockets could in principle be built to enable human exploration of space. And they were right, pretty much on every point -- we still use and cite their work today.

We have done enough research on automated exploration and kinematic self-replicating machines to say that it is definitively possible (life being an example), and within our reach if we had pockets and conviction deep enough to create it.

Replies from: Wes_W
comment by Wes_W · 2014-12-29T06:43:47.987Z · LW(p) · GW(p)

We can say with some confidence that we know how to do something even if it hasn't yet been reduced to practice or practically demonstrated.

Right, I'm objecting to the claim that

pockets and conviction deep enough to create it.

are included by definition when we say "general intelligence". (That, or I'm totally misunderstanding you.)

Replies from: None
comment by [deleted] · 2014-12-29T07:09:04.909Z · LW(p) · GW(p)

You must be misunderstanding me, because what you just said seems like a total non sequitur. What do pockets and deep conviction have to do with general intelligence, indeed?

Replies from: Wes_W
comment by Wes_W · 2014-12-29T07:35:34.905Z · LW(p) · GW(p)

Starting with a rhetorical question was probably a bad idea. Let me try again:

But an AGI wouldn't be an AGI if it wasn't able to figure out how to solve the problem of getting from here to there and using in-situ resources to replicate itself.

I don't think this is true. An AGI which - due to practical limitations - cannot eat its future light cone can still be generally intelligent. Humans are potentially an example (minus the "artificial", but that isn't the relevant part).

Claiming that general intelligence can eat the universe by definition seems to suffer the same problem as the Socrates/hemlock question. It would mean we can't call something generally intelligent until we see it able to eat the universe, which would require not just theoretical knowledge but also the resources to pull it off. And if that's the requirement, then human general intelligence is an open question and we'd have zero known examples of general intelligence.

This does not seem to fit how we'd like to use the term to point to "the kind of problem-solving humans can do", so I think it's a bad definition.

(AGI can probably eat the universe, but that's more like a theorem than a definition.)

Replies from: None
comment by [deleted] · 2014-12-29T10:12:35.383Z · LW(p) · GW(p)

OK, there was an unstated assumption, which was that the AGI has physical effectors. Those effectors could be nearly anything, since with enough planning and time nearly any physical effector could be used to bootstrap your way to any other capability.

So, many posts back, I was asserting that even unaugmented human beings have the capability to eat our collective future light cone. It's a monumental project, yes, with probably hundreds of years before the first starships leave. But once they do, our future descendants would be expanding into the cosmos at a fairly large fraction of the speed of light. I'll point you to the various studies done by the British Interplanetary Society and others on interstellar colonization if you don't want to take my word for it.

So if regular old homo sapiens can do it, but an AGI with physical effectors can't, then I seriously question how general that intelligence is.

Replies from: Punoxysm
comment by Punoxysm · 2014-12-30T18:22:03.634Z · LW(p) · GW(p)

Yes; please provide those links.

And remember that getting to this level of industrial capacity on Earth followed from millions of years of biological evolution and thousands of years of cultural evolution in Earth's biosphere. Why would one ship full of humans be able to replicate that success somewhere else?

Similarly, an AGI that can replicate itself with an industrial base at its disposal might not be able to when isolated from those resources (it's still an AGI).

comment by JoshuaZ · 2014-12-23T16:08:34.753Z · LW(p) · GW(p)

In the case of no filter, however, we can expect that any intelligence would start expanding into the universe at near the speed of light.

Questionable premise. It isn't at all clear how close one can get to the speed of light, and even if this were occurring at 10% of the speed of light rather than 99%, the situation would look drastically different.
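As a rough illustration of why the expansion speed matters (my own back-of-the-envelope numbers, not from the comment): the slower the front, the longer the window during which an expanding civilization's light is visible before the front itself arrives.

```python
# Back-of-the-envelope: how long an expanding civilization's light could be
# visible before its expansion front arrives. Illustrative only; it ignores
# acceleration, relativistic effects, and detection limits.

def warning_window_years(distance_ly, speed_frac_c):
    """Years between first light arriving and the expansion front arriving."""
    light_arrival = distance_ly                # light takes D years
    front_arrival = distance_ly / speed_frac_c
    return front_arrival - light_arrival

for v in (0.10, 0.50, 0.99):
    window = warning_window_years(1000, v)
    print(f"v = {v:.2f}c, source 1000 ly away: visible for ~{window:,.0f} years")
# 0.10c -> ~9,000 years of advance visibility; 0.99c -> ~10 years.
```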

comment by jacob_cannell · 2014-12-22T05:30:27.711Z · LW(p) · GW(p)

The Great Filter dogma has a number of problems.

If you start with any reasonable universe simulation model prior and sample it to get a prior probability distribution over habitable planets forming - you get lots and lots of them. You would need enormous evidence - like actually searching all of the galaxy - to overcome this prior.
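One crude way to see the scale of that prior is a Drake-style product of factors (every value below is an illustrative guess of mine, not a measurement and not from the comment):

```python
# Drake-style order-of-magnitude sketch; every factor value is an
# illustrative assumption, not a measurement.
stars_in_galaxy      = 2e11   # rough Milky Way star count
frac_with_planets    = 0.5    # fraction of stars with planetary systems
habitable_per_system = 0.1    # habitable-zone planets per system
frac_life_arises     = 1e-3   # pessimistic guess for abiogenesis

habitable = stars_in_galaxy * frac_with_planets * habitable_per_system
with_life = habitable * frac_life_arises
print(f"habitable planets ~ {habitable:.0e}, planets with life ~ {with_life:.0e}")
# Even with pessimistic guesses the counts stay enormous, which is the prior
# the comment says would take enormous evidence to overcome.
```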

Furthermore, we need to factor in universe selection in the multiverse. Biophilic universes can probably engineer space-time by creating artificial new universes - effectively shaping the prior probability distribution over the entire multiverse (even if the physics of this seems low-probability, it amplifies itself from nothing). Thus, life is far, far more likely than it otherwise would be, because life is a necessary component of replicating universes, and replicating universes dominate the multiverse.

And finally, the entire idea of the great filter is based on an extremely specific and low-probability model of the future evolution of superintelligent civilization - Dyson spheres and other nonsense.

See my reply here. Basically entropy/temperature is computational stupidity, and advanced civilizations need to move into a low entropy environment (like the intergalactic medium), becoming cold dark matter.

Replies from: JoshuaZ
comment by JoshuaZ · 2014-12-23T16:21:10.282Z · LW(p) · GW(p)

If you start with any reasonable universe simulation model prior and sample it to get a prior probability distribution over habitable planets forming - you get lots and lots of them

You don't need anything so fancy to get this conclusion. Empirical data and very basic models suggest that there are a lot of planets.

You would need enormous evidence - like actually searching all of the galaxy - to overcome this prior.

This does not follow, since you don't know what the probabilities of the other factors, beyond habitable planets arising, are. Moreover, we have effectively searched far beyond just one galaxy: if civilizations are common, then they aren't doing anything at all to show it - there's no sign of stellar lifting, of Dyson sphere building, or of anything else that looks unnatural. That's a serious problem. We live in a universe that apparently has a lot of life-bearing planets, and the evidence shows that there's very little large-scale civilization. So what should one conclude?

Biophilic universes can probably engineer space-time by creating artificial new universes

"Probably" This means what? Why is this even a located hypothesis? Why should a universe being likely to have lots life be a universe more likely to have more new universes?

effectively shaping the prior probability distribution over the entire multiverse. (even if the physics of this seem low probability, it amplifies itself from nothing)

This is assuming an extremely strong form of multiverse, not just one that has differences in the fundamental constants of physics (questionable itself) but one where the laws of physics themselves can diverge. I see no good reason to assume such. Moreover, if one considers anything like that to be likely, it makes the situation even worse, because it is another reason to expect to see lots of civilizations, which we don't.

And finally, the entire idea of the great filter is based on an extremely specific and low-probability model of the future evolution of superintelligent civilization - Dyson spheres and other nonsense.

Calling something nonsense doesn't make it go away. And yes, any specific construction may or may not occur - but the idea that civilizations exist and are leaving massive amounts of energy to go completely to waste requires an explanation. One should be worried when one is labeling stellar engineering as "nonsense" while taking the engineering of new universes in a broad multiverse setting as a given. One of these is much closer to the established laws of physics.

See my reply here. Basically entropy/temperature is computational stupidity, and advanced civilizations need to move into a low entropy environment (like the intergalactic medium), becoming cold dark matter.

This is at best confused. Yes, doing operations takes energy. But you'd still rather use the available energy to do computations. There's no point in wasting it. As for the idea that this somehow involves "cold dark matter" - you are claiming that they are doing their computations on hardware made out of what, exactly? Hidden MACHOs?

comment by Dentin · 2014-12-20T18:24:20.288Z · LW(p) · GW(p)

Actually, I don't find these conjectures desperate or crazy at all, because it's what I'd do. Moving off this biological substrate onto something more reliable, hardy, efficient, and copyable seems like a no-brainer level "Good Idea". If there's any advanced intelligent life out there at all, one of its highest priorities is going to be finding more durable substrate to live in than sloppy, unoptimized and non-designed biochemistry.

Replies from: Lalartu, advancedatheist
comment by Lalartu · 2014-12-20T20:29:18.537Z · LW(p) · GW(p)

Point is, most likely there aren't any advanced (that is, starfaring, Dyson-sphere-building, and so on) civilizations at all.

Replies from: passive_fist
comment by passive_fist · 2014-12-22T00:55:07.895Z · LW(p) · GW(p)

More and more (as we continue to move up the ladder of technological progress) this is seeming like a valid and plausible hypothesis. Which is very disconcerting to me.

Replies from: chaosmage
comment by chaosmage · 2014-12-23T14:07:15.026Z · LW(p) · GW(p)

Wouldn't the opposite be more disconcerting?

Replies from: passive_fist
comment by passive_fist · 2014-12-23T21:03:48.136Z · LW(p) · GW(p)

Having a dyson-sphere-building alien race right next door would be worrisome, but not having any such civilizations at all? That's very worrisome, because it means the Great Filter could be waiting for us.

comment by advancedatheist · 2014-12-20T20:30:23.499Z · LW(p) · GW(p)

I heard a story about a guy in the UK who has spent a fortune to make his "biological substrate" look like Kim Kardashian's. Just because humans can think of doing all kinds of things, it doesn't follow in the least that (1) nonhuman things vaguely analogous to humans exist elsewhere in the universe; and (2) these things would want to do anything to themselves like what you imagine.

Basically this whole ET idea has turned into a waste of time for people who have the IQs to do more productive things with their minds. ETs live beyond our world, they live forever, and they can assume different "substrates." For a site devoted to exploring cognitive biases, why don't you explore how this ET fantasy has basically become a replacement for the god beliefs rationalists would reject in other contexts?