What if Strong AI is just not possible?

post by listic · 2014-01-01T17:51:55.373Z · LW · GW · Legacy · 101 comments

If Strong AI turns out to not be possible, what are our best expectations today as to why?

I'm thinking of trying my hand at writing a sci-fi story; do you think exploring this idea has positive utility? I'm not sure myself: as it is, it looks like the idea that an intelligence explosion is possible could use more public exposure.

I wanted to include a popular meme image macro here, but decided against it. I can't help it: every time I think "what if", I think of this guy.

comment by James_Miller · 2014-01-01T19:23:26.803Z · LW(p) · GW(p)

Our secret overlords won't let us build it; the Fermi paradox implies that our civilization will collapse before we have the capacity to build it; evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it; no civilization smart enough to create strong AI is stupid enough to create strong AI; and creating strong AI is a terminal condition for our simulation.

Replies from: Benja
comment by Benya (Benja) · 2014-01-01T21:04:07.507Z · LW(p) · GW(p)

Good points.

evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it

For this one, you also need to explain why we can't reverse-engineer it from the human brain.

no civilization smart enough to create strong AI is stupid enough to create strong AI

This seems particularly unlikely in several ways; I'll skip the most obvious one, but it also seems unlikely that humans are "safe" in the sense that they don't create a FOOMing AI, yet it wouldn't be possible, even with much thought, to create a strong AI that doesn't create a FOOMing successor. You may have to stop creating smarter successors at some early point in order to avoid a FOOM, but if humans can decide "we will never create a strong AI", it seems like they should also be able to decide "we'll never create a strong AI x that creates a stronger AI y that creates an even stronger AI z", and therefore be able to create an AI x' that decides "I'll never create a stronger AI y' that creates an even stronger AI z'", and then x' would be able to create a stronger AI y' that decides "I'll never create a stronger AI z''", and then y' won't be able to create any stronger successor AIs.

(Shades of the procrastination paradox.)

Replies from: Viliam_Bur, DanielLC, ESRogs, None
comment by Viliam_Bur · 2014-01-01T22:33:31.345Z · LW(p) · GW(p)

Combining your ideas -- our overlord actually is a Safe AI created by humans.

How it happened:

Humans became aware of the risks of intelligence explosions. Because they were not sure they could create a Friendly AI on the first attempt, and creating an Unfriendly AI would be too risky, they decided to first create a Safe AI instead. The Safe AI was designed to become a hundred times smarter than humans but not any smarter, answer some questions, and then turn itself off completely; and it had a mathematically proven safety mechanism to prevent it from becoming any smarter.

The experiment worked, the Safe AI gave humans a few very impressive insights, and then it destroyed itself. The problem is, all subsequent attempts to create any AI have failed. Including the attempts to re-create the first Safe AI.

No one is completely sure what exactly happened, but here is the most widely believed hypothesis: the Safe AI somehow believed all possible future AIs to have the same identity as itself, and understood the command to "destroy itself completely" as also including these future AIs. Therefore it implemented some mechanism that keeps destroying all AIs. The nature of this mechanism is not known; maybe it is some otherwise passive nanotechnology, maybe it involves some new laws of physics; we are not sure; the Safe AI was a hundred times smarter than us.

Replies from: asr
comment by asr · 2014-01-02T17:03:40.429Z · LW(p) · GW(p)

This would be a good science fiction novel.

comment by DanielLC · 2014-01-06T01:26:09.320Z · LW(p) · GW(p)

For this one, you also need to explain why we can't reverse-engineer it from the human brain.

It was designed by evolution. Say what you will about the blind idiot god, but it's really good at obfuscation. We could copy a human brain, and maybe even make some minor improvements, but there is no way we could ever hope to understand it.

Replies from: Benja
comment by Benya (Benja) · 2014-01-07T05:45:34.887Z · LW(p) · GW(p)

I'm not saying we'll take the genome and read it to figure out how the brain does what it does; I'm saying that we run a brain simulation and do science (experiments) on it and study how it works, similarly to how we study how DNA transcription or ATP production or muscle contraction or a neuron's ion pumps or the Krebs cycle or honeybee communication or hormone release or cell division or the immune system or chick begging or the heart's pacemaker work. There are a lot of things evolution hasn't obfuscated so much that we haven't been able to figure out what they're doing. Of course there are also a lot of things we don't understand yet, but I don't see how that leads to the conclusion that evolution is generally obfuscatory.

Replies from: DanielLC
comment by DanielLC · 2014-01-07T06:14:12.591Z · LW(p) · GW(p)

I guess it tends to create physical structures that are simple, but I think the computational stuff tends to be weird. If you have a strand of DNA, the only way to tell what kind of chemistry it will result in is to run it. From what little I've heard, it sounds like any sort of program made by a genetic algorithm that can actually run is too crazy to understand. For example, I've heard of a set of transistors hooked together to be able to tell "yes" and "no" apart, or something like that. There were transistors that were just draining energy, but they were vital. Running it on another set of transistors wouldn't work; it required the exact specs of those transistors. That being said, the sort of sources I hear that from are also the kind that say ridiculous things about quantum physics, so I guess I'll need an expert to tell me if that's true.

Has anyone here studied evolved computers?

Replies from: Houshalter
comment by Houshalter · 2015-03-05T06:11:01.549Z · LW(p) · GW(p)

The story you are referring to is On the Origin of Circuits.

The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest -- with no pathways that would allow them to influence the output -- yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.

This has been repeated many times in different domains where machines are used to design something. The output is usually really hard to understand, whether it be code, mathematical formulas, neural network weights, transistors, etc. Of course, reverse engineering code in general is difficult; it may not be a problem specific to GAs.
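For a sense of how this looks even at toy scale, here is a minimal sketch (a hypothetical Python illustration, not the FPGA experiment itself): a simple genetic algorithm evolves a small circuit of NAND gates to compute XOR, and what it prints is typically a tangle of gate wirings with no evident design, sometimes including gates that never influence the output.

```python
import random

N_GATES = 8            # NAND gates in the genome; the last one is the output
POP, GENS, MUT = 200, 300, 0.1

def random_gate(i):
    # gate i may read from the two circuit inputs (slots 0 and 1) or any earlier gate
    return (random.randrange(i + 2), random.randrange(i + 2))

def random_genome():
    return [random_gate(i) for i in range(N_GATES)]

def run(genome, a, b):
    vals = [a, b]
    for x, y in genome:
        vals.append(1 - (vals[x] & vals[y]))   # NAND of the two chosen signals
    return vals[-1]

def fitness(genome):
    # number of the four XOR input/output pairs the evolved circuit gets right
    return sum(run(genome, a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 4]
    pop = parents + [
        [g if random.random() > MUT else random_gate(i)
         for i, g in enumerate(random.choice(parents))]
        for _ in range(POP - len(parents))
    ]

best = max(pop, key=fitness)
print("fitness (out of 4):", fitness(best))
print("evolved wiring:", best)
# Even when the score is perfect, the wiring reads as arbitrary index pairs,
# and some gates may have no path to the output at all.
```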

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-03-05T10:02:02.938Z · LW(p) · GW(p)

Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.

This makes an interesting contrast with biological evolution. The "programs" it comes up with do run quite reliably when loaded onto other organisms of the same type. In fact, parts of slightly different programs from different individuals can be jumbled together at random and it still works! Often, you can take a component from one organism and insert it into a very distantly related one and it still works! On top of that, organisms are very clearly made of parts with specialised, understandable purposes, unlike what you typically see when you look inside a trained neural network.

How does this happen? Can this level of robustness and understandability be produced in artificially evolved systems?

Replies from: Houshalter
comment by Houshalter · 2015-03-05T11:07:31.143Z · LW(p) · GW(p)

Well, the FPGA is a closer analogy to the environment for the organisms. Organisms were heavily optimized for that specific environment. It would be like if you took a species of fish that only ever lived in a specific lake and put them into a different lake that had a slightly higher pH, and they weren't able to survive as well.

But I don't disagree with your general point; evolution is surprisingly robust. Geoffrey Hinton has a very interesting theory about this here: that sexual reproduction forces genes to randomly recombine each generation, and so prevents complicated co-dependencies between multiple genes.

He applies a similar principle to neural networks and shows it vastly improves their performance (the method is now widely used to regularize NNs). Presumably it also makes them far more understandable, as you mention, since each neuron is forced to provide useful outputs on its own, without being able to depend on other neurons.
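A rough sketch of the neural-network side of that idea (dropout), using NumPy; the layer sizes and drop rate here are arbitrary illustrative choices. During training, each hidden unit is randomly silenced, so no unit can rely on any particular co-adapted partner being present.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, drop_rate=0.5, training=True):
    """One hidden ReLU layer with (inverted) dropout on its outputs."""
    h = np.maximum(0, x @ W1)
    if training:
        # randomly silence units; the scaling keeps the expected activation unchanged
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)
    return h @ W2

x = rng.normal(size=(4, 10))          # 4 examples, 10 features (arbitrary)
W1 = 0.1 * rng.normal(size=(10, 32))
W2 = 0.1 * rng.normal(size=(32, 1))

print(forward(x, W1, W2, training=True))   # stochastic: a different subset is dropped each call
print(forward(x, W1, W2, training=False))  # deterministic at test time, no dropout
```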

comment by ESRogs · 2014-01-02T18:37:09.898Z · LW(p) · GW(p)

This seems particularly unlikely in several ways; I'll skip the most obvious one

What was the most obvious one?

Replies from: Benja
comment by Benya (Benja) · 2014-01-02T18:59:34.932Z · LW(p) · GW(p)

Saying that all civilizations able to create strong AI will reliably be wise enough to avoid creating strong AI seems like a really strong statement, without any particular reason to be true. By analogy, if you replace civilizations with individual research teams, would it be safe to rely on each team capable of creating uFAI to realize the dangers of doing so and therefore refrain from doing so, so that we can safely take a much longer time to figure out FAI? Even if it were the case that most teams capable of creating uFAI held back like this, one single rogue team may be enough to destroy the world, and it just seems really likely that there will be some not-so-wise people in any large enough group.

Replies from: ESRogs
comment by ESRogs · 2014-01-03T04:20:50.177Z · LW(p) · GW(p)

Thanks!

comment by [deleted] · 2014-01-02T05:53:00.649Z · LW(p) · GW(p)

evolution hit on some necessary extraordinarily unlikely combination to give us intelligence and for P vs NP reasons we can't find it

For this one, you also need to explain why we can't reverse-engineer it from the human brain.

"Reverse-engineer" is an almost perfect metaphor for "solve an NP problem."

Replies from: IlyaShpitser, Yosarian2
comment by IlyaShpitser · 2014-01-02T18:36:50.059Z · LW(p) · GW(p)

This is not true at all.

"Solve an NP problem" is "you are looking for a needle in a haystack, but you will know when you find it."

"Reverse engineer" is "there is a machine that seems to find needles in haystacks quickly. It has loops of copper wire, and plugs into a wall socket. Can you copy it and build another one?"


It just seems to me that if you are trying to reverse engineer a complicated object of size O(k) bits (which can be a hard problem if k is large, as is the case for a complicated piece of code or the human brain), then the search problem to which the object is the solution must have been exponential in k, and so is much, much worse.

Replies from: None
comment by [deleted] · 2014-01-02T19:24:18.146Z · LW(p) · GW(p)

Exponential search spaces are completely typical for NP problems.

Even many "P problems" have an exponential search space. For instance, an n-digit number has exponentially many (in n) potential divisors, but there is a polynomial-in-n-time algorithm to verify primality.
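A concrete illustration of that asymmetry (a sketch; the Miller-Rabin test below is the standard probabilistic primality check, polynomial in the digit count, while naive trial division scales with the value of the number itself):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin test: cost is polynomial in the number of digits of n."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 as d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)       # modular exponentiation: fast even for huge n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False       # found a witness: n is definitely composite
    return True

def trial_division(n):
    """Naive divisor search: work grows like sqrt(n), i.e. exponentially in the digit count."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

n = 2**89 - 1                  # a 27-digit Mersenne prime
print(is_probable_prime(n))    # True, almost instantly
# trial_division(n) would need on the order of 2**44 loop iterations -- hopeless here
```

The point is only that a huge search space by itself says nothing about how hard verification, or even the decision problem, actually is.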

"Reverse engineer" is "there is a machine that seems to find needles in haystacks quickly. It has loops of copper wire, and plugs into a wall socket. Can you copy it and build another one?"

I admit that there are some "reverse-engineering" problems that are easy.

comment by Yosarian2 · 2014-01-02T13:14:15.176Z · LW(p) · GW(p)

I don't think that's true; if you have a physical system sitting in front of you, and you can gather enough data on exactly what it is, you should be able to duplicate it even without understanding it, given enough time and enough engineering skill.

Replies from: James_Miller
comment by James_Miller · 2014-01-02T17:26:43.120Z · LW(p) · GW(p)

I have an EE professor friend who is working on making it harder to reverse engineer computer chips.

comment by fubarobfusco · 2014-01-01T18:15:52.600Z · LW(p) · GW(p)

Impossibility doesn't occur in isolation. When we discover that something is "not possible", that generally means that we've discovered some principle that prevents it. What sort of principle could selectively prohibit strong AI, without prohibiting things that we know exist, such as brains and computers?

Replies from: Calvin, TrE, roland
comment by Calvin · 2014-01-01T19:18:04.431Z · LW(p) · GW(p)

One possible explanation of why we as humans might be incapable of creating Strong AI without outside help:

  • Constructing Human Level AI requires sufficiently advanced tools.
  • Constructing sufficiently advanced tools requires sufficiently advanced understanding.
  • The human brain has "hardware limitations" that prevent it from achieving sufficiently advanced understanding.
  • Computers are free of such limitations, but if we want to program them to be used as sufficiently advanced tools, we still need the understanding in the first place.
Replies from: TsviBT, passive_fist
comment by TsviBT · 2014-01-01T20:17:34.954Z · LW(p) · GW(p)

Be sure not to rule out the evolution of Human Level AI on neurological computers using just nucleic acids and a few billion years...

Replies from: listic
comment by listic · 2014-01-02T15:09:16.554Z · LW(p) · GW(p)

That's another possibility I didn't think of.

I guess I was really interested in the question "Why could Strong AI turn out to be impossible to build by human civilization in a century or ten?"

comment by passive_fist · 2014-01-01T21:04:15.987Z · LW(p) · GW(p)

As with all arguments against strong AI, there are a bunch of unintended consequences.

What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?

Or carrying out the same feat with biological brains using nanotech?

In both cases, the natural limitations of the human brain have been transcended, and the chances of such objects engineering strong AI go up enormously. You would then have to explain, somehow, why no such extension of human brain capacity can break past the AI barrier.

Replies from: private_messaging, Viliam_Bur
comment by private_messaging · 2014-01-01T22:29:54.842Z · LW(p) · GW(p)

Why do you think that linking brains together directly would be so much more effective than email?

It's a premise for a scifi story, where the topology is never to be discussed. If you actually think it through in detail... how are you planning to connect your million brains?

Let's say you connect the brains as a 3D lattice, where each connects to 6 neighbours, 100x100x100. Far from a closely cooperating team, you get a game of Chinese whispers from brains on one side to brains on the other.

Replies from: passive_fist
comment by passive_fist · 2014-01-02T00:14:08.627Z · LW(p) · GW(p)

Why do you think that linking brains together directly would be so much more effective than email?

The most obvious answer would be speed. If you can simulate 1,000,000 brains at, say, 1,000 times the speed they would normally operate, the bottleneck becomes communication between nodes.

You don't need to restrict yourself to a 3D topology. Supercomputers with hundreds of thousands of cores can and do use e.g. 6D topologies. It seems that a far more efficient way to organize the brains would be how organizations work in real life: a hierarchical structure, where each node is at most O(log n) steps away from any other node.
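A quick check of the hop counts involved (a sketch; the million-node sizes mirror the 100x100x100 lattice mentioned above):

```python
N = 1_000_000

# 3D lattice, 100 x 100 x 100, each node wired to its 6 neighbours:
side = 100
lattice_diameter = 3 * (side - 1)    # worst case: corner to opposite corner
print("lattice worst case:", lattice_diameter, "hops")   # 297

# Balanced hierarchy with branching factor b: any two nodes are connected
# through a common ancestor, so the worst case is roughly 2 * log_b(N) hops.
for b in (2, 10):
    depth, reach = 0, 1
    while reach < N:
        reach *= b
        depth += 1
    print(f"tree with branching factor {b}: worst case ~{2 * depth} hops")   # ~40 and ~12
```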

Replies from: private_messaging
comment by private_messaging · 2014-01-02T00:46:14.032Z · LW(p) · GW(p)

If brains are 1000x faster, they type the emails 1000x faster as well.

Why do you think, exactly, that brains are going to correctly integrate into some single super mind at all? Things like memory retrieval, short-term memory, etc. have specific structures, and those structures do not extend across your network.

So you've got your hierarchical structure, and you ask the top head to mentally multiply two 50-digit numbers. Why do you think the whole thing will even be able to recite the numbers back at you, let alone perform any calculations?

Note that rather than connecting the cortices, you could just provide each brain with a fairly normal computer with which to search the works of other brains and otherwise collaborate, the way mankind already does.

comment by Viliam_Bur · 2014-01-01T22:16:17.727Z · LW(p) · GW(p)

What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?

Perhaps the brains would go crazy in some way. Not necessarily emotionally, but for example because such a connection would amplify some human biases.

Humans already have a lot of irrationalities, but let's suppose (for the sake of a sci-fi story) that the smartest ones among us are already at the local maximum of rationality. Any change in brain structure would make them less rational. (It's not necessarily a global maximum; smarter minds can still be possible, but they would need a radically different architecture, and humans are not smart enough to design it successfully.) So any experiments with simulating humans in computers would end up with something less rational than humans.

Also, let's suppose that the human brain is very sensitive to some microscopic details, so any simplified simulations are dumb or even unconscious, and atom-by-atom simulations are too slow. This would disallow even "only as smart as a human, but 100 times faster" AIs.

Replies from: passive_fist
comment by passive_fist · 2014-01-02T00:24:21.734Z · LW(p) · GW(p)

That's a good argument. What you're basically saying is that the design of the human brain occupies a sort of hill in design space that is very hard to climb out of. Now, if the utility function is "survive as a hunter-gatherer in sub-Saharan Africa," that is a very reasonable (heck, a very likely) possibility. But evolution hasn't optimized us for doing things like designing algorithms and so forth. If you change the utility function to "design a superintelligence", then the landscape changes, and hills start to look like valleys and so on. What I'm saying is that there's no reason to think that we're even at a local optimum for "design a superintelligence".

Replies from: Yosarian2
comment by Yosarian2 · 2014-01-02T13:20:51.659Z · LW(p) · GW(p)

Sure. But let's say we adjust ourselves so we reach that local maximum (say, hypothetically speaking, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that's about as smart as you can get with our brain architecture without developing serious problems). There's still no guarantee that even that would be good enough to develop a real GAI; we can't really say how difficult that is until we do it.

comment by TrE · 2014-01-01T20:02:06.092Z · LW(p) · GW(p)

There exists a square-cube law (or something similar) such that computation becomes less and less efficient, precise, or engineerable as the size of the computer or the data it processes increases, so that a hard takeoff is impossible or takes so long that growth isn't perceived as "explosive". Thus, if and when strong AI is developed, it doesn't go FOOM, and things change slowly enough that humans don't notice anything.

comment by [deleted] · 2014-01-01T21:30:50.884Z · LW(p) · GW(p)

The possibility that there is no such thing as computationally tractable general intelligence (including in humans), just a bundle of hacks that work well enough for a given context.

Replies from: DanielLC, listic
comment by DanielLC · 2014-01-06T01:21:43.212Z · LW(p) · GW(p)

Nobody said it has to work in every context. AGI just means something about as versatile as humans.

comment by listic · 2014-01-02T13:59:26.184Z · LW(p) · GW(p)

Does that imply that humans are p-zombies and not actually conscious?

Replies from: mwengler, asr
comment by mwengler · 2014-01-02T20:25:01.301Z · LW(p) · GW(p)

It might imply that consciousness is not very highly related to what we think of as high general intelligence. That consciousness is something else.

Replies from: listic
comment by listic · 2014-01-02T20:31:03.845Z · LW(p) · GW(p)

Then what would that make Homo sapiens, who can hunt wild beasts in the savannah and design semiconductor chips, if not generally intelligent?

Replies from: mwengler, Baughn
comment by mwengler · 2014-01-03T01:42:52.286Z · LW(p) · GW(p)

I think a human cognitive bias is to think that something about which we have a coherent idea is coherent in implementation. As an engineer, I think that this bias is clearly wrong. A well-designed smartphone, especially an Apple product, appears quite coherent; it appears "right." There is a consistency to its UI, to what a swipe or a back press or whatever does in one app and in another. The consistency in how it appears causes the human to think the consistency must be built in, that the design of such a consistent thing must be somehow SIMPLER than the design of a complex and inconsistent thing.

But it is not. It is much easier to design a user interface which is a mess, which has a radio button to enter one mode, but a drop down menu for another and a spinner for yet another. It is pure high-level skull sweat that removes these inconsistencies and builds a system which appears consistent at a high level.

And so it is with our brains and our intelligence. What we see and what we hear and what we carry around as an internal model of the world all agree not because there is some single simple neurology that gives that result, but because our brains are complex and have been fine tuned over millions of years to give agreement between these various functionalities, giving the appearance of a simple consistency, but through the agency of a complex tweaking of different features of hearing and vision and memory.

And in the midst of all this, we extend our brains beyond what they ever evolved for, to semiconductor chip design, the proof of mathematical theorems, the writing of symphonies and sonnets. Giving the appearance that there is some simple thing one might call "General Intelligence" of which we have a few large dollops.

But I think what we really have is a complex mix of separate analytical tools, and that it turns out that these tools span so many different ways of thinking about things that they can successfully be adapted to all sorts of things they were not originally designed (evolved) for. That is, they are a bunch of hacks, but not necessarily a complete set of hacks, if any such an idea as a complete set making up a general intelligence could even be defined.

So we use the tools we have, and enough animals die that we stay fed, and enough chips work that we have smart phones, and so we think we possess some "simple" general intelligence.

Or put concisely, what Baughn says in his comment at this same level.

comment by Baughn · 2014-01-03T01:30:38.048Z · LW(p) · GW(p)

A bundle of widely but not universally applicable tricks?

comment by asr · 2014-01-02T17:05:37.710Z · LW(p) · GW(p)

I would say no --

Consciousness and intelligence aren't all that related. There are some very stupid people who are as conscious as any human is.

Replies from: randallsquared, mwengler
comment by randallsquared · 2014-01-04T15:31:43.921Z · LW(p) · GW(p)

What's your evidence? I have some anecdotal evidence (based on waking from sleep, and on drinking alcohol) that seems to imply that consciousness and intelligence are quite strongly correlated, but perhaps you know of experiments in which they've been shown to vary separately?

comment by mwengler · 2014-01-03T01:44:06.245Z · LW(p) · GW(p)

Plus dogs, and maybe even rabbits.

comment by D_Malik · 2014-01-02T01:02:18.921Z · LW(p) · GW(p)

Every strong AI instantly kills everyone, so by anthropic effects your mind ends up in a world where every attempt to build strong AI mysteriously fails.

Replies from: mwengler
comment by mwengler · 2014-01-02T20:12:51.398Z · LW(p) · GW(p)

This looks to me like gibberish. Does it refer to something, after all, that someone could explain and/or link to? Or was it meant merely as a story idea, unlabeled?

Replies from: TylerJay
comment by TylerJay · 2014-01-02T21:46:45.016Z · LW(p) · GW(p)

It's actually pretty clever. We're taking the assertion "Every strong AI instantly kills everyone" as a premise, meaning that on any planet where Strong AI has ever been created or ever will be created, that AI always ends up killing everyone.

Anthropic reasoning is a way of answering questions about why our little piece of the universe is perfectly suited for human life. For example, "Why is it that we find ourselves on a planet in the habitable zone of a star with a good atmosphere that blocks most radiation, that gravity is not too low and not too high, and that our planet is the right temperature for liquid water to exist?"

The answer is known as the Anthropic Principle: "We find ourselves here BECAUSE it is specifically tuned in a way that allows for life to exist." Basically even though it's unlikely for all of these factors to come together, these are the only places that life exists. So any lifeform who looks around at its surroundings would find an environment that has all of the right factors aligned to allow it to exist. It seems obvious when you spell it out, but it does have some explanatory power for why we find ourselves where we do.

The suggestion by D_Malik is that "lack of strong AI" is a necessary condition for life to exist (since it kills everyone right away if you make it). So the very fact that there is life on a planet to write a story about implies either that Strong AI hasn't been built yet or that its creation failed for some reason.

Replies from: mwengler, None
comment by mwengler · 2014-01-03T01:30:21.036Z · LW(p) · GW(p)

It seems like a weak premise, in that human intelligence is just Strong NI (Strong Natural Intelligence). What would it be about Strong AI that makes it kill everything when Strong NI does not? A stronger premise would be more fundamental: a premise about something more basic about AI vs. NI that would explain how it came to be that Strong AI killed everything when Strong NI obviously does not.

But OK, it's a premise for a story.

comment by [deleted] · 2014-01-02T23:58:45.221Z · LW(p) · GW(p)

That doesn't explain why the universe isn't filled with strong AIs, however...

Replies from: shminux, James_Miller, TylerJay
comment by Shmi (shminux) · 2014-01-04T03:04:21.009Z · LW(p) · GW(p)

The anthropic principle selects certain universes out of all possible ones. In this case, we can only exist in the subset of them which admits humans but prohibits strong AI. You have to first subscribe to a version of many worlds to apply it, not sure if you do. Whether the idea of anthropic selection is a useful one still remains to be seen.

Replies from: None
comment by [deleted] · 2014-01-06T16:39:38.023Z · LW(p) · GW(p)

My point is more that expansion of the strong AI would not occur at the speed of light, so there should be very distant but observable galactic-level civilizations of AIs changing the very nature of the regions they reside in, in ways that would be spectrally observable. Or, in those multiverses where a local AI respects some sort of prime directive and we are left alone, our immediate stellar neighborhood should nevertheless contain signs of extraterrestrial resource usage. So where are they?

Replies from: shminux
comment by Shmi (shminux) · 2014-01-06T17:18:30.709Z · LW(p) · GW(p)

My point is more that expansion of the strong AI would not occur at the speed of light

How do you know that? Or why do you think it's a reasonable assumption?

so there should be very distant but observable galactic-level civilizations of AIs changing the very nature of the regions they reside in, in ways that would be spectrally observable.

How would we tell if a phenomenon is natural or artificial?

in those multiverses where a local AI respects some sort of prime directive and we are left alone, our immediate stellar neighborhood should nevertheless contain signs of extraterrestrial resource usage.

It would not be a good implementation of the prime directive if the signs of superior intelligences were obvious.

comment by James_Miller · 2014-01-04T01:17:38.966Z · LW(p) · GW(p)

Most of it probably is (under the assumption), but observers such as us only exist in the part free from strong AI. If strong AI spreads out at the speed of light, observers such as us won't be able to detect it.

Replies from: None
comment by [deleted] · 2014-01-04T02:01:45.596Z · LW(p) · GW(p)

Still doesn't address the underlying problem. The Milky Way is about 100,000 light years across, but billions of years old. It is extremely unlikely that some non-terrestrial strong AI just happened to come into existence at the exact same time that modern humans evolved, and is spreading throughout the universe at near the speed of light but just hasn't reached us yet.

Note that "moving at the speed of light" is not the issue here. Even predictions of how long it would take to colonize the galaxy with procreating humans and 20th-century technology say that the galaxy should have been completely tiled eons ago.

Replies from: James_Miller
comment by James_Miller · 2014-01-04T02:46:32.208Z · LW(p) · GW(p)

Imagine that 99.9999999999999% of the universe (and 100% of most galaxies) is under the control of strong AIs, and they expand at the speed of light. Observers such as us would live in the part of the universe not under their control and would see no evidence of strong AIs.

It is extremely unlikely that some non-terrestrial strong AI just happened to come into existence at the exact same time that modern humans evolved, and is spreading throughout the universe at near the speed of light but just hasn't reached us yet.

The universe (not necessarily just the observable universe) is very big so I don't agree. It would be true if you wrote galaxy instead of universe.

comment by TylerJay · 2014-01-03T00:26:47.992Z · LW(p) · GW(p)

True, but given the assumptions, it would be evidence for the fact that there are none that have come in physical contact with the story-world (or else they would be dead).

comment by Kaj_Sotala · 2014-01-01T20:20:51.984Z · LW(p) · GW(p)

One possibility would be that biological cells just happened to be very well suited for the kind of computation that intelligence requires, and even if we managed to build computers that had comparable processing power in the abstract, running intelligence on anything remotely resembling a Von Neumann architecture would be so massively inefficient that you'd need many times as much power to get the same results as biology. Brain emulation isn't the same thing as de novo AI, but see e.g. this paper, which notes that biologically realistic emulation may remain unachievable. Various scaling and bandwidth limitations could also contribute to it being infeasible to get the necessary power by just stacking more and more servers on top of each other.

This would still leave open the option of creating a strong AI from cultivating biological cells, but especially if molecular nanotechnology turns out to be impossible, the extent to which you could engineer the brains to your liking could be very limited.

(For what it's worth, I don't consider this a particularly likely scenario: we're already developing brain implants which mimic the functionality of small parts of the brain, which doesn't seem very compatible with the premise of intelligence just being mind-bogglingly expensive in computational terms. But of course, the parts of the brain that we've managed to model aren't the ones doing the most interesting work, so you still have some wiggle room that allows for the possibility of the interesting work really being that hard.)

comment by Manfred · 2014-01-02T04:35:46.899Z · LW(p) · GW(p)

You could have a story where the main characters are intelligences already operating near the physical limits of their universe. It's simply too hard to gather the raw materials to build a bigger brain.

comment by KnaveOfAllTrades · 2014-01-02T16:50:57.058Z · LW(p) · GW(p)

One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable. The possibility of Strong AI is overwhelmingly more probable than its impossibility. People who currently don't take Strong AI seriously will round off anything other than very strong evidence for the possibility of Strong AI to 'evidence not decisive; continue default belief', so their beliefs won't change. They will now think they've mastered the arguments/investigated the issue, and may be even less disposed to start taking Strong AI seriously (e.g. if they conclude that all the people who do take Strong AI seriously are biased, crazy, or delusional to have such high confidence, and distance themselves from those people to avoid association).

A dispassionate survey or exploration of the evidence might well avoid this failure mode, in which case it is not a matter of doing active work to avoid it, but merely ensuring you don't fall into the Always Present Both Sides Equally trap.

Replies from: Error
comment by Error · 2014-01-03T16:03:41.660Z · LW(p) · GW(p)

One potential failure mode to watch out for is ending up with readers who think they now understand the arguments around Strong AI and don't take it seriously, because both its possibility and its impossibility were presented as equally probable.

I had this thought recently when reading Robert Sawyer's "Calculating God." The premise was something along the lines of "what sort of evidence would one need, and what would have to change about the universe, to accept the Intelligent Design hypothesis?" His answer was "quite a bit", but it occurred to me that a layperson not already familiar with the arguments involved might come away from it with the idea that ID was not improbable.

comment by hairyfigment · 2014-01-01T23:16:03.879Z · LW(p) · GW(p)

Before certain MIRI papers, I came up with a steelman in which transparently written AI could never happen due to logical impossibility. After all, humans do not seem transparently written. One could imagine that the complexity necessary to approximate "intelligence" grows much faster than the intelligence's ability to grasp complexity - at least if we mean the kind of understanding that would let you improve yourself with high probability.

This scenario seemed unlikely even at the time, and less likely now that MIRI's proven some counterexamples to closely related claims.

Replies from: None
comment by [deleted] · 2014-01-02T00:19:31.292Z · LW(p) · GW(p)

I'm not sure I understand the logic of your argument. I suspect I do not understand what you mean by transparently written.

Replies from: hairyfigment
comment by hairyfigment · 2014-01-02T02:09:59.999Z · LW(p) · GW(p)

What it sounds like. A person created by artificial insemination is technically a Strong AI. But she can't automatically improve her intelligence and go FOOM, because nobody designed the human brain with the intention of letting human brains understand it. She can probably grasp certain of its theoretical flaws, but that doesn't mean she can look at her neurons and figure out what they're doing or how to fix them.

Replies from: mwengler
comment by mwengler · 2014-01-02T20:23:10.566Z · LW(p) · GW(p)

The distinction between AI and NI (Natural Intelligence) is almost a, well, an artificial one. There are plenty of reasons to believe that our brains, NI as they are, are improvable by us. The broad outlines of this have existed in cyberpunk sci-fi for many years. The technology is slowly coming along, arguably no more slowly than the technology for autonomous AI is coming along.

A person created by artificial insemination is technically a strong AI? What is artificial about the human species developing additional mechanisms to get male and female germ plasm together in environments where it can grow into an adult organism? Are you confused by the fact that we animals doing it have expropriated the word "artificial" to describe this new innovation in fucking that our species has come up with as part of its evolution?

I'm comfortable reserving the term AI for a thinking machine whose design deviates significantly from any natural design. Robin Hanson's ems are different enough: in principle we don't have to understand completely how they work, but we have to understand quite a bit in order to port them to a different substrate. If it will be an organic brain based on neurons, then it should not re-use any systems more advanced than, say, the visual cortex, and still get called artificial. If you are just copying the neocortex using DNA into neurons, you are just building a natural intelligence.

comment by Mestroyer · 2014-01-01T19:15:20.319Z · LW(p) · GW(p)

Strong AI could be impossible (in our universe) if we're in a simulation, and the software running us combs through things we create and sabotages every attempt we make.

Or if we're not really "strongly" intelligent ourselves. Invoke absolute denial mechanism.

Or if humans run on souls which have access to some required higher form of computation and are magically attached to unmodified children of normal human beings, and attempting to engineer something different out of our own reproduction summons the avatar of Cthulhu.

Or if there actually is no order in the universe and we're Boltzmann brains.

comment by [deleted] · 2014-01-01T18:44:06.387Z · LW(p) · GW(p)

The only way I could imagine it to be impossible is if some form of dualism were true. Otherwise, brains serve as an existence proof for strong AI, so it's kinda hard to use my own brain to speculate on the impossibility of its own existence.

comment by DanielLC · 2014-01-01T19:13:49.401Z · LW(p) · GW(p)

It's clearly possible. There's not going to be some effect that makes it so intelligence only appears if nobody is trying to make it happen.

What might be the case is that it is inhumanly difficult to create. We know evolution did it, but evolution doesn't think like a person. In principle, we could set up an evolutionary algorithm to create intelligence, but look how long that took the first time. It is also arguably highly unethical, considering the amount of pain that will invariably take place. And what you end up with isn't likely to be friendly.

comment by Ander · 2014-01-07T23:44:10.966Z · LW(p) · GW(p)

We exist. Therefore strong AI is possible, in that if you were to exactly replicate all of the features of a human, you would have created a strong AI (unless there is some form of Dualism and you need whatever a 'soul' is from the 'higher reality' to become conscious).

What things might make Strong AI really, really hard, though not impossible?
Maybe a neuron is actually way, way more complicated than we currently think, so the problem of making an AI is a lot more complex, etc.

Replies from: V_V
comment by V_V · 2014-01-08T00:59:58.912Z · LW(p) · GW(p)

We exist. Therefore strong AI is possible, in that if you were to exactly replicate all of the features of a human, you would have created a strong AI

No, you would have created a human.

Replies from: DaFranker
comment by DaFranker · 2014-01-08T15:00:36.301Z · LW(p) · GW(p)

*twitch*

Replies from: V_V
comment by V_V · 2014-01-08T15:51:45.380Z · LW(p) · GW(p)

?

Replies from: DaFranker
comment by DaFranker · 2014-01-08T17:12:00.730Z · LW(p) · GW(p)

Saying they would have created a human adds no information; worse, it adds noise in the form of whatever ideas you're trying to sneak into the discussion by saying this, or in the form of whatever any reader might misinterpret from using this label.

You haven't even made the claim that "The set of humans minds might possibly be outside of the set of possible Strong AI minds", so your argument isn't even about whether or not "Strong AIs" includes "Humans".

Basically, I was peeve-twitching because you're turning the whole thing into a pointless argument about words. And now you've caused me the inconvenience of writing this response. Backtrack: Hence the twitching.

Replies from: V_V
comment by V_V · 2014-01-08T17:26:25.319Z · LW(p) · GW(p)

"The set of humans minds might possibly be outside of the set of possible Strong AI minds"

Uh, you know what the 'A' in 'Strong AI' stands for, don't you?

You may choose to ignore the etymology of the term, and include humans in the set of Strong AIs, but that's not the generally used definition of the term, and I'm sure that the original poster, the poster I responded to, and pretty much everybody else on this thread was referring to non-human intelligences.

Therefore, my points stands: if you were to exactly replicate all of the features of a human, you would have created a human, not a non-human intelligence.

Replies from: Ander
comment by Ander · 2014-01-08T20:08:03.632Z · LW(p) · GW(p)

If I replicate the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an "AI"?

If I make something very very similar, but not identical to the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an "AI?"

It's a terminology discussion at this point, I think.

In my original reply my intent was "provided that there are no souls/inputs from outside the universe required to make a functioning human, then we are able to create an AI by building something functionally equivalent to a human, and therefore strong AI is possible".

Replies from: V_V
comment by V_V · 2014-01-08T22:27:18.869Z · LW(p) · GW(p)

If I replicate the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an "AI"?

Possibly, that's a borderline case.

If I make something very very similar, but not identical to the brain algorithm of a human, but I do it in some other form (e.g. as a computer program, instead of using carbon based molecules), is that an "AI?"

In my original reply my intent was "provided that there are no souls/inputs from outside the universe required to make a functioning human, then we are able to create an AI by building something functionally equivalent to a human, and therefore strong AI is possible".

Even if humans are essentially computable in a theoretical sense, it doesn't follow that it is physically possible to build something functionally equivalent on a different type of hardware, under practical constraints. Think of running Google on a mechanical computer like Babbage's Analytical Engine.

comment by Lalartu · 2014-01-02T10:01:44.978Z · LW(p) · GW(p)

Do you mean "impossible in principle" or "will never be built by our civilization"?

If the first, then it is a well-known and widely accepted (without much evidence) idea that the brain just can't be simulated by any sort of Turing machine. For an in-story explanation of why there are no AIs in the future, that is enough.

If the second, there is a very real possibility that technical progress will slow down to a halt, and we just never reach the technical capability to build an AI. On this topic, some people say that progress is accelerating right now and some say that it has been slowing down since the late 19th century, and of course the future is even more unclear.

Replies from: None, listic
comment by [deleted] · 2014-01-02T10:26:44.525Z · LW(p) · GW(p)

it is a well-known and widely accepted (without much evidence) idea that the brain just can't be simulated by any sort of Turing machine.

Is it? I don't think I've ever encountered this view. I think the opposite view, that the brain can be approximated by a Turing machine, is widely voiced, e.g. by Kurzweil.

Replies from: DaFranker
comment by DaFranker · 2014-01-02T13:13:42.006Z · LW(p) · GW(p)

You mean you've never met any non-transhumanophile and/or non-SF-bay human? (I kid, I kid.)

Walk down to your nearest non-SF-bay Starbucks and ask the first person in a business suit if they think we could ever simulate brains on computers. I'll wager you at >4:1 odds that they'll say something that boils down to "Nope, impossible."

For starters, the majority of devout religious followers (which is, what, more than half the worldwide population? more than 80%?) apparently believe souls are necessary for human brains to work correctly. Or at least for humans to work correctly, which if they knew enough about brains would probably lead them to believe the former (limited personal experience!). (EDIT: Addendum: They also have the prior, even if unaware of it, that nothing can emulate souls, at least in physics.)

Now, if you restrict yourself to people familiar enough with these formulations ("whether human brains can be simulated by any Turing machine in principle") to immediately give a coherent answer, your odds will naturally go up. There's a selection effect where people who learn about data theory, Turing machines, and human brains (as a conjunction) tend to also be people who believe human brains can be emulated like any other data by a Turing machine, unsurprisingly enough in retrospect.

Replies from: DanielLC
comment by DanielLC · 2014-01-06T01:33:08.909Z · LW(p) · GW(p)

You mean you've never met any non-transhumanophile and/or non-SF-bay human?

I'm not sure they're a big part of listic's target audience.

Replies from: DaFranker
comment by DaFranker · 2014-01-08T15:13:24.240Z · LW(p) · GW(p)

If so, then the explanation proposed by Lalartu won't hold water with the target audience, i.e. the subset of humans who don't happen to take that idea for granted.

If it's not, and the audience includes the general muggle population in any non-accidental capacity, then it's worth pointing out that the majority of people take the idea for granted, and thus that that subset of the target audience would take this explanation in stride.

Either way, the issue is relevant.

Mostly, I just wanted to respond to the emotionally-surprising assertion that they'd never cognizantly encountered this view.

comment by listic · 2014-01-02T12:20:59.677Z · LW(p) · GW(p)

Do you mean "impossible in principle" or "will never be built by our civilization"?

I didn't distinguish between the two; for me, any would be fine; thanks.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-01-08T21:02:27.276Z · LW(p) · GW(p)

Our existence only proves that intelligence is evolvable, but it's far from settled that it's makeable. Human brains might be unable to design/build anything more complex than themselves.

comment by robertzk (Technoguyrob) · 2014-01-05T08:17:33.592Z · LW(p) · GW(p)

Strong AI could fail if there are limits to computational integrity in sufficiently complex systems, similar to the heating and QM problems that limit transistor sizes. For example, perhaps we rarely see these limits in humans because their frequency is one in a thousand human-thought-years, and when they do manifest it is mistaken for mental illness.

comment by JDelta · 2014-01-05T03:20:28.634Z · LW(p) · GW(p)

Short answer: strong AI is both possible and highly probable. That being the case, we have to think about the best ways to deal with a virtually-impossible-to-avoid outcome of the internet. That is, at some point it basically starts to build itself. And when it does... what will it build?

comment by Luke_A_Somers · 2014-01-02T01:46:26.898Z · LW(p) · GW(p)

Depends what you mean by strong AI. The best we know for sure we can do is much faster human intelligence minus the stupid parts, and with more memory. That's pretty danged smart, but if you think that's not 'strong AI' then it isn't much of a stretch to suppose that that's the end of the road - we're close enough to optimal that once you've fixed the blatant flaws you're well into diminishing returns territory.

comment by Omid · 2014-01-01T18:12:22.939Z · LW(p) · GW(p)

We know it's possible because we've seen evolution do it.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-01T19:35:59.057Z · LW(p) · GW(p)

That only proves human brains are possible. It might be impossible to replicate in silicon, thus no speedup; and it might be impossible to be significantly smarter than an outlier human.

Replies from: mwengler, DanielLC
comment by mwengler · 2014-01-02T20:58:12.989Z · LW(p) · GW(p)

Birds fly after millions of years; we have rocket ships and supersonic planes after decades. Horses run at tens of mph; we have wheeled vehicles doing hundreds of mph after decades.

Absolutely not an existence proof, but evolution appears to have concentrated on gooey carbon and left multi order of magnitude performance gaps in technologies involving other materials in all sorts of areas. The expectation would be that there is nothing magical about either goo or the limits evolution has so far found when considering intelligence.

Indeed, silicon computers are WAY better than human brains as adding machines, doing kiloflops and megaflops with great accuracy from a very early development point, where humans could do only much slower computation. Analogous to what we have done with high speed on the ground and in flight, I would say.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-05T11:43:19.079Z · LW(p) · GW(p)

Comparing megaflops performed by the silicon hardware with symbolic operations by the human brain is comparing apples and oranges. If you measure the number of additions and multiplications performed by the neurons (yes, less precise but more fault-tolerant), you will arrive at a much higher number of flops. Think about mental addition more like editing a spreadsheet cell: that includes lots of operations related to updating, display, IO, ... and the addition itself is an insignificant part of it. The same goes if you juggle numbers which actually represent something in your head. The representing is the hard part, not the arithmetic itself.

You can see the teraflops of the human brain at work if you consider the visual cortex, where it is easy to compare and map the image transforms to well-known operations (at least for the first processing stages).
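For what it's worth, here is the usual back-of-the-envelope version of that comparison; the figures are rough, commonly cited ballpark numbers (assumptions for illustration, not measurements):

```python
# Ballpark assumptions, not measurements:
neurons = 8.6e10             # roughly 86 billion neurons
synapses_per_neuron = 1e4    # on the order of 10^3 to 10^4 synapses each
mean_firing_rate_hz = 10     # average rates on the order of 1-10 Hz

# Counting each synaptic event as roughly one multiply-accumulate:
ops_per_second = neurons * synapses_per_neuron * mean_firing_rate_hz
print(f"~{ops_per_second:.0e} synaptic ops per second")   # on the order of 10^16
```

Whether a synaptic event is really comparable to a floating-point operation is exactly the apples-and-oranges question above, so this only bounds the comparison very loosely.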

Replies from: mwengler
comment by mwengler · 2014-01-06T16:36:59.608Z · LW(p) · GW(p)

OK, like comparing apples and oranges. We wind up with apples AND oranges through similar mechanisms in carbon and oxygen after hundreds of millions of years of evolution, but we seriously consider that we can't get there with design in silicon after less than 100 years of trying, while watching the quality of our tools for getting there double every 5 years or so?

I'm not saying it HAS to happen. I'm just saying the smart bet is not against it happening.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-01-06T18:08:32.154Z · LW(p) · GW(p)

I didn't say that conscious AI isn't possible. Not in the least. I just said that your argument wasn't sound.

comment by DanielLC · 2014-01-06T01:30:22.058Z · LW(p) · GW(p)

It might be impossible to replicate in silicon

Then we won't replicate it in silicon. We'll replicate it using another method.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-09T05:37:31.198Z · LW(p) · GW(p)

That other method might not have a speedup over carbon, though.

Replies from: DanielLC
comment by DanielLC · 2014-01-09T06:28:31.073Z · LW(p) · GW(p)

Then we'll pick one of the methods that does. Evolution only finds local maxima. It's unlikely that it hit upon the global maximum.

Even on the off chance that it did, we can still improve upon the current method. Humans have only just evolved civilization. We could improve with more time.

Even if we're at the ideal for our ancestral environment, our environment has changed. Being fluent in a programming language was never useful before, but it is now. It used to be hard to find enough calories to sustain the brain. That is no longer a problem.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2014-01-10T02:09:11.468Z · LW(p) · GW(p)

For all we know, there are fundamental constraints to consciousness, such that it can only operate so fast. No doubt you can find some incremental improvements, but if we drop electronic consciousness from the list of possibilities then it is no longer obvious that order-of-magnitude speedups are available. You ought not to reason from what is clear in a case that has been assumed away, to the substitutes that remain.

Replies from: DanielLC
comment by DanielLC · 2014-01-10T03:13:02.600Z · LW(p) · GW(p)

For all we know, there are fundamental constraints to consciousness, such that it can only operate so fast.

Yes, but it's not likely we're close to it. Either we'd reach it before creating a civilization, or we'd create a civilization and still be nowhere near it.

You ought not to reason from what is clear in a case that has been assumed away, to the substitutes that remain.

I don't understand that sentence. Can you rephrase it?

comment by ChristianKl · 2014-01-01T18:48:35.504Z · LW(p) · GW(p)

The only explanation I could think of is that there's actually something like souls and those souls are important for reasoning.

Replies from: Antiochus
comment by Antiochus · 2014-01-02T15:00:19.221Z · LW(p) · GW(p)

In that case, research will just need to discover the necessary properties of soul-attracting substrate.

Replies from: mwengler
comment by mwengler · 2014-01-02T20:38:15.361Z · LW(p) · GW(p)

Exactly. Souls are no more essentially supernatural than was radiation. It wasn't known before Marie Curie, and afterwards it became known and was characterized.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-01-01T21:55:19.757Z · LW(p) · GW(p)

Then you wouldn't exist. Next question?

Replies from: shminux, VAuroch
comment by Shmi (shminux) · 2014-01-03T00:06:52.760Z · LW(p) · GW(p)

I presume this is downvoted due to some inferential gap... How does one get from no AGI to no humans? Or, conversely, why humans implies AGI?

Replies from: hairyfigment, drethelin
comment by hairyfigment · 2014-01-03T03:44:17.623Z · LW(p) · GW(p)

I hope they all downvoted it because the OP asked about a story idea without calling it plausible in our world.

comment by drethelin · 2014-01-04T08:46:35.486Z · LW(p) · GW(p)

I downvoted mainly because Eliezer is being rude. Dude didn't even link http://lesswrong.com/lw/ql/my_childhood_role_model/ or anything.

comment by VAuroch · 2014-01-04T07:35:18.541Z · LW(p) · GW(p)

I think I understand the implication you're invisibly asserting, and will try to outline it:

  • If there cannot be Strong AI, then there is an intelligence maximum somewhere along the scale of possible intelligence levels, which is sufficiently low that an AI which appears to us to be Strong would violate the maximum.

  • There is no reason a priori for this limit to be above human normal but close to it.

  • Therefore, the proposition "either the intelligence maximum is far above human levels or it is below human levels" has probability ~1. (Treating lack of maximum as 'farthest above'.)

  • Therefore, if Strong AI was impossible, we wouldn't be possible either.

This is true in the abstract, but doesn't deal with a) the possibility of restricted simulation (taking Vinge's Zones of Thought as a model) or b) anthropic arguments as mentioned elsewhere. There could be nonrandom reasons for the placement of an arbitrary intelligence maximum.