The weakest arguments for and against human level AI

post by Stuart_Armstrong · 2012-08-15T11:04:40.906Z · LW · GW · Legacy · 34 comments


While going through the list of arguments for why to expect human-level AI to happen or be impossible, I was struck by the same tremendously weak arguments that kept coming up again and again. The weakest argument in favour of AI was the perennial:

Moore's Law hence AI!

Lest you think I'm exaggerating how weakly the argument was used, here are some random quotes:

At least Moravec gives a glance towards software, even though it is merely to say that software "keeps pace" with hardware. What is the common scale for hardware and software that he seems to be using? I'd like to put Starcraft II, Excel 2003 and Cygwin on a hardware scale - do these correspond to Pentiums, Ataris, and Colossus? I'm not particularly ripping into Moravec, but if you realise that software is important, then you should attempt to model software progress!

But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain, will suddenly cause an AI to emerge.

The weakest argument against AI was the standard:

Free will (or creativity) hence no AI!

Some of the more sophisticated go "Gödel, hence no AI!". If the crux of your whole argument is that only humans can do X, then you need to show that only humans can do X - not assert it and spend the rest of your paper talking in great detail about other things.

34 comments

Comments sorted by top scores.

comment by CarlShulman · 2012-08-15T18:18:51.723Z · LW(p) · GW(p)

The weakest argument against AI was the standard: Free will (or creativity) hence no AI!

I am most appalled by "philosophical externalism about mental content, therefore no AI." Another silly one is "humans can be produced for free with unskilled labor, so AGI will never be cost-effective."

The weakest argument in favour of AI was the perenial: Moore's Law hence AI!

On the other hand, imagine that computer hardware was stagnant at 1970s levels. It would be pretty plausible that the most efficient algorithms for human-level AI we could find would just be too computationally demanding to experiment with or make practical use of. Hardware on its own isn't sufficient, but it's certainly important for the plausibility of human-level AI when we find performance on so many problems scales with hardware, and our only existence proof of human-level intelligence has high hardware demands.

Also, you occasionally see weak arguments for human-level AI from people who are especially interested in some particular narrow AI field that has reached superhuman performance, and who assume the difficulty of that field is highly representative of all the remaining problems in AI.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-08-15T19:41:49.058Z · LW(p) · GW(p)

..."humans can be produced for free with unskilled labor, so AGI will never be cost-effective".

Not only is this argument inductively weak, the premise seems obviously false, since childcare is actually quite expensive.

Replies from: CarlShulman, DuncanS, TimS
comment by CarlShulman · 2012-08-15T20:04:48.372Z · LW(p) · GW(p)

Yes, it's quite annoying, and also neglects runtime costs.

comment by DuncanS · 2012-08-16T20:37:54.355Z · LW(p) · GW(p)

Also the argument applies equally well to lots of non-intellectual tasks where a cheap human could well be a replacement for an expensive machine.

comment by TimS · 2012-08-16T00:51:43.923Z · LW(p) · GW(p)

Before the recent normalization of women in the workforce, I'm not sure that it was intuitive that raising children was expensive, since the childcare was not paid for in money. From a certain perspective, that makes those offering the premise look bad.

comment by [deleted] · 2012-08-15T13:40:26.771Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/AI_effect seems to detail a weak argument against AI. I was going to sum it up, but the Wikipedia page was doing a better job than I was, so I'll just mention a few quotes from the beginning of the article.

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.

Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'."[1] AI researcher Rodney Brooks complains "Every time we figure out a piece of it, it stops being magical; we say, Oh, that's just a computation."[2]

Replies from: djcb
comment by djcb · 2012-08-15T15:18:48.411Z · LW(p) · GW(p)

That argument is primarily about what the word AI means, rather than an argument against AI as a phenomenon.

Replies from: None
comment by [deleted] · 2012-08-15T15:50:03.433Z · LW(p) · GW(p)

That's true, and unfortunately you could link it back to an argument about the phenomenon relatively straightforwardly by saying something like "AI will never be developed because anything technology does is just a computation, not thinking."

In fact, laying out the argument explicitly just shows how weak it is, since it's essentially just asserting AI is impossible by definition. Yet there are still people who would agree with the argument anyway. For instance, I was looking up an example of a debate about the possibility of AI (linked here: http://www.debate.org/debates/Artificial-Intelligence-is-impossible/1/ ), and one side said:

"Those are mere programs, not AI." Now, later, the person said "Yes but in your case, Gamecube or Debate.org is simply programming, not AI. There is a difference between simple programming and human-like AI." and then: "This is not learning. These devices are limited by their programming, they cannot learn."

But I suppose my point is that this first gets summed up with an extremely weak lead-in argument which is essentially "You are wrong by definition!", which then has to be peeled back to get to a content argument like "Learning", "Gödel" or "Free Will".

And that it happens so often it has its own name rather than just being an example of a No True Scotsman.

Replies from: djcb
comment by djcb · 2012-08-15T18:24:13.798Z · LW(p) · GW(p)

The most famous proponent of this "those are mere programs" view may be John Searle and his Chinese Room. I wouldn't call that the weakest argument against AI, although I think his argument is flawed.

Replies from: Dentin, Dolores1984
comment by Dentin · 2012-08-15T19:47:21.967Z · LW(p) · GW(p)

Many years ago when I first became interested in strong AI, my boss encouraged me to read Searle's Chinese Room paper, saying that it was a critically important criticism and that any attempt at AI needed to address it.

To this day, I'm still shocked that anyone considers Searle's argument meaningful. It was pretty clear, even back then with my lesser understanding of debate tactics, that he had simply 'defined away' the problem. That I had been told this was a 'critically important criticism' was even more shocking.

I've since read critical papers with what I would consider a much stronger foundation, such as those claiming that without whole-body and experience simulation, you won't be able to get something sufficiently human. But the Searle category of argument still seems to be the most common, in spite of its lack of content.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-08-15T19:58:34.323Z · LW(p) · GW(p)

He didn't define away the problem; his flaw wasn't a tautological one. The fatal flaw he introduced was creating a computational process and then substituting himself in for that computational process when it came time to evaluate whether that process "understood" Chinese. Since he's a component of the process, it doesn't matter whether -he- understands Chinese, only whether the -process- understands Chinese.

comment by Dolores1984 · 2012-08-15T20:11:55.851Z · LW(p) · GW(p)

Every time I read something by Searle, my blood pressure rises a couple of standard deviations.

Replies from: djcb
comment by djcb · 2012-08-16T05:54:56.595Z · LW(p) · GW(p)

One has to commend Searle, though, for coming up with such a clear example of what he thinks is wrong with the then-current model of AI. I wish all people could formulate their philosophical ideas, right or wrong, in such a fashion. Even when they are wrong, they can be quite fruitful, as can be seen in the many papers still referring to Searle and his Chinese Room, or, even more famously, to the EPR paradox paper.

comment by Irgy · 2012-08-16T02:21:52.296Z · LW(p) · GW(p)

You know what I'd like to see? A strong argument for or against human level AI.

I read the SI's brief on the issue, and all I discovered was that they did a spectacularly good job of listing all of the arguments for and against and demonstrating why they're all manifestly rubbish. The overall case seemed to boil down to "There's high uncertainty on the issue, so we should assume there's some reasonable chance". I'm not saying they're wrong, but it's a depressing state of affairs.

comment by djcb · 2012-08-15T15:24:38.380Z · LW(p) · GW(p)

Perhaps in the introduction (or title?) it should be mentioned that AI in the context of the article means human-level AI.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-08-15T17:34:20.463Z · LW(p) · GW(p)

Thanks, added clarification.

comment by timtyler · 2012-08-16T09:58:49.875Z · LW(p) · GW(p)

But very rarely do any of these predictors try to show why having computers with, say, the memory capacity or the FLOPS of a human brain, will suddenly cause an AI to emerge.

Legg put the matter this way:

more computer power makes solving the AGI design problem easier. Firstly, more powerful computers allow us to search larger spaces of programs looking for good algorithms. Secondly, the algorithms we need to find can be less efficient, thus we are looking for an element in a larger subspace.
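
As a toy illustration of both effects (entirely my own construction, not Legg's): treat "programs" as bitstrings scored by a made-up fitness function, and "hardware" as the number of candidates you can afford to evaluate. Doubling the budget either pushes the exhaustive search one bit deeper or lets you tolerate a search that wastes half its evaluations.

```python
# Toy sketch of "more compute => larger searchable program space".
# Everything here is invented for illustration: "programs" are bitstrings,
# the fitness function is trivial, and the compute budget is just a cap on
# how many candidates we can evaluate.
from itertools import product

TARGET = (1, 0, 1, 1, 0, 1)

def fitness(program):
    """Hypothetical score: number of bits matching a fixed target pattern."""
    return sum(p == t for p, t in zip(program, TARGET))

def best_program(max_evaluations):
    """Enumerate bitstrings, shortest first, until the budget runs out."""
    evaluated, best, best_score, length = 0, None, -1, 1
    while True:
        for candidate in product((0, 1), repeat=length):
            if evaluated >= max_evaluations:
                return best, best_score, length
            evaluated += 1
            score = fitness(candidate)
            if score > best_score:
                best, best_score = candidate, score
        length += 1  # each extra bit doubles the number of candidates

for budget in (10, 100, 1_000, 10_000):
    prog, score, reached = best_program(budget)
    print(f"budget {budget:>6}: reached programs of length {reached}, best score {score}")
```

A bigger budget reaches longer programs; equivalently, the same extra budget could cover for a less efficient search procedure.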

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-08-16T12:09:04.346Z · LW(p) · GW(p)

Trivially true, but beyond "we are closer to AGI today than we were yesterday", what does that give us?

Replies from: DaFranker, timtyler
comment by DaFranker · 2012-08-16T15:59:52.273Z · LW(p) · GW(p)

It gives us... not much.

It gives less informed audiences information that is actually novel to them, because they have not already mastered the previous inferential step of "Doing the same amount/quality of stuff on a more powerful computer is easier", which depends on understanding the very idea that programs can and do get optimized to work on weaker machines or to do things faster on current machines.

That's even assuming the audience already has all the inferential steps before that, e.g. "Programming does not involve using an arcane language to instruct electron-monsters to work harder on maths so that you can be sure they won't make mistakes while doing multiplication" and "Assigning x the same value twice in a row just to make sure the computer did it correctly is not how programming is supposed to work".

For this, I'll refer to insanely funny anecdotal evidence. I've seen cases just as bad as this happen personally, so I'm weighing in favor of those cases being true; together they form relevant evidence that people do, in fact, know very little about this and often fail to close the inferential gap. People like hitting the Ignore button, I suppose.

comment by timtyler · 2012-08-16T22:54:11.192Z · LW(p) · GW(p)

Well, it illuminates the shape of the trajectory - showing a "double whammy" effect of better hardware. Indeed, there's something of a third "whammy", since these processes apply iteratively as we go along, producing smarter search algorithms that better prune the junk out of the search space.

comment by Kawoomba · 2012-08-15T21:18:32.450Z · LW(p) · GW(p)

Moore's Law hence AI!

How is that such a weak argument? I'm all for smarter algorithms - as opposed to just increasing raw computing power - but given the algorithms that are already in existence (e.g. AIXItl, others) we'd strongly expect - based on theoretical results - there to exist some hardware threshold that, once crossed, would empower even the current algorithms sufficiently for an AGI-like phenomenon to emerge.

Since we know that exponential growth is, well, quite fast, it seems a sensible conclusion to say "(If) Moore's Law, (then eventually) AGI", without even mandating more efficient programming. That, or dispute established machine learning algorithms and theoretical models. While the software side is the bottleneck, it is one that scales with computing power and can thus be compensated for.

Of course smarter algorithms would greatly lower the aforementioned threshold, but if (admittedly a big if) Moore's Law were to hold true for a few more iterations, that might not be as relevant as we assume it to be.

The number of steps for current algorithms/agents to converge on an acceptable model of their environment may still be very large, but compared to future approaches, we'd expect that to be a difference in degree, not in kind. Nothing that some computronium shouldn't be able to compensate for.

This may be important because as long as there's any kind of consistent hardware improvement - not even exponential - that argument would establish that AGI is just a matter of time, not some obscure eventuality.
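
To put a rough number on that (a back-of-the-envelope sketch; the gap sizes and the ~18-month doubling time are illustrative assumptions, not figures from anyone in this thread): if the hardware shortfall is a fixed factor, the number of doubling cycles needed is just its base-2 logarithm.

```python
import math

def years_to_close_gap(gap_factor, doubling_time_years=1.5):
    """Years of Moore's-law-style doubling needed to cover a fixed hardware gap.

    gap_factor: how many times more computing power is needed than exists today
    doubling_time_years: assumed doubling period (the classic ~18 months)
    """
    return math.log2(gap_factor) * doubling_time_years

# Illustrative gaps only.
for gap in (1e3, 1e6, 1e12):
    print(f"{gap:.0e}x short of the threshold -> ~{years_to_close_gap(gap):.0f} years of doubling")
```

On these assumptions, any fixed multiplicative gap is closed within decades.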

Replies from: CarlShulman, Stuart_Armstrong
comment by CarlShulman · 2012-08-15T21:39:17.915Z · LW(p) · GW(p)

(e.g. AIXItl,

Moore's Law is not enough to make AIXI-style brute force work. A few more orders of magnitude won't beat combinatorial explosion.

Replies from: Kawoomba
comment by Kawoomba · 2012-08-15T21:56:16.636Z · LW(p) · GW(p)

Assuming the worst case on the algorithmic side - a standstill - the computational cost, even that of a combinatorial explosion, remains constant. The gap can only narrow. That makes it a question of how many doubling cycles it would take to close it. We're not necessarily talking desktop computers here (disregarding their goal predictions).

Exponential growth with such a short doubling time with some unknown goal threshold to be reached is enough to make any provably optimal approach work eventually. If it continues.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-08-15T22:36:14.265Z · LW(p) · GW(p)

There is probably not enough computational power in the entire visible universe (assuming maximal theoretical efficiency) to power a reasonable AIXI-like algorithm. A few steps of combinatorial growth make mere exponential growth look like standing very, very still.
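
To make the scales concrete (a sketch with invented numbers, none of them from this comment): brute-force search over programs of length n bits has about 2^n candidates, so each hardware doubling buys only one extra bit of searchable program length. Even taking a commonly cited ~10^120 upper bound on the operations the visible universe could ever perform, exhaustive search caps out at programs of roughly 400 bits.

```python
import math

def searchable_program_length(total_operations):
    """Longest n such that exhaustively trying all 2**n programs fits the budget."""
    return int(math.log2(total_operations))

TODAY_OPS = 1e20       # made-up stand-in for "a large cluster running for a while"
UNIVERSE_OPS = 1e120   # commonly cited upper bound on ops available to the
                       # whole visible universe (an outside assumption)

print(searchable_program_length(TODAY_OPS))     # ~66 bits
print(searchable_program_length(UNIVERSE_OPS))  # ~398 bits
```

Moore's law adds those bits only linearly in time, which is why exponential hardware growth barely dents a combinatorial search space.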

Replies from: TimS
comment by TimS · 2012-08-16T00:54:46.442Z · LW(p) · GW(p)

Changing the topic slightly, I always interpreted the Godel argument as saying there weren't good reasons to expect faster algorithms - thus, no super-human AI.

As you implied, the argument that Gödelian issues prevent human-level intelligence is obviously disproved by the existence of actual humans.

Replies from: CarlShulman
comment by CarlShulman · 2012-08-16T01:29:18.950Z · LW(p) · GW(p)

Who would you re-interpret as making this argument?

Replies from: TimS
comment by TimS · 2012-08-16T15:31:37.055Z · LW(p) · GW(p)

It's my own position - I'm not aware of anyone in the literature making this argument (I'm not exactly up on the literature).

Replies from: CarlShulman
comment by CarlShulman · 2012-08-16T20:49:39.941Z · LW(p) · GW(p)

Then why write "I...interpreted the Godel argument" when you were not interpreting others, and had in mind an argument that is unrelated to Godel?

comment by Stuart_Armstrong · 2012-08-15T22:30:37.373Z · LW(p) · GW(p)

And there you've given a better theory than most AI experts. It's not "Moore's law + reasonable explanation, hence AI" that's weak, it's just Moore's law on its own...

comment by OrphanWilde · 2012-08-15T13:16:13.920Z · LW(p) · GW(p)

While the internal complexity of software has increased in pace with hardware, the productive complexity has increased only slightly; I am much more impressed by what was done in software twenty years ago than what is being done today, with a few exceptions. Too many programmers have adopted the attitude that the efficiency of their code doesn't matter because hardware will improve enough to offset the issue in the timeframe between coding and release.

comment by Luke_A_Somers · 2012-08-15T16:04:17.999Z · LW(p) · GW(p)

At least the Eder quote refers to sheer power - you'll have to provide more for it to come across as an argument for AI.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-08-16T08:34:01.554Z · LW(p) · GW(p)

When someone claims that computers will be as powerful as a brain, what could this be referring to if not intelligence? What "power" has the brain got otherwise?

Replies from: randallsquared
comment by randallsquared · 2012-08-16T11:44:32.448Z · LW(p) · GW(p)

Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.

Replies from: DaFranker
comment by DaFranker · 2012-08-16T16:18:38.882Z · LW(p) · GW(p)

This is true, but possibly not quite exactly the way you intended. "Most people" (AKA everyone I've talked to about this who is not a programmer or doesn't have related IT experience) will automatically associate computing power with "power".

Humans have intellectual "power", since their intellect allows them to build incredible tools, like computers. If we give computers more ((computing) power => "power" => ability to affect environment, reason and build useful tools), they will "obviously become more intelligent".

It seems to me like a standard symbol problem, unfortunately much too common even among people who should know better.