Rigorous academic arguments on whether AIs can replace all human workers?

post by ChrisHallquist · 2012-08-29T07:30:55.073Z · LW · GW · Legacy · 13 comments


I believe that it will one day be possible for AIs to do all work currently done by humans. Probably most people at LessWrong believe this, and thanks to science fiction I wouldn't be surprised if most people in the United States and Europe believed it too.

But it's interesting to think about why we should believe this. When I ask myself that question, I can come up with arguments, but I have a hard time finding examples of those arguments being made rigorously in the academic literature. This is surprising, because it's a question with real practical import for humanity's future.

The only prominent example I can think of is an argument against the possibility, based on Gödel's theorem and similar mathematics: what Turing called "the Mathematical Objection", which has more recently been championed by Roger Penrose. But can anyone think of others?
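For readers who haven't met the objection, here is a rough reconstruction of its structure (my paraphrase, not a quotation of Turing or Penrose):

```latex
% The Mathematical Objection, sketched loosely (my reconstruction).
% Contested premise: a machine's mathematical output is the output of some
% consistent, recursively axiomatizable formal system $F$ extending arithmetic.
\begin{enumerate}
  \item (G\"odel) Any such $F$ has a sentence $G_F$ that is true but
        unprovable in $F$: $F \nvdash G_F$.
  \item Hence the machine can never correctly assert $G_F$.
  \item (Claimed) A human mathematician can see that $G_F$ is true.
  \item Therefore human mathematical insight is not captured by any such $F$.
\end{enumerate}
% The standard reply targets step 3: seeing that $G_F$ is true requires
% knowing that $F$ is consistent, which G\"odel's second theorem shows $F$
% itself cannot establish.
```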

13 comments


comment by fubarobfusco · 2012-08-29T08:15:42.282Z · LW(p) · GW(p)

There are a finite number of tasks that humans need performed in order to survive and prosper reasonably well. (Informal proof: there are finitely many humans, and each performs only finitely many tasks; some humans do prosper reasonably well; therefore performance of some finite number of tasks is sufficient to allow those humans to prosper.)
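In slightly more formal terms (my restatement of the same argument, treating "task" and "prospers" as primitives):

```latex
% A restatement of the informal proof; "task" and "prospers" are primitives.
Let $H$ be the set of humans, with $|H| < \infty$, and for each $h \in H$ let
$T_h$ be the set of tasks performed for $h$'s benefit, with $|T_h| < \infty$.
Some $h^\ast \in H$ prospers reasonably well, and $h^\ast$'s prosperity
depends only on the tasks in $T_{h^\ast}$ being performed. Since
$\bigl|\bigcup_{h \in H} T_h\bigr| \le \sum_{h \in H} |T_h| < \infty$,
performance of a finite set of tasks suffices for humans to prosper.
```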

To assert that robots or programs cannot do all of these tasks implies that there is at least one such task that robots or programs cannot perform. (I am choosing not to use "AI" as a noun, to bracket questions such as general intelligence or consciousness.)

I would ask, then, which ones?

Replies from: Kawoomba, kilobug
comment by Kawoomba · 2012-08-29T12:57:08.642Z · LW(p) · GW(p)

If you construe "cannot" to also encompass "cannot, because they will be prevented from doing so", there will be a host of tasks (e.g. social roles, from pastor to psychiatrist) that remain out of reach for AI, either because customers don't accept AI performing the task (e.g. the backlash against the da Vinci robotic surgery system) or because AI is barred from performing it (e.g. strong unions pressuring for subsidies in the form of "may be done only by humans").

Replies from: fubarobfusco
comment by fubarobfusco · 2012-08-29T22:06:03.307Z · LW(p) · GW(p)

Sure. We don't take programs to be artists either, but we do have "procedurally generated content" in computer games, taking some of the place that would otherwise be filled by the work of a human artist. Even though the program is not a person and has no social role, it creates some economic value that would otherwise require a person. Procedurally generated stories don't seem so far off, either.

comment by kilobug · 2012-08-29T09:48:30.112Z · LW(p) · GW(p)

I do believe that AI will one day be able to do what we do, but I'd guess the answers people give to "which ones?" will be either "innovate" or "art".

The only argument I have that AI will one day be able to do it is that our brains are computers and could theoretically be simulated by computers, so there is no fundamental reason why an AI couldn't one day do all that we do. But that only rules out AI being absolutely impossible; it doesn't prove we'll actually succeed in building it.

comment by V_V · 2012-08-29T13:02:34.676Z · LW(p) · GW(p)

but I have a hard time finding examples of those arguments being made rigorously in the academic literature. This is surprising, because it's a question with real practical import for humanity's future.

Why does that seem surprising to you? You are trying to forecast future technological development, and there is no general rigorous way of doing it.

The best you can get are negative statements. Some are strict impossibility proofs (perpetual motion machines, faster-than-light travel, etc.); some are more informal implausibility arguments, like the ones advanced by Tom Murphy on the Do the Math blog.

Positive statements of the form "we will develop technology X", where X is a technology that doesn't already exist in at least prototype form, are wild speculations. Historically, such predictions, even when made by domain experts, turned out to be wrong more often than not; conversely, actual innovations were typically not predicted decades in advance.

Regarding human-level AI, some people (Penrose et al.) have tried to put forward impossibility arguments, but these are generally considered uncompelling and probably incorrect. Given our present understanding, it seems that AI is not theoretically impossible, but this tells us nothing about its practical feasibility.

comment by Metus · 2012-08-29T13:15:18.519Z · LW(p) · GW(p)

I will go the brutal route and argue that, assuming nature's laws are computable in the computer-science sense, we can simulate any given human brain, since it is part of nature. By simulating this particular brain, and noting that the brain is the seat of intelligence in a human, we have an AI in our hands, thus confirming that AI is possible.

In other words, I think simulating a human brain and the possibility of AI amount to the same thing, though not necessarily in a strong sense. I realize this is not an academic argument, but it might help you.
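A sketch of the key step, under loudly stated assumptions (discretizable time, finitely encodable states, computable dynamics; none of which physics hands us for free):

```latex
% Sketch (my formulation): computable physics implies a simulable brain.
Assume the brain's physical state at step $t$ is $s_t \in S$, each $s \in S$
has a finite encoding $\mathrm{enc}(s)$, and nature's laws give a computable
transition function $f : S \to S$ with $s_{t+1} = f(s_t)$. Then some Turing
machine $M$ satisfies $M(\mathrm{enc}(s_0), n) = \mathrm{enc}(f^{(n)}(s_0))$
for all $n$, so $M$ reproduces the brain's trajectory, and hence its
input/output behaviour. Whether the assumptions hold, and whether $M$ runs
fast enough to matter, are the open empirical questions.
```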

Replies from: DanArmak
comment by DanArmak · 2012-08-29T16:50:21.982Z · LW(p) · GW(p)

There's a long road from being possible (= not contradicting the laws of physics) to being probable (in our future).

Replies from: V_V
comment by V_V · 2012-08-30T09:22:28.113Z · LW(p) · GW(p)

Indeed. We can make a computer out of hydraulic valves, in principle, but I don't expect to be playing Halo Reach on it anytime soon.
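For the flavor of the in-principle claim: any substrate that implements a single NAND gate (hydraulic valves would do) can be composed into arbitrary Boolean circuits. A toy Python sketch of the composition step, purely illustrative:

```python
# Toy illustration: NAND is universal, so any substrate that can implement
# it (hydraulic valves included) can, in principle, compute any Boolean
# function -- just absurdly slowly.

def nand(a: bool, b: bool) -> bool:
    """The one primitive gate; imagine it built from hydraulic valves."""
    return not (a and b)

# Standard gates built from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
    """Adds two bits, returning (sum, carry). A CPU is 'just' more of this."""
    return xor(a, b), and_(a, b)

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            s, c = half_adder(a, b)
            print(f"{int(a)} + {int(b)} = carry {int(c)}, sum {int(s)}")
```

Universality comes cheap; what hydraulics lacks is switching speed, which is why Halo Reach stays out of reach.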

comment by Randaly · 2012-08-29T08:54:58.362Z · LW(p) · GW(p)

As far as I can tell, most academic work on this calls the question the "Church-Turing Thesis" or, more specifically, the "Strong Church-Turing Thesis". (As the SEP points out here, the latter is actually a completely new thesis, distinct from the one actually advanced by Turing.) This is regarded as an empirical, open question in academia, though almost everybody agrees that it is true. (AFAICT this applies both to the regular and the strong thesis.) Many papers have mentioned the strong version of the thesis, but most thought about it deals with specific critiques, since the best positive arguments are simply to show AIs doing things people do. (There's also a lot of stuff that's flat-out irrelevant; see e.g.:

Nachum Dershowitz & Yuri Gurevich (2008). "A Natural Axiomatization of Computability and Proof of Church's Thesis." The Bulletin of Symbolic Logic. DOI: 10.2178/bsl/1231081370. http://research.microsoft.com/en-us/um/people/gurevich/Opera/188.pdf)

An additional specific critique: some (e.g. Searle and Block) have argued that machines cannot have a mind. Searle, for example, has argued that no machine can be conscious, via his Chinese Room argument:

Searle, John (1980). "Minds, Brains and Programs." Behavioral and Brain Sciences 3 (3): 417–457. DOI: 10.1017/S0140525X00005756.

Similarly, Dreyfus has argued, in effect, that most human thinking is unconscious, and that it will never be possible to program a computer to execute these sorts of unconscious thoughts. He wrote many articles and books conveying this and other critiques; in general, he claimed that early AGI researchers were making unwarranted philosophical assumptions. See here.

Replies from: ChrisHallquist, Manfred
comment by ChrisHallquist · 2012-08-29T13:47:55.794Z · LW(p) · GW(p)

I don't think the Church-Turing Thesis is quite equivalent, because someone might think (even without good reasons for thinking it) that some human behavior (say, "having mathematical insights") is not algorithmically computable.

As I understand Searle, his views aren't relevant here, because even if we got to the point where AIs can replace all human workers, Searle would still insist they aren't really thinking.

Dreyfus sounds interesting, though; I'll have to look into it.

comment by Manfred · 2012-08-29T14:27:31.063Z · LW(p) · GW(p)

Turing machines aren't even necessary; all that's necessary is that computing systems be understandable, and then buildable.

comment by Dr_Manhattan · 2012-08-29T12:49:57.468Z · LW(p) · GW(p)

There are two ways to interpret the motivation behind your question.

comment by lukeprog · 2013-04-08T01:41:25.543Z · LW(p) · GW(p)

an argument against the possibility, based on Gödel's theorem and similar mathematics: what Turing called "the Mathematical Objection", which has more recently been championed by Roger Penrose

Chapter 11 of Quantum Computing since Democritus is a succinct rebuttal to Penrose & company on this point. It also contains a nice steel-manning of Penrose.