Could Robots Take All Our Jobs?: A Philosophical Perspective
post by ChrisHallquist · 2013-05-24T22:06:54.688Z
Note: The following is a draft of a paper written with an audience of philosophers in mind. It focuses on answering objections to AI likely to be made by contemporary philosophers, but it is still likely to be of interest to readers of LessWrong for obvious reasons, and I've tried to avoid assuming any specialized philosophical background.
The title of this paper probably sounds a little strange. Philosophy is generally thought of as an armchair discipline, and the question of whether robots could take all our jobs doesn’t seem like a question that can be settled from the armchair. But it turns out that when you look at this question, it leads you to some other questions that philosophers have had quite a bit to say about.
Some of these other questions are conceptual. They seem like they could in fact be answered from the armchair. Others are empirical but very general. They seem like they require going out and looking at the world, but they’re not about specific technologies. They’re about how the universe works, how the mind works, or how computers work in general. It’s been suggested that one of the distinctive things about philosophical questions is their level of generality. I don’t know whether that’s true, but I do know that some of these general empirical questions are ones that philosophers have had quite a bit to say about.
The line of reasoning in this paper is similar to arguments discussed by Alan Turing (1950), Hubert Dreyfus (1992), and David Chalmers (2010). I won’t say much about those discussions, though, for reasons of space and also because I’ll frame the discussion a bit differently. I want to avoid debates about what “intelligence” is, what “information processing” is, or what it would mean to say the brain is a machine. Hence the focus on machines taking human jobs. I should mention that I’m not the first to suggest this focus; after writing the first draft of this paper I found out that one AI researcher had already proposed replacing the “Turing test” with an “employment test” (Nilsson 2005).
Here, I’m going to assume no job has “done by a human” as part of its definition. I realize that in the future, there may be demand for having jobs specifically done by humans. People might want to be served by human bartenders even if robot bartenders do just as good of a job, in the same way that some people prefer handmade goods even when mass-produced ones are cheaper for the same or better quality (Doctorow 2011). But having acknowledged this issue, I’m going to spend the rest of the paper ignoring it.
I’ll assume, for example, that “generated by a human” is not part of the definition of “mathematical proof.” That’s because machines replacing humans in a wide variety of roles that don’t have “done by a human” as part of their definition would still be a big deal.
1. What a simulation could do
Speaking of mathematical proofs, Daniel Dennett (1981) writes:
Suppose we made a computer simulation of a mathematician, and suppose it worked well. Would we complain that what we had hoped for was proofs, but alas, all we got instead was mere representations of proofs? But representations of proofs are proofs, aren’t they? It depends on how good the proofs represented are. When cartoonists represent scientists pondering blackboards, what they typically represent as proofs of formulae on the blackboard is pure gibberish, however “realistic” these figures appear to the layman. If the simulation of the mathematician produced phony proofs like those in the cartoons, it might still simulate something of theoretical interest about mathematicians – their verbal mannerisms, perhaps, or their absentmindedness. On the other hand, if the simulation were designed to produce representations of the proofs a good mathematician would produce, it would be as valuable a “colleague” – in the proof producing department – as the mathematician. That is the difference it seems, between abstract, formal products like proofs or songs... and concrete, material products like milk. On which side of this divide does the mind fall? Is mentality like milk or like a song?
If we think of the mind’s product as something like control of the body, it seems that this product is quite abstract.
Dennett’s point is not, as far as I can tell, very controversial. Proponents of Gödelian arguments against artificial intelligence, such as John Lucas and Roger Penrose (who I’ll talk about in a bit), might deny that it would be possible to create such a simulated mathematician. But it doesn’t seem like they’d deny that if you could create one, it would be able to produce real proofs and be a good “colleague” to a mathematician.
Similarly, John Searle, who’s known for his “Chinese Room” argument against AI (Searle 1980), would deny that the simulation really understands mathematics. But Searle (1997) has also written a critique of Penrose where he argued that a simulated mathematician like the one Dennett imagines “would be able to do in practice what mathematicians do,” perhaps producing a computer printout with proofs on it.
So Dennett’s point doesn’t seem very controversial, but I think it’s an important one, and it generalizes to lots of different kinds of work. For example, instead of talking about mathematicians and proofs, we could talk about academics and journal articles in general. Of course there’d be problems with producing papers that report new empirical results, but for papers based entirely on library research (like this one), you could feed the simulation a whole library worth of digitized books and journal articles, and then it could simulate the process of a professor researching and writing an original article using that library.
We can generalize even further. Today, lots of jobs involve work that is done or at least could be done entirely on a computer, and then sent to your boss or your customer over the internet. Long lists of such jobs can be found on websites like Elance.com, which allows people to hire freelancers and get work done without ever meeting them in person. And with all these jobs, it seems like, in principle at least, they could be done by a simulation analogous to Dennett’s simulated mathematician.
Now with manual labor, the problem is obviously more complicated, but it’s also more similar to the simulated mathematician case than you might think. For many manual tasks, the hard part of replacing humans with robots is not building the robot body, but building a computer brain that responds in the right way to information it’s getting from its sensors, its video cameras, its microphone ears. As Dennett says, control of the body seems to be an abstract product. So I think what I say in this paper could be applied to manual labor, but for simplicity I’ll focus on cases like the mathematician case, and focus on what computers could do rather than what robots could do. (Again, even if only mathematicians and philosophers were replaced by computers, this would still be significant, at least to mathematicians and philosophers.)
Now once you accept that simulations could replace human workers in a wide variety of roles, the question becomes whether it’s actually possible to create such simulations. That’s what I’m going to be focusing on for the rest of the paper.
2. Simulation and philosophy of mind
I want to be clear that, in this paper, when I imagine simulated workers, I’m not making any claims about what’s “really” going on with the simulation, aside from the end product. Feel free to mentally insert scare-quotes or the word “simulated” wherever you think it’s necessary to resolve a problem with something I say. What matters is that you agree that for certain kinds of end products, like mathematical proofs, there’s no distinction between a simulated version and the real thing. Everything else is irrelevant for my purposes.
In particular, I’m not saying anything about mental states like beliefs. That means that criticisms of “functionalism” or “the computational theory of mind” don’t matter here, because those terms typically refer to views about mental states (Levin 2010, Horst 2011).
But some other major ideas in philosophy of mind do matter here. One is physicalism, the view that everything is physical. Another is epiphenomenal dualism, which allows for things that aren’t physical but says they have no effect on the physical. This is roughly the view of David Chalmers (1996); Chalmers quibbles a bit about terminology, but he’s at least close enough to being an epiphenomenalist that we can treat him as one here.
A third position we’ll need to deal with here is interactionist dualism, which allows for things that aren’t physical and do have an effect on the physical. René Descartes, who thought that the soul influences the body through the pineal gland, is probably the most famous example. A more recent example is the neuroscientist John Eccles, who in the 1990s proposed a theory of the mind involving what he called “psychons.”
Though I don’t hear it put this way very often, I think modern physicalists and epiphenomenalists would agree with the following claim, which I’ll call bottom-up predictability: if you know the initial states of the lowest-level parts of a physical system, along with the lowest-level laws that apply to those parts, then in principle you’ll be able to predict the behavior of that system, whether or not it’s practical to do so. Even if the laws aren’t deterministic, they’ll at least tell you the probability of the system behaving in different ways.
One way that this claim could be false, of course, is if souls or psychons or whatever could come in and affect the behavior of physical systems. Another way it could be false is if there were certain laws that affect the behavior of the low-level components only when they are part of some larger whole. For example, there might be a law that says when hydrogen and oxygen atoms are put together into water molecules, they behave in a certain way, and the way they behave when they’re put together into water molecules is actually different than you’d expect from knowing just the laws that apply to all atoms all the time.
This second way of rejecting bottom-up predictability shows up in C. D. Broad’s book The Mind and its Place in Nature, which was published in 1925. Within just a couple years of that book’s publication, though, physicists started giving accounts of chemical phenomena in quantum mechanical terms. And our ability to confirm that molecules behave as the laws of quantum mechanics would predict has increased over time, as computers have gotten more powerful and approximation techniques have improved.
We don’t have the computing power to rigorously check bottom-up predictability for larger systems like cells, much less entire organisms, but the scientists I’ve heard talk about this issue seem quite confident in the idea. As someone who came very close to getting a degree in neuroscience before deciding to become a philosopher, I can say that it seems to be the working assumption of molecular biology and neuroscience, and personally I’m confident that this assumption is just going to continue to be confirmed. (I don’t have a general philosophical argument here; I’m just going by the sum of what I know about physics, chemistry, molecular biology, and neuroscience.)
Now I said that I’m not saying anything about mental states. Another thing I’ve tried to avoid saying anything about is the exact relationship between the low-level physical laws and higher-level laws and phenomena. There are philosophers who call themselves “non-reductive physicalists” or “non-reductive materialists” who might deny that the low-level physical laws fully explain the higher-level stuff, but don’t, if I understand them correctly, mean to deny bottom-up predictability.
For example, Hilary Putnam (1980) has a paper where he asks how we might explain the fact that a particular square peg fits in a square hole, but not a round one. He argues that even if you could compute all possible trajectories of the peg through the holes and deduce from just the laws of quantum mechanics that the square peg will never pass through the round hole, that deduction wouldn't be an explanation of why the peg won't fit.
What counts as an explanation doesn’t matter here, though. All that matters is whether you could, in principle, use the low-level physical laws (like quantum mechanics) to make predictions in the way Putnam talks about. And Putnam’s non-reductive materialism isn’t about denying you could do that. It isn’t about denying bottom-up predictability. As far as I can tell, very few people today are going to deny bottom-up predictability, unless they’re interactionist dualists.
The significance of bottom-up predictability is that if it’s true, and if the lowest-level physical laws allow any physical system to be simulated (at least in principle), then human brains and human bodies can be simulated (at least in principle), and a wide variety of human workers can be replaced by simulations (at least in principle). From what I understand, it does seem like the laws of physics allow physical systems in general to be simulated (at least in principle). I don’t know if anyone knows for sure whether that’s true, but it would be important if it were true.
I keep having to say “in principle” here because even if in principle you could use quantum mechanics to predict people’s behavior or at least get a good idea of the probabilities, the amount of computing power you’d need would be so ridiculously huge that doing that would probably never become practical. The idea of simulating an entire mathematician at the quantum-mechanical level is just a thought-experiment.
Near the end, I’ll talk about what might be more realistic ways to simulate a mathematician. But first I want to talk about two important arguments which both suggest that doing that might be impossible even in principle.
3. Gödelian arguments against AI
In 1936, mathematician Alan Turing described a certain kind of theoretical machine called a Turing machine, which can be thought of as a theoretical model for a programmable digital computer. (That’s anachronistic, because Turing’s paper came before modern digital computers, but that’s probably the easiest way to explain the idea to a modern audience.)
Turing proved that some Turing machines can act as universal Turing machines, meaning that with the right program they can simulate any other Turing machine, which amounts to being able to do anything any Turing machine can do. (Notice the parallel to the worker-simulation thought-experiment.)
The fact that a universal Turing machine can do anything any Turing machine can do helps explain why modern computers are so wonderfully flexible. Note, however, that when we talk about what Turing machines can do, generally we’re only saying what they can do in some arbitrary but finite amount of time, not what they can do in a reasonable amount of time. So there are some things that modern computers can’t do because they aren’t fast enough. And again, you need the right program, and in some cases we haven’t solved the programming problem.
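To make the idea a bit more concrete, here is a minimal sketch of a Turing machine simulator in Python (the particular machine and its transition table are invented for illustration, not taken from Turing): an ordinary program that just steps through whatever transition table it is handed, which is essentially the trick a universal Turing machine relies on.

```python
# A minimal Turing machine simulator: a sparse tape, a transition table
# mapping (state, symbol) -> (new symbol, move, new state), and a loop.
def run_turing_machine(transitions, tape, start_state, accept_states, max_steps=10_000):
    tape = dict(enumerate(tape))   # position -> symbol
    state, head = start_state, 0
    for _ in range(max_steps):
        if state in accept_states:
            # Read off the tape contents in order of position.
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(head, "_")          # "_" is the blank symbol
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    raise RuntimeError("did not halt within max_steps")

# Example machine: flip every bit of a binary string, then halt at the blank.
flipper = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "done"),
}

print(run_turing_machine(flipper, "10110", "scan", {"done"}))  # prints 01001_
```

Feeding the same simulator a different transition table makes it behave like a different machine; that is the sense in which one suitably programmed machine can stand in for any of the others, setting aside questions of speed.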
Furthermore, there are some limitations to what Turing machines can do even in principle. “In principle” here means “with some finite amount of time and the right program.” Gödel's theorem shows that for any consistent formal system powerful enough to express basic arithmetic, there will always be some true statement that can’t be proven within the system. Since Turing machines are logically equivalent to formal systems, this limitation applies to Turing machines as well. This leads to the argument that the human mind doesn’t have this limitation and therefore can’t be a Turing machine.
Turing (1950) considered and rejected this argument in his paper “Computing Machinery and Intelligence,” which is famous for having proposed the “Turing test” of whether a particular machine can think. Turing called this argument “the mathematical objection.” Later, John Lucas (1961) defended the argument in a paper where he wrote:
Gödel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines.
This argument makes a number of assumptions. Since Gödel's theorem only applies to consistent formal systems, we must assume that if the mind is a formal system, it is consistent. Also, since different formal systems have different statements which they can’t prove, it isn’t enough to say we can see the truth of some statements that certain formal systems can’t prove. It has to be the case that for any formal system, we will be able to see the truth of at least one statement the system can’t prove. Finally, Lucas implicitly assumes that if the mind is a formal system, then our “seeing” a statement to be true involves the statement being proved in that formal system.
These assumptions can all be challenged, and Lucas responds to some of the possible challenges. In particular, he spends quite a bit of time arguing that the fact that humans make mistakes does not mean we could be represented by an inconsistent formal system.
After Lucas’ original article, similar arguments against the possibility of AI were defended by Roger Penrose in his books The Emperor’s New Mind and Shadows of the Mind, and also in a response to critics published in the online journal Psyche.
Many of the criticisms of Penrose published in Psyche are worth reading, particularly the one by Chalmers (1995). But I’m not going to recap the entire debate because I don’t have much to say about it that hasn’t already been said. Instead, I’m just going to comment on one thing Penrose does in his reply in Psyche (Penrose 1996).
Penrose emphasizes that he’s interested in whether there could ever be a formal system that, “encapsulates all the humanly accessible methods of mathematical proof.” In response to the point that humans make mistakes, he says:
I fully accept that individual mathematicians can frequently make errors, as do human beings in many other activities of their lives. This is not the point. Mathematical errors are in principle correctable, and I was concerned mainly with the ideal of what can indeed be perceived in principle by mathematical understanding and insight... The arguments given above... were also concerned with this ideal notion only. The position that I have been strongly arguing for is that this ideal notion of human mathematical understanding is something beyond computation.
Even if Penrose were right about this, it’s not clear that this would mean that simulation of individual humans is impossible. A simulation doesn’t have to claim to encapsulate “all the humanly accessible methods of mathematical proof”; it just needs to be a good simulation of an individual mathematician. It doesn’t even need to say anything directly about mathematics. To use the implausible thought-experiment, it might only talk about the subatomic particles that make up the mathematician.
This is John Searle’s (1997) response to Penrose, except that instead of subatomic particles, Searle imagines that the simulation would refer to things like neurotransmitters and synaptic clefts and cell assemblies, which is probably a more realistic assumption. (Again, more on the more realistic assumptions later.)
4. Could the brain be a hypercomputer?
Another line of thought that’s somewhat similar to Gödelian arguments against AI comes from the philosophers Diane Proudfoot and Jack Copeland, who in 1999 coined the term “hypercomputation” to refer to the idea that it might be possible to build a machine that’s similar to a modern computer but which can do things no Turing machine can do (Copeland 2000).
In a sense, we already know how to build some kinds of machines like this. Strictly speaking, Turing machines are defined to be deterministic; there’s no random element. But if you wanted to, you could build a device that generates truly random numbers by measuring the decay of a nuclear isotope, and then have a computer use the measurement in calculations. You could get a similar effect with a device that measures atmospheric noise (which is the method used by the website Random.org).
There are some situations where a random element like that might be useful, mainly because you want to keep other people from predicting how exactly your computer will behave (think slot machines or generating encryption keys). For the same reason, it wouldn’t be surprising if humans and other animals had evolved to have a random element in their behavior.
But as Turing himself pointed out, it’s possible to write a program that isn’t really random but merely appears random (Copeland 2000). Today, such programs are known as pseudo-random number generators and many programming languages have one built in. If you’re just learning programming, one exercise worth trying is to write a program that somehow uses the random number generator, like a simulated dice roller. Its behavior should appear random the first time you run the program, but depending on how you’ve written it, it may do exactly the same thing the second time you run it.
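For example, here is a minimal sketch of that dice-rolling exercise in Python (the seed value and function name are arbitrary choices for the example): because the generator is seeded with a fixed value, the “random” rolls come out identical on every run.

```python
import random

def roll_dice(n_rolls, seed=42):
    """Simulate n_rolls of a six-sided die using a pseudo-random generator.

    Because the generator is seeded with a fixed value, the sequence of
    'random' rolls is exactly the same every time the function is called.
    """
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n_rolls)]

print(roll_dice(5))  # five "random" rolls
print(roll_dice(5))  # identical rolls again, because the seed is fixed
```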
There are some well-known arguments from neuroscience that are supposed to show individual quantum events can’t play much of a role in the brain (Tegmark 2000), so it may be that human behavior has evolved to sometimes appear partly random using something like a pseudo-random number generator rather than genuine quantum randomness. In any case, the point is that even if human behavior included a genuinely random element (or even a non-deterministic form of free will that looks random from the outside, if you believe in such things), we already know how to build machines that can simulate that.
Some things, however, can’t be done by any machine we know how to build. Famously, a Turing machine running a given program may or may not eventually “halt,” and there’s no way to make a Turing machine that will take a description of any other Turing machine and its input and correctly determine whether that machine will halt. This is known as the halting problem, and we don’t know how to build any machine that could solve it. And it’s this sense of hypercomputation, of machines with abilities beyond those of any machines we know how to build, that’s most interesting.
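The standard way of seeing why no such machine can exist is a diagonalization argument, sketched below in Python; the `halts` function is hypothetical (no one knows how to write it in general), and the point of the sketch is just that its existence would lead to a contradiction.

```python
# Hypothetical: suppose someone handed us a function that could decide,
# for any function f and argument x, whether f(x) eventually halts.
def halts(f, x):
    raise NotImplementedError("no such general procedure exists")

# Given such a function, we could build the following troublemaker:
def troublemaker(f):
    if halts(f, f):      # would f halt if run on itself?
        while True:      # ...then loop forever
            pass
    else:
        return           # ...otherwise halt immediately

# Now ask: does troublemaker(troublemaker) halt? If halts() says yes, it
# loops forever; if halts() says no, it halts. Either answer is wrong, so
# no correct, fully general halts() can exist.
```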
Unlike Penrose, Proudfoot and Copeland don’t claim to know that the brain does anything Turing machines can’t, but Copeland (2000) treats this as an important possibility, and they’ve claimed as recently as last year that “it remains unknown, however, whether hypercomputation is permitted or excluded by real-world physics” (Proudfoot and Copeland 2012). So they’re claiming it might be true that computers as we know them today are in principle incapable of simulating a human, but they’re not claiming that’s definitely true. They don’t think we know.
They do, however, assume that whatever humans do, we do through physical means, which means that if humans do hypercomputation, then it should also be possible to build a physical device that does hypercomputation. Of course, since we have no idea how to do that right now, humans doing hypercomputation would put full simulation of human beings a lot further off into the future.
While I don’t know of any arguments that show that our current understanding of physics totally rules out hypercomputation, there seems to be a tendency for proposals for hypercomputation to rely on implausible physical assumptions, especially when applied to the brain. For example, if you could somehow get access to infinite time, you could find out if any Turing machine halts by running it for an infinite length of time and seeing what happens. It’s been suggested that the theory of relativity could allow spacetime to be bent in a way that gives you the necessary infinite time (Ord 2006). Whether or not that’s actually possible, though, it seems unlikely that our brains bend spacetime in that way.
Similarly, there are some theoretical proposals for how an analog computer (as opposed to a digital computer, which is what modern computers are) might be able to do things like solve the halting problem, and it’s sometimes suggested that since the brain is analog, maybe the brain can do things like solve the halting problem. But these proposals tend to assume a device with unlimited precision (Cotogno 2003, Davis 2004, Ord 2006). It’s not clear that it’s physically possible to build a device with unlimited precision, and there’s good reason to think the brain’s precision is limited. Chris Eliasmith (2001) cites an estimate that “neurons tend to encode approximately 3-7 bits of information per spike,” which is less than a single byte.
5. Hardware and software issues
I’ve been arguing that it’s probably possible, in principle, to simulate mathematicians and philosophers and so on using the kinds of devices we already know how to build. Again, by “in principle” I mean not ruled out by the kind of philosophical and mathematical objections I’ve been talking about; possible with enough time and storage and with the right program. Since in the real world, it’s easier to design faster hardware than it is to design more time, it’s helpful to rephrase what I just said as, “what computers can do with powerful enough hardware and the right software.”
Maybe you think the hardware and software issues with AI were the real issues all along. However, if you think we’ll never replace human mathematicians with computers, but you don’t think that because of any in-principle objection like some version of the Gödelian argument, it’s worth being clear about that.
It’s less clear what philosophers can say about hardware and software issues, but I’ll mention a few things. As I’ve said, simulating a human being at the level of subatomic particles is unlikely to ever be practical, but it’s also unlikely to ever be necessary. As I’ve also noted, individual quantum events probably don’t play an important role in the brain. The behavior of any individual molecule or ion is also unlikely to matter; events in the brain generally involve too many molecules and ions for the behavior of any one of them to matter.
So there’s a limit to how much detail a simulation of an actual brain would need. Exactly where that limit lies is unclear; Anders Sandberg and Nick Bostrom (2008) have a paper where they list some possibilities. This approach to simulation would require a simulated body and simulated environment, but the level of detail required for those would probably be much less than the level of detail required for the brain. Also, for the purposes of replacing particular human workers with computer programs, it wouldn’t be necessary to simulate the inner workings of the brain closely at all. What matters is simulating the outward behavior, and maybe you could do that through something like machine learning algorithms trained on actual human behavior.
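As a toy illustration of that last idea (the data and the nearest-neighbor rule below are invented for the example, not a claim about how a real system would be built), one could record situation-action pairs from a human worker and have a program copy whatever the human did in the most similar recorded situation:

```python
# A toy sketch of imitating recorded behavior: store (situation, action)
# pairs observed from a human, then answer new situations by copying the
# action taken in the most similar recorded situation.
def most_similar_action(recorded, situation):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(recorded, key=lambda pair: distance(pair[0], situation))
    return action

# Hypothetical recorded data: each situation is a small feature vector,
# each action a label for what the human did in that situation.
recorded_behavior = [
    ((0.9, 0.1), "approve"),
    ((0.2, 0.8), "reject"),
    ((0.5, 0.5), "escalate"),
]

print(most_similar_action(recorded_behavior, (0.85, 0.2)))  # -> "approve"
```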
So the hardware requirements for a mathematician simulation wouldn’t be quite as large as it might have seemed, but they could still be very large. So can we meet them? The rapid improvements in hardware that we’re all used to by now have relied on making transistors smaller and smaller, but we’ll reach a point where the size of transistors is measured in atoms and we can’t do that anymore. There are some suggestions for how we might continue making improvements in computer hardware even after we reach that point (Sandberg and Bostrom 2008), but I’m not the right person to evaluate them.
Even if we develop powerful enough hardware and the software is out there somewhere in the space of all possible software, maybe we’ll never find the software because searching that space of possible software will turn out to be too hard. One thing to say about this is that people in robotics and AI have realized that it’s easy for computers to beat humans at things we didn’t evolve to do, like arithmetic, but hard for computers to beat humans at things we did evolve to do, like walking upright (Moravec 1988, Pinker 1997).
That suggests getting computers to do everything humans do is going to be difficult, but at least there’s a non-mysterious reason for the difficulty (because humans have evolved over millions of years to be really good at certain things without even realizing how good we are). It also suggests that there’s a limit to how hard getting computers to do everything humans do can be, because evolution produced humans just by going at the blind process of natural selection for a long enough period of time.
However, there is the possibility that humans are a fluke. Maybe you only get something like human brains once for every million or more planets where something like mammal brains have already developed. So maybe building things like humans is harder than a first-glance evolutionary argument would suggest. Carl Shulman and Nick Bostrom have a paper discussing that issue in greater depth (Shulman and Bostrom 2012).
Finally, I’ve focused on the ability of computers to take over human jobs, but that shouldn’t be interpreted as a claim that that’s going to be the most important effect of advances in AI. If computers become able to take over the vast majority of human jobs, there may be much bigger consequences than just unemployment (Hanson 2008, Muehlhauser and Salamon forthcoming). But that’s a topic for another day.
Bibliography
Chalmers, D. (1995). “Minds, Machines, and Mathematics.” Psyche, 2: 11-20.
---- (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
---- (2010). “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies, 17: 7-65.
Copeland, B. J. (2000). “Narrow Versus Wide Mechanism.” Journal of Philosophy, 96: 5-32.
Cotogno, P. (2003). "Hypercomputation and the Physical Church-Turing Thesis," British Journal for the Philosophy of Science, 54 (2): 181-224.
Davis, M. (2004). “The Myth of Hypercomputation.” in C. Teuscher (ed.) Alan Turing: Life and legacy of a great thinker. Springer, 195-212.
Dennett, D. (1981). “Reflections,” in D. Hofstadter and D. Dennett, The Mind’s I: Fantasies and Reflections on Self and Soul. Bantam Books, 92-95.
Doctorow, C. (2011). “Untouched By Human Hands.” MAKE, 25: 16.
Dreyfus, H. (1992). What Computers Still Can’t Do: A Critique of Artificial Reason. MIT Press.
Eliasmith, C. (2001). “Attractive and in-discrete: A critique of two putative virtues of the dynamicist theory of mind.” Minds and Machines, 11: 417-426.
Hanson, R. (2008). “Economics of the Singularity.” IEEE Spectrum, 45 (6): 45-50.
Horst, S. (2011). “The Computational Theory of Mind”, in E. N. Zalta (ed.) The Stanford Encyclopedia of Philosophy (Spring 2011 Edition). URL: http://plato.stanford.edu/archives/spr2011/entries/computational-mind/.
Levin, J. (2010). “Functionalism”, in E. N. Zalta (ed.) The Stanford Encyclopedia of Philosophy (Summer 2010 Edition). URL: http://plato.stanford.edu/archives/sum2010/entries/functionalism/.
Lucas, J. (1961). “Minds, Machines, and Gödel.” Philosophy, 36: 112-127.
Moravec, H. (1988). Mind Children: The Future of Robot and Human Intelligence. Harvard University Press.
Muehlhauser, L. and A. Salamon. (forthcoming). “Intelligence Explosion: Evidence and Import,” in A. Eden, J. Søraker, J. H. Moor, and E. Steinhart (ed.) Singularity Hypotheses: A scientific and philosophical assessment. Springer.
Nilsson, N. (2005). “Human-Level Artificial Intelligence? Be Serious!” AI Magazine, 26 (4).
Ord, T. (2006). “The Many Forms of Hypercomputation.” Applied Mathematics and Computation, 178: 143-153.
Penrose, R. (1996). “Beyond the Doubting of a Shadow.” Psyche 2.
Pinker, S. (1997). How the Mind Works. W. W. Norton & Company.
Proudfoot, D. and B. J. Copeland. (2012). “Artificial Intelligence,” in E. Margolis, R. Samuels, and S. Stich (ed.) The Oxford Handbook of Philosophy of Cognitive Science. Oxford University Press.
Putnam, H. (1980). “Philosophy and our mental life”, in Ned Block (ed.) Readings in the Philosophy of Psychology, Volume 1, 134-143.
Sandberg, A. and N. Bostrom. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008‐3, Future of Humanity Institute, Oxford University.
Searle, J. (1980). “Minds, Brains, and Programs.” Behavioral and Brain Sciences, 3 (3): 417-424.
---- (1997). The Mystery of Consciousness. The New York Review of Books.
Shulman, C. and N. Bostrom. (2012). “How Hard is Artificial Intelligence? Evolutionary Arguments and Selection Effects.” Journal of Consciousness Studies, 19: 103-130.
Tegmark, M. (2000). "Importance of quantum decoherence in brain processes." Physical Review E, 61 (4): 4194–4206.
Turing, A. (1950). “Computing Machinery and Intelligence.” Mind, 59: 433-460.
14 comments
comment by Nisan · 2013-05-24T23:12:17.322Z · LW(p) · GW(p)
Nice! By the way, I always thought the soul interacted with the body through the pineal gland, not the pituitary gland.
↑ comment by ChrisHallquist · 2013-05-25T00:24:22.226Z · LW(p) · GW(p)
Ah, you're right, thanks for catching that.
↑ comment by Jayson_Virissimo · 2013-05-24T23:45:52.665Z · LW(p) · GW(p)
If you are referring to Descartes' natural philosophy, then yes, it is the pineal gland where the action is (supposed to be).
comment by DSherron · 2013-05-28T17:15:58.561Z · LW(p) · GW(p)
It seems to me that this paper is overly long and filled with unnecessary references, even with a view towards philosophers who don't know anything from the field. It suffices to say that "bottom-up predictability" applied to the mind implies that we can build a machine to do the things which the mind does. The difficulty of doing so has a strict upper bound in the difficulty of building an organic brain from scratch, and is very probably easier than that (if any special physical properties are involved, they can very likely be duplicated by something much easier to build). Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can't do what the brain does is necessarily wrong (although you might need something that isn't a digital computer). Anything past that is an empirical technological issue which is not really in the realm of philosophy at all, but rather of computer scientists and physicists.
The sections on Godel's theorem and hyper computation could be summed up in a quick couple of paragraphs which reference each in turn as examples of objections that physical systems can't do what minds do, followed by the reminder that if you accept the mind as a physical system then clearly those objections can't apply. It feels like you just keep saying the same things over and over in the paper; by the end I was wondering what the point was. Certainly I didn't feel like your title tied into the paper very well at all, and there wasn't a strong thesis that stood out to me ("Given that the brain is a physical system, and physics is consistent (the same laws govern machines we find in nature, like the human brain, and those we build ourselves), then it must be possible in principle to build machines which can do any and all jobs that a human can do.") My proposed thesis is stronger than yours, mostly because machines have already taken many jobs (factory assembly lines) and in fact machines are already able to perform mathematical proofs (look up Computer Assisted Proofs -they can solve problems that humans can't due to speed requirements). I also use "machines" instead of AI in order to avoid questions of intelligence or the like - the Chinese Room might produce great works of literature, even if it is believed not to be intelligent, and that literature could be worth money.
Don't take this as an attack or anything, but rather as criticism that you can use to strengthen your paper. There's a good point here, I just think it needs to be brought out and given the spotlight. The basic point is not complex, and the only thing you need in order to support it is an argument that the laws of physics don't treat things we build differently just because we built them (there's probably some literature here for an Appeal to Authority if you feel you need one; otherwise a simple argument from Occam's Razor plus the failure to observe this being the case in anything so far is sufficient). You might want an argument for machines that are not physically identical to humans, but you'll lose some of your non-reductionist audience (maybe hyper computation is possible for humans but nothing else). Such an argument can be achieved through Turing-complete simulation, or in the case of hyper computation the observation that it should probably be possible to build something that isn't a brain but uses the same special physics.
↑ comment by ChrisHallquist · 2013-05-28T22:38:13.261Z · LW(p) · GW(p)
Thank you for the detailed commentary.
It seems to me that this paper is overly long and filled with unnecessary references, even with a view towards philosophers who don't know anything from the field.
You may be right about this, though I also want to be cautious because of illusion of transparency issues.
It suffices to say that "bottom-up predictability" applied to the mind implies that we can build a machine to do the things which the mind does.
What I want to claim is somewhat stronger than that; notably there's the question of whether *the general types of machines we already know how to build* can do the things the human mind does. That might not be true if, e.g., you believe in physical hypercomputation (which I don't, but it's the kind of thing you want to address if you want to satisfy stubborn philosophers that you've dealt with as wide a range of possible objections as possible).
Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can't do what the brain does is necessarily wrong (although you might need something that isn't a digital computer).
Again, it would be nice if it were that simple, but there are people who insist they'll have nothing to do with dualism but who advance the idea that computers can't do what the brain does, and they don't accept that argument.
The sections on Godel's theorem and hyper computation could be summed up in a quick couple of paragraphs which reference each in turn as examples of objections that physical systems can't do what minds do, followed by the reminder that if you accept the mind as a physical system then clearly those objections can't apply.
Again, slightly more complicated than this. Penrose, Proudfoot, Copeland, and others who see AI as somehow philosophically or conceptually problematic often present themselves as accepting that the mind is physical.
Your comment makes me think I need to be clearer about who my opponents are--namely, people who say they accept the mind is physical but claim AI is philosophically or conceptually problematic. Does that sound right to you?
↑ comment by DSherron · 2013-05-29T01:20:03.965Z · LW(p) · GW(p)
Do those same people still oppose the on-principle feasibility of the Chinese Room? I can understand why such people might have problems with the idea of a conscious AI, but I was not aware of a faction which thought that machines could never replicate a mind physically other than substance dualists. I'm not well-read in the field, so I could certainly be wrong about the existence of such people, but that seems like a super basic logic fail. Either a) minds are Turing complete, meaning we can replicate them, b) minds are hyper computers in a way which follows some normal physical law, meaning we can replicate them, or c) minds are hyper computers in a way which cannot be replicated (substance dualism). I don't see how there is a possible fourth view where minds are hyper computers that cannot in principle be replicated, but they follow only normal physical laws. Maybe some sort of material anti-reductionist who holds that there is a particular law which governs things that are exactly minds but nothing else? They would need to deny the in-principle feasibility of humans ever building a meat brain from scratch, which is hard to do (and of course it immediately loses to Occam's Razor, but then this is philosophy, eh?). If you're neither an anti-reductionist nor a dualist then there's no way to make the claim, and there are better arguments against the people who are. I don't really see much point in trying to convince anti-reductionists or dualists of anything, since their beliefs are uncorrelated to reality anyway.
Note: there are still interesting feasibility-in-real-life questions to be explored, but those are technical questions. In any case your paper would be well improved by adding a clear thesis near the start of what you're proposing, in detail.
Oh, and before I forget, the question of whether machines we can currently build can implement a mind is purely a question of whether a mind is a hyper computer or not. We don't know how to build those yet, but if it somehow was then we'd presumably figure out how that part worked.
comment by pjeby · 2013-05-29T03:47:26.316Z · LW(p) · GW(p)
This is known as the halting problem, and we don’t know how to build any machine that could solve it.
Humans can't "solve" it either, in that sense. We can pattern-recognize that some programs will halt or not halt, but there exist huge spaces of programs in between where we would be just as helpless to give a yes or no answer as any computer program.
I'm not sure what this should be considered evidence of, but somehow it seems relevant. ;-)
↑ comment by Nornagest · 2013-05-29T04:37:56.914Z · LW(p) · GW(p)
For that matter, it's perfectly possible to build algorithms that can accurately tell you whether or not some systems will halt -- certain types of infinite loop are easily machine-detectable, to give one simple example. It's doing it in the general case that's impossible.
comment by Cyan · 2013-05-26T04:06:57.290Z · LW(p) · GW(p)
↑ comment by ChrisHallquist · 2013-05-26T14:34:24.633Z · LW(p) · GW(p)
Thanks, fixed.
comment by figor888 · 2014-09-17T03:32:29.323Z · LW(p) · GW(p)
I certainly believe Artificial Intelligence can and will perform many mundane jobs that are nothing more than mindless repetition and even in some instances create art. That said, what about world leaders or positions that require making decisions that affect segments of the population in unfair ways, such as storage of nuclear waste, transportation systems, etc.?
To me, the answer is obvious, only a fool would trust an A.I. to make such high-level decisions. Without empathy, the A.I. could never provide a believable decision that any sane person should trust, no matter how many variables are used for its cost-benefit analysis.
↑ comment by ChrisHallquist · 2014-09-17T11:30:07.690Z · LW(p) · GW(p)
Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.
comment by Martin-2 · 2013-05-26T17:40:21.551Z · LW(p) · GW(p)
Finally, Lucas implicitly assumes that if the mind is a formal system, then our “seeing” a statement to be true involves the statement being proved in that formal system.
To me this seems like the crux of the issue (in fact, I perceive it to be the crux of the issue, so QED). Of course there are LW posts like Your Intuitions are not Magic, but surely a computer could output something like "arithmetic is probably consistent for the following reasons..." instead of a formal proof attempt if asked the right question.