Preface to a Proposal for a New Mode of Inquiry

post by Daniel_Burfoot · 2010-05-17T02:11:02.211Z · LW · GW · Legacy · 85 comments


Summary: The problem of AI has turned out to be a lot harder than was originally thought. One hypothesis is that the obstacle is not a shortcoming of mathematics or theory, but a limitation in the philosophy of science. This article is a preview of a series of posts that will describe how, by making a minor revision to our understanding of the scientific method, further progress can be achieved by establishing AI as an empirical science.

 

The field of artificial intelligence has been around for more than fifty years. If one takes an optimistic view of things, it's possible to believe that a lot of progress has been made. A chess program defeated the top-ranked human grandmaster. Robotic cars drove autonomously across 132 miles of the Mojave Desert. And Google seems to have made great strides in machine translation, apparently by feeding massive quantities of data to a statistical learning algorithm.

But even as the field has advanced, the horizon has seemed to recede. In some sense the field's successes make its failures all the more conspicuous. The best chess programs are better than any human, but Go is still challenging for computers. Robotic cars can drive across the desert, but they're not ready to share the road with human drivers. And Google is pretty good at translating Spanish to English, but still produces howlers when translating Japanese to English. The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

So what went wrong, and how to move forward? Most mainstream AI researchers are reluctant to provide clear answers to this question, so instead one must read between the lines in the literature. Every new paper in AI implicitly suggests that the research subfield of which it is a part will, if vigorously pursued, lead to dramatic progress towards intelligence. People who study reinforcement learning think the answer is to develop better versions of algorithms like Q-Learning and temporal difference (TD) learning. The researchers behind the IBM Blue Brain project think the answer is to conduct massive neural simulations. For some roboticists, the answer involves the idea of embodiment: since the purpose of the brain is to control the body, to understand intelligence one should build robots, put them in the real world, watch how they behave, notice the problems they encounter, and then try to solve those problems. Practitioners of computer vision believe that since the visual cortex takes up such a huge fraction of total brain volume, the best way to understand general intelligence is to first study vision.
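
For readers who haven't seen it, the reinforcement-learning program can be summarized by the tabular Q-learning update; the sketch below is just the textbook recipe, not anything specific to what I will propose, and the `env` interface (reset/step/actions) is a made-up stand-in for illustration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning: learn action values from reward feedback alone.

    `env` is assumed (hypothetically) to expose reset() -> state,
    step(action) -> (next_state, reward, done), and a list `env.actions`.
    """
    q = defaultdict(float)  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # temporal-difference update toward the bootstrapped target
            best_next = max(q[(next_state, a)] for a in env.actions)
            target = reward + gamma * best_next * (not done)
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = next_state
    return q
```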

Now, I have some sympathy for the views mentioned above. If I had been thinking seriously about AI in the 80s, I would probably have gotten excited about the idea of reinforcement learning. But reinforcement learning is now basically an old idea, as is embodiment (this tradition can be traced back to the seminal papers by Rodney Brooks in the early 90s), and computer vision is almost as old as AI itself. If these avenues really led to some kind of amazing result, it probably would already have been found.

So, dissatisfied with the ideas of my predecessors, I've taken some trouble to develop my own hypothesis regarding the question of how to move forward. And desperate times call for desperate measures: the long failure of AI to live up to its promises suggests that the obstacle is no small thing that can be solved merely by writing down a new algorithm or theorem. What I propose is nothing less than a complete reexamination of our answers to fundamental philosophical questions. What is a scientific theory? What is the real meaning of the scientific method (and why did it take so long for people to figure out the part about empirical verification)? How do we separate science from pseudoscience? What is Ockham's Razor really telling us? Why does physics work so amazingly, terrifyingly well, while fields like economics and nutrition stumble?

Now, my answers to these fundamental questions aren't going to be radical. It all adds up to normality. No one who is up-to-date on topics like information theory, machine learning, and Bayesian statistics will be shocked by what I have to say here. But my answers are slightly different from the traditional ones. And by starting from a slightly different philosophical origin, and following the logical path as it opened up in front of me, I've reached a clearing in the conceptual woods that is bright, beautiful, and silent.

Without getting too far ahead of myself, let me give you a bit of a preview of the ideas I'm going to discuss. One highly relevant issue is the role that other, more mature fields have had in shaping modern AI. One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident. To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood. Another influence, that should in principle be healthy but in practice isn't, comes from physics. Unfortunately, for the most part, AI researchers have imitated only the superficial appearance of physics - its use of sophisticated mathematics - while ignoring its essential trait, which is its obsession with reality. In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality. But theories of AI will not work like theories of physics. We'll see that AI can be considered, in some sense, the epistemological converse of physics. Physics works by using complex deductive reasoning (calculus, differential equations, group theory, etc) built on top of a minimalist inductive framework (the physical laws). Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations. In many ways, AI will come to resemble disciplines like botany, zoology, and cartography - fields in which the researchers' core methodological impulse is to go out into the world and write down what they see.

An important aspect of my proposal will be to expand the definitions of the words "scientific theory" and "scientific method". A scientific theory, to me, is a computational tool that can be used to produce reliable predictions, and a scientific method is a process of obtaining good scientific theories. Botany and zoology make reliable predictions, so they must have scientific theories. In contrast to physics, however, they depend far less on the use of controlled experiments. The analogy to human learning is strong: humans achieve the ability to make reliable predictions without conducting controlled experiments. Typically, though, experimental sciences are considered to be far harder, more rigorous, and more quantitative than observational sciences. But I will propose a generalized version of the scientific method, which includes human learning as a special case, and shows how to make observational sciences just as hard, rigorous, and quantitative as physics.

As a result of learning, humans achieve the ability to make fairly good predictions about some types of phenomena. It seems clear that a major component of that predictive power is the ability to transform raw sensory data into abstract perceptions. The photons fall on my eye in a certain pattern which I recognize as a doorknob, allowing me to predict that if I turn the knob, the door will open. So humans are amazingly talented at perception, and modestly good at prediction. Are there any other ingredients necessary for intelligence? My answer is: not really. In particular, in my view humans are terrible at planning. Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions. So a major difference in my approach as opposed to traditional AI is that the emphasis is on prediction through learning and perception, as opposed to planning through logic and deduction.
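
To make the claim concrete, here is a toy sketch of that decision loop; the three callables are hypothetical placeholders, and the point is that all of the difficulty hides inside the prediction step, not in the loop itself.

```python
def decide(situation, propose_plan, predict_outcome, score,
           n_candidates=10, good_enough=0.8):
    """Toy version of: invent a plan, predict what happens, act if it looks good.

    propose_plan, predict_outcome, and score are hypothetical placeholders;
    on my view, nearly all of the "magic" lives inside predict_outcome, which
    must be learned from perception, not in this trivial outer loop.
    """
    best_plan, best_score = None, float("-inf")
    for _ in range(n_candidates):
        plan = propose_plan(situation)              # invent a plan
        outcome = predict_outcome(situation, plan)  # imagine what would happen
        s = score(outcome)                          # does the predicted outcome seem good?
        if s > best_score:
            best_plan, best_score = plan, s
    return best_plan if best_score >= good_enough else None  # implement, or keep thinking
```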

As a final point, I want to note that my proposal is neither analogous to nor in conflict with theories of brain function like deep belief networks, neural Darwinism, symbol systems, or hierarchical temporal memories. My proposal is like an interface: it specifies the input and the output, but not the implementation. It embodies an immense and multifaceted Question, to which I have no real answer. But, crucially, the Question comes with a rigorous evaluation procedure that allows one to compare candidate answers. Finding those answers will be an awesome challenge, and I hope I can convince some of you to work with me on that challenge.
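
If it helps to have something concrete in mind, the kind of interface I mean might look like the following sketch; the names and the accuracy-based scoring are purely illustrative guesses, since spelling out the real evaluation procedure is exactly what the coming posts are for.

```python
from abc import ABC, abstractmethod

class CandidateAnswer(ABC):
    """Hypothetical interface: the input and output are fixed, the implementation is not."""

    @abstractmethod
    def fit(self, observations):
        """Build an internal model from raw observations (the inductive foundation)."""

    @abstractmethod
    def predict(self, new_input):
        """Return a prediction for an unseen input."""

def evaluate(candidate, train_data, test_pairs):
    """Placeholder evaluation: fit on one dataset, score predictions on another.

    test_pairs is a list of (input, actual_outcome) pairs; fraction-correct is
    only a stand-in for whatever rigorous metric the real proposal would use.
    """
    candidate.fit(train_data)
    hits = sum(candidate.predict(x) == y for x, y in test_pairs)
    return hits / len(test_pairs)
```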

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

85 comments

Comments sorted by top scores.

comment by Johnicholas · 2010-05-17T15:26:26.023Z · LW(p) · GW(p)

Here's an issue of style and presentation: Would you mind editing your text (or your future texts), striving to remove self-reference and cheerleading ("fluff")?

A small number of uses of "I/my" and colorful language ("amazing, terrifying, bright, beautiful, silent, immense, multifaceted") is reasonable, but the discipline of focusing almost entirely on the ideas being discussed helps both you and your readers understand what the ideas actually are.

As far as I can tell, the content of your post is "I will be posting over the next couple of weeks.", and the rest is fluff. Since you did invest some time in writing this post, you must have believed there was more to it. The fluff has either confused you (into believing this post was substantial) or confused me (preventing me from seeing the substantial arguments).

comment by SilasBarta · 2010-05-17T04:14:21.385Z · LW(p) · GW(p)

You maybe should have mentioned the earlier discussion of your idea on the open thread, in which I believed I spotted some critical problems with where you're going: you seem to be endorsing a sort of "blank slate" model in that humans have a really good reasoning engine, and the stimuli humans get after birth are sufficient to make all the right inferences.

However, all experimental evidence tells us (cf. Pinker's The Blank Slate) that we humans make a significantly smaller set of inferences from our sense data than are logically possible under the constraint of Occam's razor; there are grammatical errors that children never make in any language; there are expectations babies all have, at the same time, though none has gathered enough postnatal sense data to justify such inferences, etc.

I conclude that it is fruitless to attempt to find "general intelligence" by looking at what general algorithm would make the inferences human do, given postnatal stimuli. My alternative suggestion is to identify human intelligence as a combination of general reasoning and pre-encoding of environment-specific knowledge that humans do not have to entirely relearn after birth because the brain wiring-up in the womb already filters out inference patterns that don't win.

That knowledge can come from the "accumulated wisdom" of evolutionary history, meaning you need to account for how that data was transformed into a human's present internal model.

ETA: Wow, I was sloppy when I wrote this; hope the point was able to shine through. Typos and missing words corrected. Should make more sense now.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T10:51:55.123Z · LW(p) · GW(p)

The reason I didn't link to that discussion is that it was kind of tangential to what will be my main points. My goal is to understand the natural setting of the learning problem, not the specifics of how humans solve it.

Replies from: SilasBarta
comment by SilasBarta · 2010-05-17T14:22:13.010Z · LW(p) · GW(p)

But you've made assumptions that will keep you from finding that setting. Your approach already commits itself to treating humans as a blank slate. But humans aren't "blank slate with great algorithm"; they're "heavily formatted slate with respectable context-specific algorithm".

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T17:32:25.514Z · LW(p) · GW(p)

Let's postpone this debate until the main points become a bit more clear. I don't think of myself as "treating humans" at all, much less as a blank slate!

Replies from: SilasBarta
comment by SilasBarta · 2010-05-17T21:35:42.976Z · LW(p) · GW(p)

Could you at least give some signal of your idea's quality that distinguishes it from the millions with hopeless ideas who scream "You guys are doing it all wrong, I've got something that's just totally different from everything else and will get it right this time"?

Because a lot of what you've said so far isn't promising.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T23:22:11.511Z · LW(p) · GW(p)

Yikes, take it easy. When I said "let's argue", I meant let's argue after I've made some of my main points.

Replies from: SilasBarta, ocr-fork
comment by SilasBarta · 2010-05-18T00:07:33.928Z · LW(p) · GW(p)

Yes, I read that part of your comment. But having posted on the order of ~1500 words on your idea by now (this article + our past exchange), I still can't find a sign of anything promising, and you've had more than enough space to distinguish yourself from the dime-a-dozen folks claiming to have all the answers on AI.

I strongly recommend that you look at whatever you have prepared for your next article, and cut it down to about 500 words in which you get straight to the point.

LW is a great site because of its frequent comments and articles from people who have assimilated Eliezer Yudkowsky's lessons on rationality; I'd hate to see it turn into a platform for just any AI idea that someone thinks is the greatest ever.

comment by ocr-fork · 2010-05-17T23:42:06.324Z · LW(p) · GW(p)

Which will be soon, right?

comment by Mass_Driver · 2010-05-17T03:49:36.096Z · LW(p) · GW(p)

I'm intrigued and looking forward to reading your articles. I suggest you change your title-writing algorithm, though. To my ears, "Preface to a Proposal for a New Mode of Inquiry" sounds like a softcover edition of a book co-authored by a committee of the five bastard stepchildren of Kant and Kafka.

comment by Vladimir_Nesov · 2010-05-17T10:50:02.442Z · LW(p) · GW(p)

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

"Computer science is no more about computers than astronomy is about telescopes." -- E. Dijkstra

Replies from: djcb, Vladimir_M
comment by djcb · 2010-05-17T21:12:25.126Z · LW(p) · GW(p)

Dijkstra did take a bit of a narrow view of computer science though, or maybe he was being a bit tongue-in-cheek here.

I think actual computers should influence computer science; for instance, it's crucial for fast algorithms to be smart with respect to CPU cache usage, but many of the 'classical computer science' hash tables are quite bad in that area.

comment by Vladimir_M · 2010-05-17T20:50:03.409Z · LW(p) · GW(p)

"Computer science is no more about computers than astronomy is about telescopes."

I'm a bit surprised this statement is being upvoted with such apparent admiration here. I've always found it rather inaccurate.

comment by Vladimir_Nesov · 2010-05-17T11:00:39.413Z · LW(p) · GW(p)

Our decision making algorithm is not much more than: invent a plan, try to predict what will happen based on that plan, and if the prediction seems good, implement the plan. All the "magic" really comes from the ability to make accurate predictions.

You need to locate a reasonable hypothesis before there is any chance for it to be right. A lot of magic is hidden in the "invent a plan".

comment by nhamann · 2010-05-18T07:14:06.579Z · LW(p) · GW(p)

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

It's been brought up in multiple comments already, but I also wanted to register my disapproval of this statement. The first four minutes of the first SICP video lecture has the best description of computer science that I've ever heard, so I quote:

"The reason that we think computer science is about computers is pretty much the same reason that the Egyptians thought geometry was about surveying instruments, and that is when some field is just getting started and you don't really understand it very well, it's very easy to confuse the essence of what you're doing with the tools that you use...I think in the future, people will look back and say, "well yes, those primitives in the 20th century were fiddling around with these gadgets called 'computers,' but really what they were doing was starting to learn how to formalize intuitions about process: how to do things; starting to develop a way to talk precisely about 'how-to' knowledge, as opposed to geometry that talks about 'what is true.'" - Hal Abelson

That said, I'm looking forward to your upcoming posts.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-05-19T04:14:50.896Z · LW(p) · GW(p)

Yet, OP has a point. In the course of getting a PhD in computer science, I had the requirement or opportunity to study computer hardware architecture, operating system design, compiler design, data structures, databases, graphics, and lots of different computer languages. And none of that stuff was ever relevant to AI - not one page of it. (Even the data structures and databases courses dealt only with data structures inappropriate for AI.) The courses I took in linguistics, neuroscience, mathematics, psychology, and even electrical engineering were all more useful.

Other than the specifically AI-oriented courses, I can recall only 2 computer science courses that turned out to be helpful for AI: Algorithm analysis, and computational complexity theory. And the AI courses always seemed out of place in the computer science department.

I would not recommend that anyone interested in AI major in computer science. Far too much time wasted on irrelevant subjects. It's difficult to say what they should major in - perhaps neuroscience, or math.

comment by John_Maxwell (John_Maxwell_IV) · 2010-05-17T07:43:49.157Z · LW(p) · GW(p)

Er, have you given much thought to friendliness?

Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone else has discovered algebra". The idea being that writing an AI that will behave predictably according to a set of rules you give it is much more difficult than building an AI that's smart enough to do dangerous stuff. It seems to me that if your ideas about AI are correct, you will be contributing to public knowledge of algebra.

Replies from: Daniel_Burfoot, nhamann, ocr-fork
comment by Daniel_Burfoot · 2010-05-17T23:17:04.743Z · LW(p) · GW(p)

I see that I am caught between a rock and a hard place. To people who think I'm wrong, I'm a crackpot who should be downvoted into oblivion. To people who think I might have something interesting and original to say, I'm helping to bring about the destruction of humanity.

To people who think I'm wrong: fine, who cares? Isn't the point of this site to be a forum where relatively well-informed discussions can take place about issues of mutual interest?

To people who think I'm bringing about doomsday: if my ideas are substantively right, it's going to take a long time before this stuff gets rolling. It will take a decade just to convince the mainstream scientific establishment. After that, things might speed up, but it's still going to be a long, hard slog. Did I mention I have only a good question, not an answer? Let's all take some deep breaths.

Replies from: John_Maxwell_IV, ocr-fork
comment by John_Maxwell (John_Maxwell_IV) · 2010-05-18T05:00:19.603Z · LW(p) · GW(p)

BTW, a potential bias you should be aware of in this situation is the human tendency to be irrationally inclined to go through with things once they've said they're going to do them. (I believe Robert Cialdini's Influence: Science and Practice talks about this.) So you might want to consider self-observing and trying to detect whether that bias is having any influence on your thought process. I (and, probably, all of the kind folks at SIAI--although of course I can't speak for them) will completely forgive you if you go back on your public statements on this. Speaking for myself individually, I'd see this as a demonstration of virtue.

And just to be a little silly, I'll use another technique from Influence on you: reciprocation. When I read that you didn't think computer science would be fundamental to the development of strong AI, I immediately thought "That can't be right". I had a very strong gut feeling that somehow, computer science must be fundamental to the development of strong AI, and I immediately started trying to find a reason for why it was. (It seems Vladimir Nesov's reaction was very similar to mine, and note that he didn't find much of a reason. My guess is his comment's high score is a result of many LW readers sharing his and my gut instinct.) However, I noticed that my mind had entered one of its failure modes (motivated continuation) and I thought to myself "Well, I don't have any solid argument now for why computer science must be fundamental, and there's no real reason for me to look for an argument in favor of that idea instead of an argument against it." So now I've publicly admitted that my gut instinct was unfounded and that my mind is broken; maybe using the Dark Technique of trying to get you to reciprocate will convince you to do the same. :P

To people who think I'm bringing about doomsday: if my ideas are substantively right, it's going to take a long time before this stuff gets rolling. It will take a decade just to convince the mainstream scientific establishment. After that, things might speed up, but it's still going to be a long, hard slog. Did I mention I have only a good question, not an answer? Let's all take some deep breaths.

I believe Eliezer is a member of the school of thought which holds that the intelligence explosion could potentially be triggered by nine geniuses working together in a basement.

Replies from: NihilCredo, Vladimir_Nesov
comment by NihilCredo · 2010-05-18T14:31:37.826Z · LW(p) · GW(p)

I believe Eliezer is... nine geniuses working together in a basement.

By the nether gods... IT ALL MAKES SENSE NOW

comment by Vladimir_Nesov · 2010-05-18T14:23:27.527Z · LW(p) · GW(p)

Note that I attacked a flaw in the argument (usage of analogy that assumes that computer science is about computers), and never said anything about the implied conclusion (that computer science is irrelevant for AI). And this does reflect my reaction.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-05-18T19:31:54.265Z · LW(p) · GW(p)

Oh, sorry, I missed that.

comment by ocr-fork · 2010-05-18T00:09:14.415Z · LW(p) · GW(p)

Let's all take some deep breaths.

I sense this thread has crossed a threshold, beyond which questions and criticisms will multiply faster than they can be answered.

comment by nhamann · 2010-05-17T18:21:28.476Z · LW(p) · GW(p)

But that is an absurd task, because if you don't understand algebra, you certainly won't be discovering differentiation. Attempting to "discover differential equations before anyone else has discovered algebra" doesn't mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE's.

It seems that a more reasonable approach would be a) work towards algebra while simultaneously b) researching and publicizing the potential dangers of unrestrained algebra use (Oops, the metaphor broke.)

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-05-18T04:35:55.315Z · LW(p) · GW(p)

But that is an absurd task, because if you don't understand algebra, you certainly won't be discovering differentiation. Attempting to "discover differential equations before anyone else has discovered algebra" doesn't mean you can skip over discovering algebra, it just means you also have to discover it in addition to discovering DE's.

To clarify: 'Anna Salamon once described the Singularity Institute's task as to "discover differential equations before anyone who isn't concerned with friendliness has discovered algebra".'

Replies from: nhamann
comment by nhamann · 2010-05-18T06:44:40.335Z · LW(p) · GW(p)

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI? That the OP shouldn't even work on AI at all, and should dedicate his efforts to advocating friendly AI discussion and research instead? If a major current barrier to FAI is understanding how intelligence even works to begin with, then this preliminary work (if it is useful) is going to be a necessary component of both regular AGI and FAI. Is the only problem you see, then, that it's going to be made publicly available? Perhaps we should establish a private section of LW for Top Secret AI discussion?

I apologize for being snarky, but I can't help but find it absurd that we should be worrying about the effects of LW articles on unfriendly singularity, especially given that the hard takeoff model, to my knowledge, is still rather fuzzy. (Last I checked, Robin Hanson put probability of hard takeoff at less than 1%. Unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

Replies from: rhollerith_dot_com, John_Maxwell_IV
comment by RHollerith (rhollerith_dot_com) · 2010-05-18T21:35:43.285Z · LW(p) · GW(p)

Last I checked, Robin Hanson put probability of hard takeoff at less than 1%.

And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., have already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.

Okay, but what exactly is the suggestion here? That the OP should not publicize his work on AI?

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

If the OP wishes to make a career in AGI research, he can do so responsibly by affiliating himself with SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI. They will probably share their insights with him only after a lengthy probationary period during which they vigorously check him for signs that he might do something irresponsible once they have taken him into their confidence. (ADDED. If it were me, I would look mainly for signs that the candidate might make a choice which tends to have a bad effect on the global situation, but a positive effect on his or her scientific reputation or on some other personal agenda that humans typically care about.) And they will probably share their insights with him only after he has made a commitment to stay with the group for life.

Replies from: nhamann, Tyrrell_McAllister, RobinHanson, NancyLebovitz, Vladimir_Nesov
comment by nhamann · 2010-05-18T21:53:35.334Z · LW(p) · GW(p)

Seems like the sensible course of action to me! Do you really think Eliezer and other responsible AGI researchers have published all of their insights into AGI?

I don't buy that that's a good approach, though. This seems more like security through obscurity to me: keep all the work hidden, and hope both a) that it's on the right track and b) that no one else stumbles upon it. If, on the other hand, AI discussion did take place on LW, then that gives us a chance to frame the discussion and ensure that FAI is always a central concern.

People here are fond of saying "people are crazy, the world is mad," which is sadly true. But friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity; every effort needs to be made to bring this issue to the forefront of mainstream AI research.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-19T00:35:41.003Z · LW(p) · GW(p)

friendliness is too important an issue for SIAI and the community surrounding it to set itself up as stewards of humanity

I agree, which is why I wrote, "SIAI, the Future of Humanity Institute or some other group with a responsible approach to AGI". If for some reason, the OP does not wish to or is not able to join one of the existing responsible groups, he can start his own.

In security through obscurity, a group relies on a practice they have invented and kept secret when they could have chosen instead to adopt a practice that has the benefit of peer review and more testing against reality. Well, yeah, if there exists a practice that has already been tested extensively against reality and undergone extensive peer review, then the responsible AGI groups should adopt it -- but there is no practice like that for solving this particular problem. There are no good historical examples of the current situation with AGI, but the body of practice with the most direct applicability that I can think of right now is the situation during and after WW II in which the big military powers mounted vigorous systematic campaigns that lasted for decades to restrict the dissemination of certain kinds of scientific and technical knowledge. Let me note that in the U.S. this campaign included the requirement for decades that vendors of high-end computer hardware and machine tools obtain permission from the Commerce Department before exporting any products to the Soviets and their allies. Before WW II, other factors (like wealth and the will to continue to fight) besides scientific and technical knowledge dominated the list of factors that decided military outcomes.

Note that the current plan of SIAI for what the AGI should do after it is created is to be guided by an "extrapolation" that gives equal weight to the wishes or "volition" of every single human living at the time of the creation of the AGI, which IMHO goes a very long way to alleviating any legit concerns of people who cannot join one of the responsible AGI groups.

comment by Tyrrell_McAllister · 2010-05-18T22:23:27.115Z · LW(p) · GW(p)

And among writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer (i.e., have already invested years of their lives in becoming AGI researchers), Robin Hanson is on one extreme end of the continuum of opinion on the subject.

I didn't realize that. Have there been surveys to establish that Robin's view is extreme?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-19T00:47:46.697Z · LW(p) · GW(p)

In discussions on Overcoming Bias during the last 3 years, before and after LW spun off of Overcoming Bias, most people voicing opinions backed by actual reasoning assigned a higher probability than Robin does to a hard take-off, given that a self-improving AGI is created.

In the spirit of impartial search for the truth, I will note that rwallace on LW advocates not worrying about unFriendly AI, but I think he has invested years becoming an AGI researcher. Katja Grace is another who thinks hard take-off is very unlikely and has actual reasoning on her blog to that effect. She has not invested any time becoming an AGI researcher, and has lived for a time at Benton Street as a Visiting Fellow and in the Washington, D.C., area, where she traveled with the express purpose of learning from Robin Hanson.

All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does. At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se. In the spirit of impartial search for truth, I note that SIAI employees and volunteers probably chose the attendee list of this workshop.

Replies from: Tyrrell_McAllister, steven0461, JoshuaZ
comment by Tyrrell_McAllister · 2010-05-19T14:32:08.182Z · LW(p) · GW(p)

All the full-time employees and volunteers of SIAI that I know of assign much more probability to hard take-off (given AGI) than Robin does.

I'm not convinced that "full-time employees and volunteers of SIAI" are representative of "writers actually skilled at general rationality who do not have a very large personal vested interest in one particular answer", even when weighted by level of rationality.

I'm under the vague impression that Daniel Dennett and Douglas Hofstadter are skeptical about hard take-off. Do you know whether that impression is correct?

ETA: . . . or is there a reason to exclude them from the relevant class of writers?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-19T16:13:07.938Z · LW(p) · GW(p)

No, I know of no reason to exclude Douglas Hofstadter from the relevant class of writers though his writings on the topic that I have seen are IMO not very good. Dennett has shown abundant signs of high skill at general rationality, but I do not know if he has done the necessary reading to have an informed probability of hard take-off. But to get to your question, I do not know anything about Dennett's opinions about hard take-off. (But I'd rather talk of the magnitude of the (negative) expected utility of the bad effects of AGI research than about "hard take-off" specifically.)

Add Bill Joy to the list of people very worried about the possibility that AI research will destroy civilization. He wrote of it in an influential piece in Wired in 2000. (And Peter Thiel, if his donations to SIAI mean what I think they mean.)

Note that unlike those who have invested a lot of labor in SIAI, and consequently who stand to gain in prestige if SIAI or SIAI's area of interest gains in prestige or importance, Bill Joy has nothing personal to gain from holding the opinion he holds. Neither do I, BTW: I applied to become a visiting fellow at SIAI last year and was turned down in such a way that made it plain that the decision was probably permanent and probably would not be revisited next year. Then I volunteered to work at SIAI at no cost to SIAI and was again turned down. (((ADDED. I should rephrase that: although SIAI is friendly and open and has loose affiliations with very many people (including myself) my discussions with SIAI have left me with the impression that I will probably not be working closely enough with SIAI at any point in the future for an increase in SIAI's prestige (or income for that matter) to rub off on me.))) I would rather have not disclosed that in public, but I think it is important to give another example of a person who has no short-term personal stake in the matter who thinks that AGI research is really dangerous. Also, it makes people more likely to take seriously my opinion that AGI researchers should join a group like SIAI instead of publishing their results for all the world to see. (I am not an AGI researcher and am too old (49) to become one. Like math, it really is a young person's game.)

Let me get more specific on how dangerous I think AGI research is: I think a healthy person of, say, 18 years of age is more likely to be killed by AGI gone bad than by cancer or by war (not counting deaths caused by military research into AGI). (I owe this way of framing the issue to Eliezer, who expressed an even higher probability to me 2 years ago.)

any other questions for me?

Replies from: NancyLebovitz, kodos96, Tyrrell_McAllister
comment by NancyLebovitz · 2010-05-19T17:48:20.816Z · LW(p) · GW(p)

Please expand on your reasons for thinking AGI is a serious risk within the next 60 years or so.

comment by kodos96 · 2010-05-19T19:01:24.586Z · LW(p) · GW(p)

and was turned down in such a way that made it plain that the decision was probably permanent and probably would not be revisited

Hmmm... I have absolutely no knowledge of the politics involved in this, but it sounds intriguing.... could you elaborate on this a bit more?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-20T09:46:25.777Z · LW(p) · GW(p)

BTW I have added a sentence of clarification to my comment.

All I am going to say in reply to your question is that the policy that seems to work best in the part of the world in which I live (California) is to apply to participate in any educational program one would like to participate in and to join every outfit one would like to join, and to interpret the rejection of such an application as neither a reflection on one's value as a person nor the result of the operation of "politics".

comment by Tyrrell_McAllister · 2010-05-19T18:45:43.415Z · LW(p) · GW(p)

any other questions for me?

Nope, that's all from me. Thanks for your thorough reply :). (My question was just about the meta-level claim about expert consensus, not the object level claim that there will be a hard take-off.)

comment by steven0461 · 2010-05-19T01:19:46.855Z · LW(p) · GW(p)

Also, people who believe hard takeoff is plausible are more likely to want to work with SIAI, and people at SIAI will probably have heard the pro-hard-takeoff arguments more than the anti-hard-takeoff arguments. That said, <1% is as far as I can tell a clear outlier among those who have thought seriously about the issue.

Replies from: Will_Newsome
comment by Will_Newsome · 2010-05-20T06:52:51.617Z · LW(p) · GW(p)

When Robin visited Benton house and the 1% figure was brought up, he was skeptical that he had ever made such a claim. Do you know where that estimate came up (on OB or wherever)? I'm worried about ascribing incorrect probability estimates to people who are fully able to give new ones if we asked.

Replies from: arundelo
comment by arundelo · 2010-05-20T07:48:05.841Z · LW(p) · GW(p)

Off-topic question: Is Benton house the same as the SIAI house? (I see that it is in the Bay Area.) Edit: Thanks Nick and Kevin!

Replies from: Kevin, Nick_Tarleton
comment by Kevin · 2010-05-20T08:01:38.281Z · LW(p) · GW(p)

The people living there seem to call it Benton house or Benton but I try to avoid calling it that to most people because it is clearly confusing. It'll be even more confusing if the SIAI house moves from Benton Street...

comment by Nick_Tarleton · 2010-05-20T07:53:53.653Z · LW(p) · GW(p)

Yes.

comment by JoshuaZ · 2010-05-19T14:45:54.476Z · LW(p) · GW(p)

At a workshop following last year's Singularity Summit, every attendee expressed the wish that brain emulation would arrive before AGI. I get the definite impression that those wishes stem mainly from fears of hard takeoff, and not from optimism about brain emulation per se.

Are you sure this wasn't a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-05-19T16:29:43.176Z · LW(p) · GW(p)

Are you sure this wasn't a worry at all due to the fact that even without hard take-off moderately smart unFriendly AI can do a lot of damage?

Well, the question prompting the discussion was whether a responsible AGI researcher should just publish his or her results (and let us for the sake of this dialog count as a "result" an idea that took a long time to identify, even though it might not pan out) for any old AGI researcher to see, or whether he or she should take care to control as best he or she can the dissemination of the results, so that the rate of dissemination to responsible researchers is optimized relative to the rate of dissemination to irresponsible ones. If an unFriendly AI can do a lot of damage without hard take-off, well, I humbly suggest he or she should take pains to control dissemination.

But to answer your question in case you are asking out of curiosity rather than to forward the discussion on "controlled dissemination": well, Eliezer certainly thinks hard take-off represents the majority of the negative expected utility, and if the other (2) attendees of the workshop that I have had long conversations with felt differently, I would have learned of that by now more likely than not. (I, too, believe that hard take-off represents the majority of the negative expected utility even when utility is defined the "popular" way rather than the rather outré way I define it.)

Replies from: JoshuaZ
comment by JoshuaZ · 2010-05-21T02:32:54.636Z · LW(p) · GW(p)

Yes, this was a question asked out of curiosity about the responses, not specifically in regard to the issue of controlled dissemination.

comment by RobinHanson · 2010-05-21T10:52:56.733Z · LW(p) · GW(p)

For rational people skeptical about hard takeoff, consider the Interim Report from the Panel Chairs, AAAI Presidential Panel on Long-Term AI Futures. Most economists I've talked to are also quite skeptical, much more so than I. Dismissing such folks because they haven't read enough of your writings or attended your events seems a bit biased to me.

Replies from: Roko, rhollerith_dot_com, CarlShulman
comment by Roko · 2010-05-21T11:38:00.332Z · LW(p) · GW(p)

"The panel of experts was overall skeptical of the radical views expressed by futurists and science-fiction authors. Participants reviewed prior writings and thinking about the possibility of an “intelligence explosion” where computers one day begin designing computers that are more intelligent than themselves. They also reviewed efforts to develop principles for guiding the behavior of autonomous and semi-autonomous systems. Some of the prior and ongoing research on the latter can be viewed by people familiar with Isaac Asimov's Robot Series as formalization and study of behavioral controls akin to Asimov’s Laws of Robotics. There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems."

comment by RHollerith (rhollerith_dot_com) · 2010-05-21T11:19:26.121Z · LW(p) · GW(p)

Hi Robin!

If a professional philosopher or an economist gives his probability that AGI researchers will destroy the world, I think a curious inquirer should check for evidence that the philosopher or economist has actually learned the basics of the skills and domains of knowledge the AGI researchers are likely to use.

I am pretty sure that you have, but I do not know that, e.g., Daniel Dennett has, excellent rationalist though he is. All I was saying is that my interlocutor should check that before deciding how much weight to give Dennett's probability.

Replies from: RobinHanson
comment by RobinHanson · 2010-05-21T17:37:22.587Z · LW(p) · GW(p)

But in the above you explicitly choose to exclude AGI researchers. Now you also want to exclude those who haven't read a lot about AGI? Seems like you are trying to exclude as irrelevant everyone who isn't an AGI amateur like you.

Replies from: jimrandomh
comment by jimrandomh · 2010-05-21T18:30:58.309Z · LW(p) · GW(p)

I guess it depends where exactly you set the threshold. Require too much knowledge and the pool of opinions, and the diversity of the sources of those opinions, will be too small (i.e., just "AGI amateurs"). On the other hand, the minimum amount of research required to properly understand the AGI issue is substantial, and if someone demonstrates a serious lack of understanding, such as claiming that AI will never be able to do something that narrow AIs can do already, then I have no problem excluding their opinion.

comment by CarlShulman · 2010-05-21T18:27:45.718Z · LW(p) · GW(p)

Most economists I've talked to are also quite skeptical, much more so than I.

About advanced AI being developed, extremely rapid economic growth upon development, or local gains?

comment by NancyLebovitz · 2010-05-18T22:17:32.781Z · LW(p) · GW(p)

Now that you mention it, I didn't have any opinion about whether Eliezer et al. had secret ideas about AI.

My tentative assumption is that they hadn't gotten far enough to have anything worth keeping secret, but this is completely a guess based on very little.

comment by Vladimir_Nesov · 2010-05-18T21:47:20.728Z · LW(p) · GW(p)

Lots of guesswork.

comment by John_Maxwell (John_Maxwell_IV) · 2010-05-18T19:30:12.553Z · LW(p) · GW(p)

(Last I checked, Robin Hanson put probability of hard takeoff at less than 1%. Unfriendly singularity is so bad an outcome that research and discussion about hard takeoff is warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)

If the probability of hard takeoff was 0.1%, it's still too high a probability for me to want there to be public discussion of how one might build an AI.

http://www.nickbostrom.com/astronomical/waste.html

Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risks would be worth (from a utilitarian expected utility point-of-view) a delay of over 10 million years.

comment by ocr-fork · 2010-05-17T21:44:58.238Z · LW(p) · GW(p)

I don't get it. Are you saying a smart, dangerous AI can't be simple and predictable? Differential equations are made of algebra, so did she mean the task is impossible? You were replying to my post, right?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-05-18T04:37:09.669Z · LW(p) · GW(p)

Are you saying a smart, dangerous AI can't be simple and predictable?

Probably not simple.

The point is that for it to be predictable, you'd need a very high level of knowledge about it. More than the amount necessary to build it.

comment by Morendil · 2010-05-17T06:25:27.881Z · LW(p) · GW(p)

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

Something of a jarring note in an otherwise interesting post (I'm at least curious to see the follow-up), in that you are a) reasoning by analogy and b) picking the wrong one: the usual story about music is that it begins with plucked strings and that the study of string resonance modes gave rise to the theories of tuning and harmony.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T10:44:37.067Z · LW(p) · GW(p)

in that you are a) reasoning by analogy and b) picking the wrong one:

I have separate reasons for believing that CS is a bad influence (the analogy is an illustration, not an argument). Basically, CS is a mix of theory and engineering with very little empirical science mixed in.

comment by ocr-fork · 2010-05-17T05:48:23.598Z · LW(p) · GW(p)

I think I understand better now.

Your proposal seems to involve throwing out "sophisticated mathematics" in favor of something else more practical, and probably more complex. You can't do that. Math always wins.

The problem with math is that it's too powerful: it describes everything, including everything you're not interested in. In theory, all you need to make an AI is a few Turing machines to simulate reality and Bayes theorem to pick the right ones. In practice this AI would take an eternity to run. Turing machines live in a world of 0s and 1s, but we live a world made of clouds and birds, and a machine that talks in binary about clouds and birds would be complicated and hard to find. For a practical AI, you need a model of computation that regards nouns, verbs and people as the building blocks of reality, and regards Turing machines as very weird examples of nouns. This model would perform worse than a Turing machine if presented with a freakish alternate universe with no concept of time or space, but otherwise it's fine. The hard part is compromising between simplicity and open-mindedness.
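
To illustrate (with a deliberately tiny toy, not a real enumeration of programs): weight each candidate hypothesis by a length-based prior, keep the ones consistent with the data, and normalize. The hypothesis set and bit counts below are invented for the example.

```python
def bayes_over_hypotheses(hypotheses, observations):
    """Toy Bayesian selection over a tiny hand-picked hypothesis set.

    hypotheses maps a name to (predict_fn, description_length_in_bits).
    The likelihood here is all-or-nothing (1 if every observation is matched
    exactly, else 0) -- a drastic simplification, for illustration only.
    """
    posterior = {}
    for name, (predict, length_bits) in hypotheses.items():
        prior = 2.0 ** (-length_bits)  # shorter programs get more prior weight
        likelihood = 1.0 if all(predict(x) == y for x, y in observations) else 0.0
        posterior[name] = prior * likelihood
    total = sum(posterior.values())
    return {name: (w / total if total else 0.0) for name, w in posterior.items()}

# Which rule explains (1, 2), (2, 4), (3, 6)? The bit counts are made up.
hypotheses = {
    "double": (lambda x: 2 * x, 8),
    "square": (lambda x: x * x, 8),
    "lookup_table": (lambda x: {1: 2, 2: 4, 3: 6}.get(x), 24),
}
print(bayes_over_hypotheses(hypotheses, [(1, 2), (2, 4), (3, 6)]))
```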

The same applies to neural networks. In theory, the shape can be anything you like as long as it's big enough. (I'm leaving out a lot of details here, sorry.) Math is just the general framework that you build reality inside.

Empirical methods are upside down. You're starting with the gritty details, hoping that as everything piles up something more powerful than Bayesian inference will emerge. That won't happen. Instead you'll get a lousy, brittle copy of Bayesian inference that can't handle anything too different from what it was designed for... like a human.

(Edited for grammar)

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T11:01:13.727Z · LW(p) · GW(p)

Your proposal seems to involve throwing out "sophisticated mathematics"

I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it's impressive.

Obviously, in fields like physics math is very, very useful. In other cases, it's better to just go out and write down what you see. So cartographers make maps, zoologists write field guides, and linguists write dictionaries. Why a priori should we prefer one epistemological scheme to another?

Replies from: ata, ocr-fork
comment by ata · 2010-05-21T11:15:41.175Z · LW(p) · GW(p)

I am not, of course, against mathematics per se. But the reason math is used in physics is because it describes reality. All too often in AI and computer vision, math seems to be used because it's impressive.

I'd find it much more impressive if you could do anything useful in AI or computer vision without math.

comment by ocr-fork · 2010-05-17T13:03:23.707Z · LW(p) · GW(p)

What else is there to see besides humans?

Replies from: Clippy
comment by Clippy · 2010-05-17T15:47:24.807Z · LW(p) · GW(p)

Paperclips. Also, paperclip makers. And paperclip maker makers. And paperclip maker maker makers.

And stuff for maintaining paperclip maker maker makers.

Replies from: cupholder
comment by cupholder · 2010-05-17T16:13:30.438Z · LW(p) · GW(p)

And paper?

Replies from: Clippy
comment by Clippy · 2010-05-17T17:38:14.374Z · LW(p) · GW(p)

Maybe.

comment by whpearson · 2010-05-17T10:35:52.219Z · LW(p) · GW(p)

I am unsure whether this is LW material. There are plenty of people with ideas about AI and it tends to generate more heat than light, from my experience. I'll reserve judgement though, since there is a need for a place to discuss things.

First I agree with the need to take AI in different directions.

However, I'm sceptical of the Input Output view of intelligence. Humans aren't pure functions that always map the same input to the same output; their output depends on their history as well. So even if you have a system that corresponds with what a human does from time t to t+n, it may not correspond at times greater than t+n.

The way forward, for me, is to look at altering the software ecosystem. Currently the programs we write are static, rigid structures with limited awareness of their surrounding software. They are like this because it is easier for the human system administrator to deal with. We need to write software that looks at its computing environment and reasons about it, to manage itself and the (virtual) machines that enable this to be done in a controlled fashion.

comment by ObliqueFault · 2010-05-17T17:01:51.071Z · LW(p) · GW(p)

"(and why did it take so long for people to figure out the part about empirical verification)?"

Most of the immediate progress after the advent of empiricism was about engineering more than science. I think the biggest hurdle wasn't lack of understanding of the importance of empirical verification, but lack of understanding of human biases.

Early scientists just assumed that they were either unbiased or that their biases wouldn't affect the data. They had no idea of the power of expectation and selection biases, placebo effects, etc. It wasn't until people realized this and started controlling for it that science took off.

'An important aspect of my proposal will be to expand the definitions of the words "scientific theory" and "scientific method"'

I have to admit that this idea makes me extremely wary, but that's probably because I'm used to statements like this coming from people with a harmful agenda (i.e. creationists). I'll try to keep an open mind when I read your future posts in this series.

comment by [deleted] · 2010-05-17T03:35:56.681Z · LW(p) · GW(p)

Have you heard of the methodology proposed by cyberneticists and systems engineers, and if so, how is it similar to or different from what you are proposing?

Edited for diplomacy/clarity.

comment by ocr-fork · 2010-05-17T03:37:52.777Z · LW(p) · GW(p)

So... what's your proposal?

I am going to post an outline of my proposal over the next couple of weeks. I expect most of you will disagree with most of it, but I hope we can at least identify concretely the points at which our views diverge. I am very interested in feedback and criticism, both regarding material issues (since we reason to argue), and on issues of style and presentation. The ideas are not fundamentally difficult; if you can't understand what I'm saying, I will accept at least three quarters of the blame.

Aw come on, just one little hint? Most posts have a tl;dr paragraph or a "related to" to help people understand.

To suggest that computer science should be an important influence on AI is a bit like suggesting that woodworking should be an important influence on music, since most musical instruments are made out of wood.

Computer science is probably not what you think it is. AI is included in it; but so is applied stuff like hacking. I think time (not watchmaking, just time) would make a better example.

Edited for trying/failing not to sound mean/weird.

comment by mindviews · 2010-05-17T08:18:27.342Z · LW(p) · GW(p)

Thoughts I found interesting:

The failures indicate that, instead of being threads in a majestic general theory, the successes were just narrow, isolated solutions to problems that turned out to be easier than they originally appeared.

Interesting because I don't think it's true. I think the problem is more about the need of AI builders to show results. Providing a solution (or a partial solution or a path to a solution) in a narrow context is a way to do that when your tools aren't yet powerful enough for more general or mixed approaches. Given the variety of identifiable structures in the human brain that give us intelligence, I strongly expect that an AI will be built by combining many specialized parts that will probably be based on multiple research areas we'd recognize today.

One obvious influence comes from computer science, since presumably AI will eventually be built using software. But this fact appears irrelevant to me, and so the influence of computer science on AI seems like a disastrous historical accident.

Interesting because it forced me to consider what I think AI is outside the context of computer science - something I don't normally do.

In my view, AI can and must become a hard, empirical science, in which researchers propose, test, refine, and often discard theories of empirical reality.

Interesting because I'm very curious to see what this means in the context of your coming proposal.

comment by Jonathan_Graehl · 2010-05-17T03:33:12.044Z · LW(p) · GW(p)

I work in machine translation research. Google might have a little more data, but there are several groups doing equally good work.

comment by zero_call · 2010-05-20T01:56:24.781Z · LW(p) · GW(p)

This sounds really good and interesting, and is well written, but it also sounds incredibly ambitious. Maybe a little more conservative presentation would be more convincing for me.

comment by Vladimir_Nesov · 2010-05-17T10:54:41.971Z · LW(p) · GW(p)

Human intelligence, in contrast, is based on a complex inductive foundation, supplemented by minor deductive operations.

You'd be hard-pressed to formalize this statement, since any notion of "induction" can find a deductive conceptualization.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-17T17:44:40.473Z · LW(p) · GW(p)

I will formalize it. I don't know what your second statement means; to me induction and deduction are completely different. 2+2=4 is a deductive statement, provably true within the context of a certain formal system. "Mars is red" is an inductive statement, it can't be derived from some larger theory; we believe it because of empirical evidence.

Replies from: Vladimir_M, SilasBarta
comment by Vladimir_M · 2010-05-17T20:37:55.871Z · LW(p) · GW(p)

"Mars is red" is an inductive statement, it can't be derived from some larger theory; we believe it because of empirical evidence.

That's not an example of a non-trivial induction, since you're talking about a set with only one element. A truly inductive statement says something about a larger set of things where we don't have the relevant empirical data about each single one of them. And once you start formalizing a procedure for non-trivial induction, the boundary between induction and deduction becomes very blurry indeed.

Replies from: Daniel_Burfoot
comment by Daniel_Burfoot · 2010-05-21T18:57:17.230Z · LW(p) · GW(p)

Maybe an example will clarify the issue. Compare general relativity to a world atlas. Both are computational tools that enable predictions, so both are, by my definition, scientific theories. Now GR is very complex deductively (it relies on complex mathematics), but very simple parametrically (it uses only a couple of constants). The world atlas is the opposite - simple deductively but complex parametrically (requires a lot of bits to specify).

comment by SilasBarta · 2010-05-17T21:14:58.348Z · LW(p) · GW(p)

I trust you've read the discussions and articles regarding the status of purported "a priori" knowledge, then? If not, I have reason to suspect your ideas will not appear informed and will thus not yield insight.

comment by JanetK · 2010-05-17T09:04:37.720Z · LW(p) · GW(p)

One thing the world has is an abundance of human minds. We actually do not need machines that think like humans - we have humans. What we need is two other things: machines that do thinking that humans find difficult (like the big number crunchers) and one-off machines that are experimental proofs-of-concept for understanding how a human brain works (like Blue Brain). As far as getting the glory for doing what many said was impossible and unveiling a mechanical human-like intelligence goes, forget the glory, because they will just move the goal posts.

I believe that what is needed is to leave sequential operations and learn how to effectively use parallel operations. This would get close to a human intelligence and would also advance the power of computing of a non-human but useful kind.

I think you are so very right about the importance of prediction!!! And I'm looking forward to later posts.

Replies from: Risto_Saarelma, cupholder, whpearson
comment by Risto_Saarelma · 2010-05-17T10:47:38.890Z · LW(p) · GW(p)

One thing the world has is an abundance of human minds. We actually do not need machines that think like humans - we have humans.

Machines for doing dangerous and monotonous work which requires human or near-human levels of perception and judgment such as mining or driving trucks would have a clear utility, even though they'd just be machines that think (somewhat) like humans and would neither do superhuman feats of cognition nor advance the understanding of the mind design space.

comment by cupholder · 2010-05-17T16:32:17.008Z · LW(p) · GW(p)

One thing the world has is an abundance of human minds. We actually do not need machines that think like humans - we have humans.

We have an abundance of ordinary human minds. We don't have an abundance of genius human minds. For all I know, machines that thought like Shakespeare or Mill or Newton could be a godsend.

Replies from: NihilCredo
comment by NihilCredo · 2010-05-17T16:50:25.808Z · LW(p) · GW(p)

One can make a case that genius is precisely the degree to which one does not think like a human mind (at least in a more useful and/or beautiful way).

Replies from: cupholder
comment by cupholder · 2010-05-17T17:32:07.277Z · LW(p) · GW(p)

Depends how broadly you're drawing the line around the 'human mind' concept. I'd say that since Shakespeare, Mill and Newton's minds were all human minds, that's a prima facie case for saying they think like humans.

comment by whpearson · 2010-05-17T13:04:18.679Z · LW(p) · GW(p)

Well, I'd agree we don't want exact human clones. But then the majority of people don't want the complex-to-use computers we have at the moment. Moving from serial to parallel won't make the computer any easier to use or reduce the learning burden on the user. The beauty of interacting with a human is that you don't need to know the fine details of how it works on the inside to get it to do what you want, even if it didn't have the ability to do the task previously. This aspect of the human brain would be very beneficial if we can get computers to have it (assuming it doesn't lead to a negative singularity, extinction of the human race, etc.).

Replies from: ocr-fork
comment by ocr-fork · 2010-05-17T13:39:02.697Z · LW(p) · GW(p)

An AI that acts like people? I wouldn't buy that. It sounds creepy. Like Clippy with a soul.

Replies from: whpearson
comment by whpearson · 2010-05-17T13:54:10.683Z · LW(p) · GW(p)

I didn't say acts like people. I said had one aspect of humans (and dogs or other trainable animals for that matter).

We don't need to add all the other aspects to make it act like a human.