Link: why training a.i. isn’t like training your pets

post by XiXiDu · 2011-01-12T18:23:12.823Z · LW · GW · Legacy · 78 comments

As the SIAI gains publicity, more people are reviewing its work. I am not sure how popular this blog is, but judging by its about page the author writes for some high-profile blogs. His latest post takes on Omohundro's "Basic AI Drives":

When we last looked at a paper from the Singularity Institute, it was an interesting work by Dr. Shane Legg asking if we actually know what we’re really measuring when trying to evaluate intelligence. While I found a few points that seemed a little odd to me, the broader point Dr. Legg was pursuing was very much valid and there were some equations to consider. However, this paper isn’t exactly representative of most of the things you’ll find coming from the Institute’s fellows. Generally, what you’ll see are sprawling philosophical treatises filled with metaphors, trying to make sense out of a technology that either doesn’t really exist and is treated as a black box with inputs and outputs, or is imagined by the author as a combination of whatever a popular science site reported about new research ideas in computer science. The end result of this process tends to be a lot like this warning about the need to develop a friendly or benevolent artificial intelligence system based on a rather fast and loose set of concepts about what an AI might decide to do and what will drive its decisions.

Link: worldofweirdthings.com/2011/01/12/why-training-a-i-isnt-like-training-your-pets/

I posted a few comments, but I do not think I am the right person to continue that discussion. So if you believe that what other people think about the SIAI is important and you want to improve its public relations, here is your chance. I am myself interested in the answers to his objections.

78 comments


comment by JoshuaZ · 2011-01-14T04:46:40.105Z · LW(p) · GW(p)

I'm worried that XiXi posted this link expressly as an example of the sort of thing that the SIAI should be engaging in and then when the author came over here, his comments got quickly downvoted. This is not an effective recipe for engagement.

Replies from: GregFish, TheOtherDave, wedrifid
comment by GregFish · 2011-01-14T14:07:08.614Z · LW(p) · GW(p)

Hey, if people choose to downvote my replies, either because they disagree or just plain don't like me, that's their thing. I'm not all that easy to scare with a few downvotes... =)

comment by TheOtherDave · 2011-01-14T13:54:46.491Z · LW(p) · GW(p)

Do you think the comments themselves ought not have been downvoted? Or just that, regardless of the value of the comments, the author ought not have been?

If the former, that seems a broader concern. If you have a sense of what it is about them that the community disliked that it ought not have disliked, it might be valuable to articulate that sense and why a different metric would be preferable.

If the latter, I'm not sure that's a bad thing, nor am I sure that "fixing" it doesn't cause more problems than it resolves.

comment by wedrifid · 2011-01-14T10:59:49.626Z · LW(p) · GW(p)

Wolf!

comment by timtyler · 2011-01-12T19:21:31.422Z · LW(p) · GW(p)

Shane Legg is not "from the Singularity Institute". He is currently a postdoctoral research fellow at the Gatsby Computational Neuroscience Unit in London.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-12T19:42:50.655Z · LW(p) · GW(p)

The reason that the piece refers to him in that context is that the author read Legg's material on the advice of Michael Anissimov (who is affiliated with the SI).

comment by whpearson · 2011-01-13T13:05:00.535Z · LW(p) · GW(p)

My view is that the problem here is a disconnect between the practical and the theoretical viewpoints.

The practical view of computers is likely to commit PC-morphism, that is, to assume that any computer systems of the future will be like current PCs in the way they are programmed and act. This is not unreasonable if you haven't been exposed to things like cellular automata and have a lot of evidence of computers being PC-like.

The theoretical view looks at the entire world as a computer (computable physics etc) and so has grander views of what is possible. People who go for the theoretical view of computation also tend to go for the theoretical view of agents and like Omohundro's theory. I'm a little more sceptical of this and would rather keep the view at the computational level as the Omohundro-style view doesn't tell us when things will make mistakes or malfunction. The abstraction tosses out too much information for my liking.

The computational view is currently lacking a good framework for discussing intelligence. If we had one we would certainly be closer to implementing it. Whether we will get one in the future is hard to predict.

comment by Normal_Anomaly · 2011-01-13T02:32:04.320Z · LW(p) · GW(p)

I am by no means an expert, but I see a problem with this passage:

Wanted behaviors are rewarded, unwanted are punished, and the subject is basically trained to do something based on this feedback. It’s a simple and effective method since you’re not required to communicate the exact details of a task with your subject. Your subject might not even be human, and that’s ok because eventually, after enough trial and error, he’ll get the idea of what he should be doing to avoid a punishment and receive the reward. But while you’re plugging into the existing behavior-consequence circuit of an organism and hijacking it for your goals, no such circuit exists in a machine.

This seems like it's overgeneralizing "machines". I think there would be a "behavior-consequence" circuit in an AI that tells it to repeat things that result in a reward, and not repeat things that result in punishment, if the programmers put one there. Isn't this principle (reinforcement learning) already being implemented in narrow AI, for instance here? If I'm doing this wrong, or that link is not trustworthy, let me know please.
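
For concreteness, here is a minimal sketch of the sort of reinforcement loop being referred to: tabular Q-learning on a made-up five-cell corridor where reaching the rightmost cell is "rewarded". The task and the numbers are invented for illustration, not taken from the linked work.

    import java.util.Random;

    // Toy reinforcement learning: the agent is rewarded for reaching cell 4 and
    // gradually learns which action to repeat in each state.
    public class TinyQLearning {
        public static void main(String[] args) {
            int states = 5;                               // cells 0..4, goal is cell 4
            double[][] q = new double[states][2];         // actions: 0 = left, 1 = right
            double alpha = 0.5, gamma = 0.9, epsilon = 0.1;
            Random rng = new Random(0);

            for (int episode = 0; episode < 500; episode++) {
                int s = 0;
                while (s != states - 1) {
                    int a = (rng.nextDouble() < epsilon)
                            ? rng.nextInt(2)                          // occasional exploration
                            : (q[s][1] >= q[s][0] ? 1 : 0);           // otherwise act greedily
                    int next = Math.max(0, Math.min(states - 1, s + (a == 1 ? 1 : -1)));
                    double reward = (next == states - 1) ? 1.0 : 0.0; // the "reward" signal
                    double best = Math.max(q[next][0], q[next][1]);
                    q[s][a] += alpha * (reward + gamma * best - q[s][a]);
                    s = next;
                }
            }
            for (int s = 0; s < states - 1; s++) {
                System.out.println("state " + s + ": prefer "
                        + (q[s][1] >= q[s][0] ? "right" : "left"));
            }
        }
    }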

Replies from: Nornagest
comment by Nornagest · 2011-01-13T04:55:00.467Z · LW(p) · GW(p)

There are a number of machine learning techniques that don't involve progressive reinforcement of any kind. Most of those I can think of are either too crude to support AGI or computationally intractable when generalized outside of tiny problem domains, but I don't know of any proof that says AGI implies reinforcement learning.

On the other hand, you could make an analogous but stronger argument in terms of fitness functions.

Replies from: PeterisP, Normal_Anomaly
comment by PeterisP · 2011-01-16T15:09:11.564Z · LW(p) · GW(p)

To put it in very simple terms - if you're interested in training AI according to technique X because you think that X is the best way, then you design or adapt the AI structure so that technique X is applicable. Saying 'some AIs may not respond to X' is moot, unless you're talking about trying to influence (hack?) an AI designed and controlled by someone else.

comment by Normal_Anomaly · 2011-01-13T12:54:13.256Z · LW(p) · GW(p)

Thanks for the response. I'll check out the other techniques; I don't know much about them.

I don't know of any proof that says AGI implies reinforcement learning.

I didn't mean that, exactly; I just meant that reinforcement learning is possible. Fish seemed to be implying that it wasn't.

Replies from: GregFish
comment by GregFish · 2011-01-13T15:31:33.168Z · LW(p) · GW(p)

Fish seemed to be implying that it wasn't.

Absolutely not. If you take another look, I argue that it's unnecessary. You don't want the machine to do something? Put in a boundary. You don't have the option to just turn off a lab rat's desire to search a particular corner of its cage with a press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this code in Java would mean not to add two even numbers if the method receives them:

    public int add(int a, int b) {
        if ((a % 2) != 0 && (b % 2) != 0) {
            return a + b;
        }
        return -1;
    }

So why do I need to build an elaborate circuit to "reward" the computer for not adding even numbers? And why would it suddenly decide to override the condition? Just to see why? If I wanted it to experiment, I'd just give it fewer bounds.

Replies from: Perplexed
comment by Perplexed · 2011-01-13T18:49:33.679Z · LW(p) · GW(p)

Part of the disagreement here seems to arise from disjoint models of what a powerful AI would consist of.

You seem to imagine something like an ordinary computer, which receives its instructions in some high-level imperative language, and then carries them out, making use of a huge library of provably correct algorithms.

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance that 'learning is good' and 'hurting people is bad' together with a high-speed internet connection and the URL for wikipedia. Training such an AI might well be a bit like training your pets.

It is not clear to me which kind of AI will reach a human level of intelligence first. But if I had to bet, I would guess the second. And therein lies the danger.

ETA: But even the first kind of AI can be dangerous, because sooner or later someone is going to issue a command with unforeseen consequences.

Replies from: GregFish, timtyler
comment by GregFish · 2011-01-14T13:34:06.487Z · LW(p) · GW(p)

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance...

That's not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, you tell it to adjust the weights and activation thresholds of each neuron object until it can match the output. So you're not just telling it to learn, you're telling it what the problem is and what the answer should be, then letting it find its way to the solution. Then, you apply what it learned to real-world problems.
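
For what it's worth, here is a minimal sketch of that supervised loop: a single sigmoid "neuron" nudged by gradient descent toward the supplied answers. The AND task, the learning rate and the epoch count are arbitrary choices for the example; a real ANN would have many such units and use full backpropagation across layers.

    import java.util.Random;

    // One sigmoid neuron learning logical AND from labelled examples:
    // inputs and desired outputs are given, the weights are adjusted to match.
    public class TinyNeuron {
        public static void main(String[] args) {
            double[][] inputs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
            double[] targets = {0, 0, 0, 1};              // the answers we supply
            Random rng = new Random(42);
            double w0 = rng.nextDouble(), w1 = rng.nextDouble(), bias = rng.nextDouble();
            double rate = 0.5;

            for (int epoch = 0; epoch < 10000; epoch++) {
                for (int i = 0; i < inputs.length; i++) {
                    double sum = w0 * inputs[i][0] + w1 * inputs[i][1] + bias;
                    double out = 1.0 / (1.0 + Math.exp(-sum));   // squashing function
                    double grad = (targets[i] - out) * out * (1 - out);
                    w0 += rate * grad * inputs[i][0];            // nudge weights toward
                    w1 += rate * grad * inputs[i][1];            // the desired output
                    bias += rate * grad;
                }
            }
            for (double[] in : inputs) {
                double out = 1.0 / (1.0 + Math.exp(-(w0 * in[0] + w1 * in[1] + bias)));
                System.out.printf("%.0f AND %.0f ~ %.2f%n", in[0], in[1], out);
            }
        }
    }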

Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.

Replies from: Perplexed
comment by Perplexed · 2011-01-14T15:30:54.471Z · LW(p) · GW(p)

That's not what an artificial neural net actually is. ...

Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net. Clearly, a neural net by itself doesn't act autonomously - to get anything approaching 'intelligence' you will need to at least add some feedback loops beyond simple backpropagation.

Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.

More will go on in a future superhuman AI than goes on in any present-day toy AI. Well, yes, those other people I mention do seem to think that. But they are not indulging in any kind of mysticism. Only in the kinds of conceptual extrapolation which took place, for example, in going from simple combinational logic circuitry to the instruction fetch-execute cycle of a von Neumann computer architecture.

Replies from: GregFish
comment by GregFish · 2011-01-14T18:31:52.763Z · LW(p) · GW(p)

Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.

No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without having some sort of direct guidance.

More will go on in a future superhuman AI than goes on in any present-day toy AI.

And again I'm trying to figure out what the "superhuman" part will consist of. I keep getting answers like "it will be faster than us" or "it'll make correct decisions faster", and once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...

Replies from: Perplexed
comment by Perplexed · 2011-01-14T19:06:45.236Z · LW(p) · GW(p)

what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN.

Am I really being that unclear? I mean something containing so many and such large embedded neural nets that the rest of its circuitry is small by comparison. But that extra circuitry does mean that the whole machine indeed no longer functions by the rules of an ANN. Just as my desktop computer no longer functions by the rules of DRAM.

And again I'm trying to figure out what the "superhuman" part will consist of.

And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better. Play chess, write poetry, learn to speak Chinese, design computers, prove Fermat's Last Theorem. The whole human repertoire.

Sure, machines already do some of those things. Many people (I am not one of them) think that such an AI, doing every last one of those things at superhuman speed, would be transformative. It is at least conceivable that they are right.

Replies from: GregFish
comment by GregFish · 2011-01-14T20:09:18.643Z · LW(p) · GW(p)

Just as my desktop computer no longer functions by the rules of DRAM.

It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all. It's just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules; it's just that the latter are given the tools to do so much more with them.

And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.

But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does it mean that the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities?

Many people think that such an AI, doing every last one of those things at superhuman speed, would be transformative.

At the very least it would be informative and keep philosophers marinating on the whole "what does it mean to be human" thing.

Replies from: Perplexed
comment by Perplexed · 2011-01-14T22:25:57.230Z · LW(p) · GW(p)

Does it mean that the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities?

Yes. As long as it does everything roughly as well as a human and some things much better.

Replies from: timtyler, GregFish, JGWeissman
comment by timtyler · 2011-01-15T10:42:08.422Z · LW(p) · GW(p)

Bostrom has:

By a "superintelligence" we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.

I think that is more conventional. Unless otherwise specified, to be "super" you have to be much better at most of the things you are supposed to be "super" at.

comment by GregFish · 2011-01-14T23:21:35.947Z · LW(p) · GW(p)

Sounds like a logical conclusion to me...

I still have a lot of questions about detail but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.

comment by JGWeissman · 2011-01-14T22:51:39.327Z · LW(p) · GW(p)

To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans, be implemented in a way that humans (and therefore the AI) can understand well enough to improve on, and have access to the details of this implementation.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-01-14T23:43:47.234Z · LW(p) · GW(p)

To recursively self-improve to superhuman intelligence, the AI should be able to do everything as well as humans

It could start improving (in software) from a state where it's much worse than humans in most areas of human capability, if it's designed specifically for ability to self-improve in an open-ended way.

Replies from: JGWeissman
comment by JGWeissman · 2011-01-14T23:50:10.182Z · LW(p) · GW(p)

Agreed. I meant to emphasize the importance of the AI having the ability to effectively reflect on its own implementation details. An AI that is as smart as humans but doesn't understand how it works is not likely to FOOM.

Replies from: timtyler
comment by timtyler · 2011-01-15T00:28:17.120Z · LW(p) · GW(p)

The ability to duplicate adult researchers quickly and cheaply might accelerate the pace of research quite a bit, though.

Replies from: Perplexed
comment by Perplexed · 2011-01-15T00:50:27.163Z · LW(p) · GW(p)

It might indeed. 25 years of human nursing, diapering, potty training, educating, drug rehabilitating, and more educating gets you a competent human researcher about 1 time in 40, so artificial researchers are likely to be much cheaper and quicker to produce. But I sometimes wonder just how much of human innovation stems from the fact that not all human researchers have had exactly the same education.

Replies from: timtyler
comment by timtyler · 2011-01-15T10:37:34.607Z · LW(p) · GW(p)

If machine researchers are anything like phones or PCs, there will be millions of identical clones - but also substantial variation. Not just variation caused by different upbringings and histories, but variation caused by different architectural design.

By contrast humans are mostly all the same - due to being built using much the same recipe inherited from a recent common ancestor. We aren't built for doing research - whereas they probably will be. They will likely be running rings around us soon enough.

comment by timtyler · 2011-01-13T20:48:21.781Z · LW(p) · GW(p)

There's a big, fat book all about the topic of the difficulties of controlling machines - and it is now available online: Kevin Kelly - Out of Control

comment by JoshuaZ · 2011-01-12T19:39:56.776Z · LW(p) · GW(p)

Having read the piece I was not impressed. I became even less impressed when I read his criticism of Legg's piece. It seems to basically come down to "computers can't do things that humans can. And they never will be able to. So there."

Replies from: XiXiDu, GregFish
comment by XiXiDu · 2011-01-12T20:02:26.413Z · LW(p) · GW(p)

My intention for linking to it was not that I thought it featured good arguments, as you might notice from my comments over there, but that he is an educated skeptic with potential influence in the mainstream rationality community. The post is a sample of an outsider's perception and assessment of the SIAI. And right now is the time for the SIAI to hone its appearance and public relations. Because once people like PZ Myers become aware of the SIAI and portray it and LW negatively, this community will be inundated with literally thousands of mediocre rationalists and many potential donors will be lost.

Replies from: ata, GregFish
comment by ata · 2011-01-13T21:01:03.463Z · LW(p) · GW(p)

this community will be inundated with literally thousands of mediocre rationalists and many potential donors will be lost.

Bad comments will get downvoted and not seen by many people. If someone isn't getting much out of LW and LW isn't getting much out of their presence, they'll leave eventually. If the moderation system continues working about as well as it has been working, an influx of new users shouldn't be a problem. (It's probably something the site needs to be prepared for when Eliezer's books come out, anyway.)

comment by GregFish · 2011-01-13T01:24:26.562Z · LW(p) · GW(p)

My intention for linking to it was not that I thought it featured good arguments...

Gee, thanks. So you basically linked and replied as a form of damage control? And by the way, the "outsiders' perception" isn't helped when the "insiders'" arguments seem to be based not on what computers actually do, but what they're made to do in comic books.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-13T05:08:16.296Z · LW(p) · GW(p)

Gee, thanks. So you basically linked and replied as a form of damage control?

XiXi is actually one of the people here who is more critical of the SI and the notion of run-away superintelligence. XiXi can correct me if I'm wrong here, but I suspect that XiXi's intention in this particular instance was to do just what he said. To give an example of an outsider's perspective on the SI of exactly the type of outsider who the SI should be trying to convince and should be able to convince if their arguments have much validity.

And by the way, the "outsiders' perception" isn't helped when the "insiders'" arguments seem to be based not on what computers actually do, but what they're made to do in comic books.

Ok. This is the sort of remark that gets the SI people justifiably annoyed. Generalizations from fictional evidence are bad. But, at the same time, that something happens to have occurred in fictional settings isn't in general a reason to assign it lower probability than you would if one weren't aware of such fiction. (To use a silly example, there's fiction set after the sun has become a red giant. The fact that there's such fiction isn't relevant to evaluating whether or not the sun will enter such a phase.) It also misses one of the fundamental points that the SI people have made repeatedly: computers as they exist today are very weak entities. The SI's argument doesn't have to do with computers in general. It centers around what happens once machines have human level intelligence. So, ask yourself: how likely do you think it is that we'll ever have general AI, and if we do have general AI, what buggy failure modes seem most likely?

Replies from: GregFish
comment by GregFish · 2011-01-13T15:36:14.560Z · LW(p) · GW(p)

It centers around what happens once machines have human level intelligence.

As defined by... what exactly? We have problems measuring our own intelligence or even defining it, so we're giving computers a very wide sliding scale of intelligence based on personal opinions and ideas more than a rigorous examination. A computer today could ace just about any general knowledge test we give it if we tell it how to search for an answer or compute a problem. Does that make it as intelligent as a really academically adept human? Oh, and it can do it in a tiny fraction of the time it would take us. Does that make it superhuman?

Replies from: JoshuaZ, jsalvatier
comment by JoshuaZ · 2011-01-13T20:00:29.776Z · LW(p) · GW(p)

It may be a red herring to focus on the definition of "intelligence" in this context. If you prefer, taboo the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do. The issue is what happens after one has a machine that reaches that point.

Replies from: GregFish
comment by GregFish · 2011-01-14T14:11:49.571Z · LW(p) · GW(p)

... the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do.

But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-14T17:45:25.345Z · LW(p) · GW(p)

But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?

Irrelevant to the question at hand, which is what would happen if a machine had such capabilities. But, if you insist on discussing this issue also, machines with human-like abilities could be very helpful. For example, one might be able to train one of them to do some task, and then make multiple copies of it, which would be much more efficient than individually training lots of humans. Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question.)

Replies from: Vladimir_Nesov, GregFish
comment by Vladimir_Nesov · 2011-01-15T00:31:43.673Z · LW(p) · GW(p)

Or one could send such AIs into dangerous situations where we might not ethically send a person (whether it would actually be ethical to send an AI is a distinct question.)

Why is it distinct? Whether doing something is an error determines if it's beneficial to obtain ability and willingness to do it.

Replies from: ata
comment by ata · 2011-01-15T01:42:47.110Z · LW(p) · GW(p)

It's distinct when the question is about risk to the human, rather than about the ethics of the task itself. We could make nonsentient nonpersons that nevertheless have humanlike abilities in some broad or narrow sense, so that sacrificing them in some risky or suicidal task doesn't impact the ethical calculation as it would if we were sending a person.

(I think that's what JoshuaZ was getting at. The "distinct question" would presumably be that of the AI's potential personhood.)

comment by GregFish · 2011-01-14T18:48:36.068Z · LW(p) · GW(p)

Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-14T20:04:18.423Z · LW(p) · GW(p)

Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.

There are a large number of tasks where the expertise level needed by current technology is woefully insufficient. Anything that has a strong natural language requirement for example.

Replies from: GregFish
comment by GregFish · 2011-01-14T23:27:33.607Z · LW(p) · GW(p)

Oh fun, we're talking about my advisers' favorite topic! Yeah, strong natural language is a huge pain and if we had devices that understood human speech well, tech companies would jump on that ASAP.

But here's the thing. If you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? It's making AGI for something like that the equivalent of building a 747 to fly one person across a state? I can see various expert systems coming together as an AGI, but not starting out as such.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-15T05:30:11.192Z · LW(p) · GW(p)

It would surprise me if human-level natural-language processing were possible without sitting on top of a fairly sophisticated and robust world-model.

I mean, just as an example, consider how much a system has to know about the world to realize that in your next-to-last sentence, "It's" is most likely a typo for "Isn't."

Granted that one could manually construct and maintain such a model rather than build tools that maintain it automatically based on ongoing observations, but the latter seems like it would pay off over time.

comment by jsalvatier · 2011-01-13T17:51:14.601Z · LW(p) · GW(p)

I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all. Before we understood fire completely, it was still real and we could reason about it somewhat (fire consumes some things, fire is hot etc.). Similarly, intelligence is a real phenomenon that we don't completely understand and we can still do some reasoning about it. It is meaningful to talk about a computer having "human-level" (I think "human-like" might be more descriptive) intelligence.

Replies from: GregFish
comment by GregFish · 2011-01-14T13:53:38.774Z · LW(p) · GW(p)

I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all.

If you have no working definition for what you're trying to discuss, you're more than likely to be barking up the wrong tree about it. We didn't understand fire completely, but we knew that it was hot, you couldn't touch it, and you made it by rubbing dry sticks together really, really fast, or by making a spark with rocks and letting it land on dry straw.

Also, where did I say that until I get a definition of intelligence all discussion about the concept is meaningless? I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks. I think it's a perfectly reasonable way to go about this kind of discussion.

Replies from: jsalvatier, XiXiDu
comment by jsalvatier · 2011-01-14T16:37:58.571Z · LW(p) · GW(p)

I apologize, the intent of your question was not at all clear to me from your previous post. It sounded to me like you were using this as an argument that SIAI types were clearly wrong-headed.

To answer your question then, the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".

Replies from: GregFish
comment by GregFish · 2011-01-14T18:46:11.307Z · LW(p) · GW(p)

the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".

Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.

Replies from: jsalvatier
comment by jsalvatier · 2011-01-14T19:27:34.485Z · LW(p) · GW(p)

To clarify: I didn't mean that such a machine is necessarily "human level intelligent" in all respects, just that that is the characteristic relevant to the idea of an "intelligence explosion".

comment by XiXiDu · 2011-01-14T14:16:20.956Z · LW(p) · GW(p)

I just want to know what criteria an AI must meet to be considered human and match them with what we have so far so I can see how far we might be from those benchmarks.

Interesting question; Wikipedia does list some requirements.

comment by GregFish · 2011-01-13T01:14:07.672Z · LW(p) · GW(p)

Wow, if that's all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers and how the suggested equations only measure how many answers were right, not how that feat was accomplished, I don't even know how to properly respond...

Oh, and by the way, in the comments I suggest to Dr. Legg how to keep track of the machine doing some learning and figuring things out, so there's another thing to consider. And yes, I've had the formal instruction in discrete math to do so.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-13T04:59:24.411Z · LW(p) · GW(p)

Wow, if that's all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers and how the suggested equations only measure how many answers were right, not how that feat was accomplished, I don't even know how to properly respond...

It is possible that I didn't explain my point well. The problem I am referring to is your apparent insistence that there are things that machines can't do that people can and that this is insurmountable. Most of your subclaims are completely reasonable, but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong. Even today, that's not true by most definitions of those terms. Neural nets and genetic algorithms often don't do what they are told.

Replies from: GregFish
comment by GregFish · 2011-01-13T15:49:53.226Z · LW(p) · GW(p)

... but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong.

Only if you choose to discard any thought to how machines are actually built. There's no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they're told.

Neural nets and genetic algorithms often don't do what they are told.

Actually, they do precisely what they're told because without a fitness function which determines what problem they are to solve in their output and their level of correctness, they just crash the computer. Don't confuse algorithms that have very generous bounds and allow us to try different possible solutions to the same problem for some sort of thinking or initiative on the computer's part. And when computers do something weird, it's because of a bug which sends them pursuing their logic in ways programmers never intended, not because they decide to go off on their own.
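
For illustration, here is a minimal sketch of the role the fitness function plays: a mutation-only toy genetic algorithm whose only notion of "correct" is the fitness score it is handed. The bit-counting task, the population size and the mutation scheme are arbitrary choices for the example.

    import java.util.Arrays;
    import java.util.Random;

    // Toy evolutionary search: evolve 16-bit strings toward all ones.
    // The search wanders within generous bounds, but "better" is defined
    // entirely by the fitness function below.
    public class TinyGA {
        static final Random RNG = new Random(1);

        static int fitness(boolean[] genome) {            // what counts as correct
            int score = 0;
            for (boolean bit : genome) if (bit) score++;
            return score;
        }

        public static void main(String[] args) {
            int popSize = 20, length = 16;
            boolean[][] pop = new boolean[popSize][length];
            for (boolean[] g : pop)
                for (int i = 0; i < length; i++) g[i] = RNG.nextBoolean();

            for (int gen = 0; gen < 100; gen++) {
                boolean[][] next = new boolean[popSize][length];
                for (int k = 0; k < popSize; k++) {
                    boolean[] a = pop[RNG.nextInt(popSize)];  // tournament of two
                    boolean[] b = pop[RNG.nextInt(popSize)];
                    boolean[] child = Arrays.copyOf(fitness(a) >= fitness(b) ? a : b, length);
                    child[RNG.nextInt(length)] ^= true;       // flip one random bit
                    next[k] = child;
                }
                pop = next;
            }
            int best = 0;
            for (boolean[] g : pop) best = Math.max(best, fitness(g));
            System.out.println("best fitness after evolution: " + best + "/" + length);
        }
    }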

I can't tell you how many seemingly bizarre and ridiculous problems I've eventually tracked down to a bad loop, or a bad index value, or a missing symbol in a string...

Replies from: JoshuaZ, TheOtherDave
comment by JoshuaZ · 2011-01-13T20:07:56.683Z · LW(p) · GW(p)

Only if you choose to discard any thought to how machines are actually built. There's no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they're told.

There's no magic going on inside the two pounds of fatty tissue inside my skull either. Magic is apparently not required for creativity or initiative (whatever those may be).

Actually, they do precisely what they're told because without a fitness function which determines what problem they are to solve in their output and their level of correctness, they just crash the computer. Don't confuse algorithms that have very generous bounds and allow us to try different possible solutions to the same problem for some sort of thinking or initiative on the computer's part.

I'm confused by what you mean by "thinking" and "initiative." Let's narrow the field slightly. Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?

And when computers do something weird, it's because of a bug which sends them persuing their logic in whays programmers never intended, not because they decide to go off on their own.

Calling something a bug doesn't change the nature of what is happening. That's just a label. Humans are likely as smart as they are due to runaway sexual selection for intelligence. And then humans got really smart and realized that they could have all the pleasure of sex while avoiding the hassle of reproduction. Is the use of birth-control an example of human initiative or a bug? Does it make a difference?

Replies from: GregFish
comment by GregFish · 2011-01-14T13:43:47.477Z · LW(p) · GW(p)

Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?

Yes, but with a caveat. I could teach an ANN how to solve a problem but it would be more or less by random trial and error with a squashing function until each "neuron" has the right weight and activation function. So it will learn how to solve this generic problem, but it won't be because it traced its way along all the steps.

(Actually, I made a mistake in my previous reply: ANNs have no fitness function, that's a genetic algorithm. ANNs are given an input and a desired output.)

So if you develop a new definition or conjecture and can state why and how you did it, then develop a proof, you've shown thought. Your attempt to suddenly create a new definition or theorem just because you wanted to and were curious rather than just tasked to do it would be initiative.

Calling something a bug doesn't change the nature of what is happening. That's just a label.

No, you see, a bug is when a computer does something it's not supposed to do and handles its data incorrectly. Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don't have children have put their evolutionary desire to provide for themselves above the drive to reproduce and counter that urge with protected sex. So it's not really a bug as it is a solution to some of the problems posed by reproduction. Now, celibacy is something I'd call a bug and we know from many studies that it's almost always a really bad idea to forgo sex altogether. Mental health tends to suffer greatly.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-14T17:41:49.462Z · LW(p) · GW(p)

So if you develop a new definition or conjecture and can state why and how you did it, then develop a proof, you've shown thought. Your attempt to suddenly create a new definition or theorem just because you wanted to and were curious rather than just tasked to do it would be initiative.

Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative? Is a professional mathematician showing initiative? They keep thinking about math because that's what gives them positive feedback (e.g. salary, tenure, positive remarks from their peers).

No, you see, a bug is when a computer does something it's not supposed to do and handles its data incorrectly

Is "incorrectly" a normative or descriptive term? .How is it different than "this program didn't do what I expected it to do" other than that you label it a bug when the program deviates more from what you wanted to accomplish? Keep in mind that what a human wants isn't a notion that cleaves reality at the joints.

Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don't have children have put their evolutionary desire to provide for themselves above the drive to reproduce and counter that urge with protected sex. So it's not really a bug as it is a solution to some of the problems posed by reproduction.

Ok. So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

Replies from: GregFish
comment by GregFish · 2011-01-14T18:38:34.258Z · LW(p) · GW(p)

Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?

Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say it's not. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.

Is "incorrectly" a normative or descriptive term?

Yes. When you need it to return "A" and it returns "Finland," it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic after the bug manifests itself.

Keep in mind that what a human wants isn't a notion that cleaves reality at the joints.

Ok, when you build a car but the car doesn't start, I don't think you're going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bends to our whims. You're probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn't seem to be able to do so, there's a bug in the system.

So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

That's answered in the second sentence of the quote you chose...

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-14T20:07:34.141Z · LW(p) · GW(p)

Did he/she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say it's not. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.

Ok. Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math. Is there initiative?

Ok, when you build a car but the car doesn't start, I don't think you're going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bends to our whims. You're probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn't seem to be able to do so, there's a bug in the system.

It seems that a large part of the disagreement here comes from implicit premises. You seem to be focused on very narrow AI, when the entire issue is what happens when one doesn't have narrow AI but has AI with most of the capabilities that humans have. Let's set aside whether or not we should build such AIs and whether or not they are possible. Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

That's answered in the second sentence of the quote you chose...

Either there's a miscommunication here or there's a misunderstanding about how evolution works. An organism that puts its own survival over reproducing is an evolutionary dead end. Historically, lots of humans didn't want any children, but they didn't have effective birth control methods, so in the ancestral environment there was minimal evolutionary incentive to remove that preference. It has only been recently that there is widespread and effective birth control. So, what you've said is one evolved desire overriding another would still seem to be a bug.

Replies from: GregFish
comment by GregFish · 2011-01-14T23:37:18.040Z · LW(p) · GW(p)

Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math. Is there initiative?

Not sure. You could argue both points in this situation.

Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.

So, what you've said is one evolved desire overriding another would still seem to be a bug.

I suppose it would.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-16T15:37:18.354Z · LW(p) · GW(p)

Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.

Ah. In that case, there's actually very minimal disagreement.

comment by TheOtherDave · 2011-01-13T17:42:11.633Z · LW(p) · GW(p)

Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do?

I mean, there's a sense in which humans only do "what they've been told to do", also... we have programs embedded in DNA that manifest themselves in brains that construct minds from experience in constrained ways. (Unless you believe in some kind of magic free will in human minds, in which case this line of reasoning won't seem sensible to you.) But so what? Knowing that doesn't make humans harmless.

Replies from: jsalvatier, GregFish
comment by jsalvatier · 2011-01-13T17:56:49.036Z · LW(p) · GW(p)

Additionally, a big part of what SIAI types emphasize is that knowing very precisely and very broadly (at the same time) what humans want is very important. Human desires are very complex, so this is not a simple task.

comment by GregFish · 2011-01-14T13:48:16.529Z · LW(p) · GW(p)

Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do?

If you have no idea what you want your AI to do, why are you building it in the first place? I have never built an app that does, you know, anything and whatever. It'll just be a muddled mess that probably won't even compile.

we have programs embedded in DNA that manifest themselves in brains...

No we do not. This is not how biology works. Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All that DNA does is to regulate what proteins the cell will manufacture. Development goes well beyond that.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-14T14:48:10.496Z · LW(p) · GW(p)

If you have no idea what you want your AI to do, why are you building it in the first place?

I'm not sure how you got from my question to your answer. I'm not talking at all about programmers not having intentions, and I agree with you that in pretty much all cases they do have intentions.

I'll assume that I wasn't clear, rather than that you're willing to ignore what's actually being said in favor of what lets you make a more compelling argument, and will attempt to be clearer.

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.

At the same time, you admit that computer programs sometimes do things their programmers didn't intend for them to do. I might have written a stupid bug that causes the program to delete the contents of my hard drive, for example.

I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote.

But I don't see why that should be particularly reassuring. The fact remains that the contents of my hard drive are deleted, and I didn't want them to be. That I'm the one who told the program to delete them makes no difference I care about; far more salient to me is that I didn't intend for the program to delete them.

And the more a program is designed to flexibly construct strategies for achieving particular goals in the face of unpredictable environments, the harder it is to predict what it is that I'm actually telling my program to do, regardless of what I intend for it to do.

In other words: "I can't know what I'm telling it to do or be certain what I have told it to do."

Sure, once it deletes the files, I can (in principle) look back over the source code and say "Oh, I see why that happened." But that doesn't get me my files back.

Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All that DNA does is to regulate what proteins the cell will manufacture. Development goes well beyond that.

And yet, remarkably, brains don't "self-organize" in the absence of that regulation.

You're right, of course, that the correct environment is also crucial; DNA won't magically turn into a brain without a very specific environment in which to manifest.

Then again, source code won't magically turn into a running program without a very specific environment either, and quite a lot of the information defining that running program comes from the compiler and the hardware platform rather than the source code... and yet we have no significant difficulty equating a running program with its source code.

(Sure, sometimes bugs turn out to be in the compiler or the hardware, but even halfway competent programmers don't look there except as a matter of last resort. If the running program is doing something I didn't intend, it's most likely that the source code includes an instruction I didn't intend to give.)

Replies from: GregFish
comment by GregFish · 2011-01-14T18:44:18.399Z · LW(p) · GW(p)

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.

No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote. But I don't see why that should be particularly reassuring.

Oh no, it's not. I have several posts on my blog detailing how bugs like that could actually turn a whole machine army against us and turn Terminator into a reality rather than a cheesy robots-take-over-the-world-for-shits-and-giggles flick.

... and yet we have no significant difficulty equating a running program with its source code.

But the source code isn't like DNA in an organism. Source code covers so much more ground than that. Imagine having an absolute blueprint of how every cell cluster in your body will react to any stimuli through your entire life and every process it will undertake from now until your death, including how it will age. That would be source code. Your DNA is not even nearly that complete. It's more like a list of suggestions and blueprints for raw materials.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-14T19:08:01.267Z · LW(p) · GW(p)

No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

(shrug) OK, fair enough.

I agree with you that reward/punishment conditioning of software is a goofy idea.

I was reading your comment here to indicate that we can constrain the behavior of human-level AGIs by just putting appropriate constraints in the code. ("You don't want the machine to do something? Put in a boundry. [..] with a machine, you can just tell it not to do that.")

I think that idea is importantly wrong, which is why I was responding to it, but if you don't actually believe that then we apparently don't have a disagreement.

Re: source code... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise (which seems implicit in the idea of a human-level AGI, given that humans are capable of generating executable code), it isn't at all clear to me that its original source code comprises in any kind of useful way an absolute blueprint for how every part of it will react to any stimuli.

Again, sure, I'm not positing magic: whatever it does, it does because of the interaction between its source code and the environment in which it runs, there's no kind of magic third factor. So, sure, given the source code and an accurate specification of its environment (including its entire relevant history), I can in principle determine precisely what it will do. Absolutely agreed. (Of course, in practice that might be so complicated that I can't actually do it, but you aren't claiming otherwise.)

If you don't think the same is true of humans, then we disagree about humans, but I think that's incidental.

Replies from: GregFish
comment by GregFish · 2011-01-14T20:12:12.673Z · LW(p) · GW(p)

... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise

Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code; it just requires a routine to initialize a new set of ANN objects at runtime.
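
As a minimal sketch of that idea, the snippet below adds another network object at runtime using a class the program already contains; NeuralNet, learnNewSkill and the skill names are hypothetical placeholders, not an actual framework.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical placeholder for a trainable network; no new code is written
    // when another skill is needed, only another instance of this class.
    class NeuralNet {
        final String skill;
        final int inputs, outputs;
        NeuralNet(String skill, int inputs, int outputs) {
            this.skill = skill;
            this.inputs = inputs;
            this.outputs = outputs;
            // ... allocate weights here ...
        }
    }

    public class SkillManager {
        private final List<NeuralNet> skills = new ArrayList<>();

        // Called at runtime when a new task shows up; no recompilation involved.
        public NeuralNet learnNewSkill(String name, int inputs, int outputs) {
            NeuralNet net = new NeuralNet(name, inputs, outputs);
            skills.add(net);
            return net;                                   // caller would then train it
        }

        public static void main(String[] args) {
            SkillManager ai = new SkillManager();
            ai.learnNewSkill("parse-dates", 32, 8);
            ai.learnNewSkill("classify-images", 1024, 10);
            System.out.println("skills loaded: " + ai.skills.size());
        }
    }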

Replies from: TheOtherDave
comment by TheOtherDave · 2011-01-14T20:44:19.348Z · LW(p) · GW(p)

If it somehow follows from that that there's an absolute blueprint in it for how every part of it will react to any stimuli in a way that is categorically different from how human genetics specify how humans will respond to any environment, then I don't follow the connection... sorry. I have only an interested layman's understanding of ANNs.

comment by Zachary_Kurtz · 2011-01-12T19:16:32.921Z · LW(p) · GW(p)

"imagined by the author as a combination of whatever a popular science site reported"

I've heard this argument from non-singulatarians from time to time. It bothers me because of the problem of conservation of expected evidence. What are the blogger's priors for taking an argument seriously if the topic under discussion reminds him of something he's heard about in a pop-sci piece?

We all know that popular sci/tech reporting isn't the greatest, but if you have low confidence about SIAI-type AI and hearing about it reminds you of some secondhand pop reporting, then discounting it because of the medium that exposed you to it is not an argument! Especially if your priors about the likelihood of pop-sci reporting being accurate/useful are already low.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-01-12T19:20:36.053Z · LW(p) · GW(p)

I don't think that's what is meant by the phrase. I think the author is asserting that it seems to them that some of the stuff put out by the website shows the general trends one would expect if someone has learned about some idea from popularizations rather than the technical literature. If that is what the author is discussing then that is worrisome.

Replies from: GregFish, XiXiDu, Zachary_Kurtz
comment by GregFish · 2011-01-13T01:16:46.852Z · LW(p) · GW(p)

I think the author is asserting that it seems to them that some of the stuff put out by the website shows the general trends one would expect if someone has learned about some idea from popularizations rather than the technical literature.

Yes that is exactly what I meant. That might sound a little harsh, but that was my impression.

comment by XiXiDu · 2011-01-12T19:41:52.132Z · LW(p) · GW(p)

What might also be worrisome is that the two papers he seems to have read and associated with the SIAI are both not written by the SIAI.

Replies from: JoshuaZ, timtyler
comment by JoshuaZ · 2011-01-12T19:44:50.942Z · LW(p) · GW(p)

Yes, but in at least one of those cases (both cases?) the piece was recommended to him by a higher-up in the SIAI. So associating them with the SIAI in the weak sense that they reflect views connected to the Institute is not unreasonable. If that was the intended meaning, it is just very poor phrasing.

ETA: And regardless of those issues, that's a reflection of problems with the author, not necessarily a claim that defends the SIAI from the particular criticism in question.

comment by timtyler · 2011-01-13T20:57:37.775Z · LW(p) · GW(p)

What might also be worrisome is that the two papers he seems to have read and associated with the SIAI are both not written by the SIAI.

I think that is not correct. You said:

His latest post takes on Omohundro's "Basic AI Drives"

However, the link was to:

http://singinst.org/upload/ai-resource-drives.pdf

...not...

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

The former is written by Carl Shulman - who seems to be being credited with 4 recent SIAI publications here.

comment by Zachary_Kurtz · 2011-01-12T21:38:13.807Z · LW(p) · GW(p)

It's not clear to me, though this explanation seems plausible as well. Either way it's not good.

comment by timtyler · 2011-01-12T22:44:20.062Z · LW(p) · GW(p)

How can we be expected to build something we don’t understand and why should we possibly devote our time to building something intended to make us obsolete?

Machine intelligence 101 is required here, methinks.

Replies from: GregFish
comment by GregFish · 2011-01-13T01:19:16.145Z · LW(p) · GW(p)

Well, argue the points then. Anyone can make a pithy "oh, he doesn't know what he's talking about" and leave it at that. Go ahead, show your expertise on the subject. Of course you'd be showing it on a single out-of-context quote here...

Replies from: timtyler
comment by timtyler · 2011-01-13T09:48:53.866Z · LW(p) · GW(p)

You've laid out some of your positions on these topics in your blog. Alas, after reading them, I am not positively inclined towards engaging with you. I cited one for the purpose of illustrating your perspective to other readers.

Replies from: GregFish
comment by GregFish · 2011-01-13T15:53:03.732Z · LW(p) · GW(p)

So in other words, you're more of a hit-and-run-out-of-context kind of guy than someone who prefers to actually go further than a derisive little put-down and show that he actually understands the topic in enough depth to argue it?