Comments

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T23:37:18.040Z · LW · GW

Now, if said grad student did come to the thesis adviser, but their motivation was that they've been taught from a very young age that they should do math, is there initiative?

Not sure. You could argue both points in this situation.

Assuming that such entities are possible, do you or do you not think there's a risk of the AI getting out of control?

Any AI can get out of control. I never denied that. My issue is with how that should be managed, not whether it can happen.

So, what you've said is that one evolved desire overriding another would still seem to be a bug.

I suppose it would.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T23:27:33.607Z · LW · GW

Oh fun, we're talking about my advisers' favorite topic! Yeah, strong natural language processing is a huge pain, and if we had devices that understood human speech well, tech companies would jump on that ASAP.

But here's the thing: if you want natural language processing, why build a Human 2.0? Why not just build the speech recognition system? Making an AGI for something like that would be the equivalent of building a 747 to fly one person across a state. I can see various expert systems coming together as an AGI, but not starting out as such.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T23:21:35.947Z · LW · GW

Sounds like a logical conclusion to me...

I still have a lot of questions about the details, but I'm starting to see what I was after: consistent, objective definitions I can work with and relate to my experience with computers and AI.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T20:12:12.673Z · LW · GW

... if we're talking about code that is capable of itself generating executable code as output in response to situations that arise

Again, it really shouldn't be doing that. It should have the capacity to learn new skills and build new neural networks to do so. That doesn't require new code; it just requires a routine that initializes a new set of ANN objects at runtime.
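Something like this, as a bare-bones sketch (the class and method names are hypothetical and the training internals are left out): the compiled program never writes new code for itself; when a new skill comes along, it just constructs another network object from a class that already exists.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical stand-in for a trainable network; weights would live here.
    class NeuralNetwork {
        NeuralNetwork(int inputs, int hidden, int outputs) {
            // allocate and randomly initialize weights for the given topology
        }
        void train(double[][] examples, double[][] targets) {
            // backpropagation (or any other training routine) would go here
        }
    }

    class SkillManager {
        private final Map<String, NeuralNetwork> skills = new HashMap<>();

        // Called at runtime whenever an unfamiliar task shows up: no new
        // executable code is generated, only a new object of an existing class.
        NeuralNetwork learnNewSkill(String name, int in, int hidden, int out) {
            NeuralNetwork net = new NeuralNetwork(in, hidden, out);
            skills.put(name, net);
            return net;
        }
    }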

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T20:09:18.643Z · LW · GW

Just as my desktop computer no longer functions by the rules of a DRAM.

It never really did. DRAM is just a way to keep bits in memory for processing. What's going on under the hood of any computer hasn't changed at all; it's just grown vastly more complex and allowed us to do much more intricate and impressive things with the same basic ideas. The first computer ever built and today's machines function by the same rules; it's just that the latter are given the tools to do so much more with them.

And as JoshuaZ explains, it is something that does everything intellectual that a human can do, only faster and better.

But machines already do most of the things humans do faster and better, except for creativity and pattern recognition. Does that mean the first AI will be superhuman by default as soon as it encompasses the whole human realm of abilities?

Many people think that such an AI, doing every last one of those things at superhuman speed, would be transformative.

At the very least it would be informative and keep philosophers ruminating on the whole "what does it mean to be human" thing.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T18:48:36.068Z · LW · GW

Um... we already do all that to a pretty high extent and we don't need general intelligence in every single facet of human ability to do that. Just make it an expert in its task and that's all you need.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T18:46:11.307Z · LW · GW

the relevant dimension of intelligence is something like "ability to design and examine itself similarly to its human designers".

Ok, I'll buy that. I would agree that any system that could be its own architect and hold meaningful design and code review meetings with its builders would qualify as human-level intelligent.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T18:44:18.399Z · LW · GW

You keep suggesting that there's no reason to worry about how to constrain the behavior of computer programs, because computer programs can only do what they are told to do.

No, I just keep saying that we don't need to program them to "like rewards and fear punishments" and train them like we'd train dogs.

I agree completely that, in doing so, it is merely doing what I told it to do: I'm the one who wrote that stupid bug, it didn't magically come out of nowhere, the program doesn't have any mysterious kind of free will or anything. It's just a program I wrote. But I don't see why that should be particularly reassuring.

Oh no, it's not. I have several posts on my blog detailing how bugs like that could actually turn a whole machine army against us and turn Terminator into a reality rather than a cheesy robots-take-over-the-world-for-shits-and-giggles flick.

... and yet we have no significant difficulty equating a running program with its source code.

But the source code isn't like DNA in an organism. Source code covers so much more ground than that. Imagine having an absolute blueprint of how every cell cluster in your body will react to any stimulus through your entire life, and every process it will undertake from now until your death, including how it will age. That would be source code. Your DNA is nowhere near that complete. It's more like a list of suggestions and blueprints for raw materials.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T18:38:34.258Z · LW · GW

Hmm, so would a grad student who is thinking about a thesis problem because their advisor said to think about it be showing initiative?

Did he or she volunteer to work on a problem and come to the advisor saying that this is the thesis subject? Doesn't sound like it, so I'd say it's not. Initiative is doing something that's not required, but something you feel needs to be done or something you want to do.

Is "incorrectly" a normative or descriptive term?

Yes. When you need it to return "A" and it returns "Finland," it made a mistake which has to be fixed. How it came to that mistake can be found by tracing the logic after the bug manifests itself.

Keep in mind that what a human wants isn't a notion that cleaves reality at the joints.

Ok, when you build a car but the car doesn't start, I don't think you're going to say that the car is just doing what it wants and we humans are just selfishly insisting that it bend to our whims. You're probably going to take that thing to a mechanic. Same thing with computers, even AI. If you build an AI to learn a language and it doesn't seem to be able to do so, there's a bug in the system.

So when someone (and I know quite a few people in this category) deliberately uses birth control because they want the pleasure of sex but don't want to ever have kids, is that a bug in your view?

That's answered in the second sentence of the quote you chose...

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T18:31:52.763Z · LW · GW

Thank you for the unnecessary tutorial. But actually, what I said is that a super-human AI might be something like a very large neural net.

No, actually I think the tutorial was necessary, especially since what you're basically saying is that something like a large enough neural net will no longer function by the rules of an ANN. If it doesn't, how does it learn? It would simply spit out random outputs without having some sort of direct guidance.

More will go on in a future superhuman AI than goes on in any present-day toy AI.

And again I'm trying to figure out what the "superhuman" part will consist of. I keep getting answers like "it will be faster than us" or "it'll make correct decisions faster", and I once again point out that computers already do that on a wide variety of specific tasks, which is why we use them...

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T14:11:49.571Z · LW · GW

... the words intelligent and intelligence in this context and simply refer to a computer capable of doing at least everything a regular person can do.

But we already have things capable of doing everything a regular person can do. We call them regular people. Are we trying to build another person in digital format here, and if so, why? Just because we want to see if we can? Or because we have some big plans for it?

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T14:07:08.614Z · LW · GW

Hey, if people choose to downvote my replies, either because they disagree or just plain don't like me, that's their thing. I'm not all that easy to scare with a few downvotes... =)

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T13:53:38.774Z · LW · GW

I don't think this is a good argument. Just because you cannot define something doesn't mean it's not a real phenomenon or that you cannot reason about it at all.

If you have no working definition for what you're trying to discuss, you're more than likely barking up the wrong tree about it. We didn't understand fire completely, but we knew that it was hot, that you couldn't touch it, and that you made it by rubbing dry sticks together really, really fast, or by striking rocks to make a spark and letting it land on dry straw.

Also, where did I say that until I get a definition of intelligence all discussion about the concept is meaningless? I just want to know what criteria an AI must meet to be considered human-level and match them against what we have so far, so I can see how far we might be from those benchmarks. I think it's a perfectly reasonable way to go about this kind of discussion.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T13:48:16.529Z · LW · GW

Can you clarify how it's helpful to know that my machine only does what it's been told to do, if I can't know what I'm telling it to do or be certain what I have told it to do?

If you have no idea what you want your AI to do, why are you building it in the first place? I have never built an app that does, you know, anything and whatever. It'll just be a muddled mess that probably won't even compile.

we have programs embedded in DNA that manifest themselves in brains...

No, we do not. That is not how biology works. Brains are self-organizing structures built by a combination of cellular signals and environmental cues. All DNA does is regulate what proteins the cell will manufacture. Development goes well beyond that.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T13:43:47.477Z · LW · GW

Would the ability to come up with new definitions and conjectures in math be an example of thinking and initiative?

Yes, but with a caveat. I could teach an ANN how to solve a problem, but it would be more or less by random trial and error with a squashing function until each "neuron" has the right weight and activation threshold. So it will learn how to solve this generic problem, but it won't be because it traced its way along all the steps.

(Actually, I made a mistake in my previous reply: ANNs have no fitness function; that's a genetic algorithm. ANNs are given an input and a desired output.)

So if you develop a new definition or conjecture, can state why and how you did it, and then develop a proof, you've shown thought. Creating a new definition or theorem just because you wanted to and were curious, rather than because you were tasked to do it, would be initiative.

Calling something a bug doesn't change the nature of what is happening. That's just a label.

No, you see, a bug is when a computer does something it's not supposed to do and handles its data incorrectly. Birth control is actually another approach to reproduction most of the time, delaying progeny until we feel ready to raise them. Those who don't have children have put their evolutionary desire to provide for themselves above the drive to reproduce, and they counter that urge with protected sex. So it's not so much a bug as a solution to some of the problems posed by reproduction. Now, celibacy is something I'd call a bug, and we know from many studies that it's almost always a really bad idea to forgo sex altogether. Mental health tends to suffer greatly.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-14T13:34:06.487Z · LW · GW

Other people imagine something like a neural net containing more 'neurons' than the human brain - a device which is born with little more hardwired programming than the general guidance...

That's not what an artificial neural net actually is. When training your ANN, you give it an input and tell it what the output should be. Then, using a method called backpropagation, you tell it to adjust the weights and activation thresholds of each neuron object until it can match that output. So you're not just telling it to learn; you're telling it what the problem is and what the answer should be, then letting it find its way to the solution. Then you apply what it learned to real-world problems.
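If it helps, here's a toy version of that training loop: a single sigmoid "neuron" rather than a whole net, with the OR function as a target I picked purely for illustration. We hand it the inputs and the answers, and the weight updates (plain gradient descent on one unit) just chase the answers we typed in.

    public class TinyAnnDemo {
        public static void main(String[] args) {
            double[][] inputs  = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
            double[]   targets = {0, 1, 1, 1};          // logical OR, chosen by us
            double w1 = 0.1, w2 = -0.2, bias = 0.0;     // arbitrary starting weights
            double rate = 0.5;                          // learning rate

            for (int epoch = 0; epoch < 10000; epoch++) {
                for (int i = 0; i < inputs.length; i++) {
                    double sum = w1 * inputs[i][0] + w2 * inputs[i][1] + bias;
                    double out = 1.0 / (1.0 + Math.exp(-sum));  // squashing function
                    double delta = (targets[i] - out) * out * (1 - out);
                    w1   += rate * delta * inputs[i][0];        // nudge the weights toward
                    w2   += rate * delta * inputs[i][1];        // the answers we supplied
                    bias += rate * delta;
                }
            }

            for (int i = 0; i < inputs.length; i++) {
                double sum = w1 * inputs[i][0] + w2 * inputs[i][1] + bias;
                double out = 1.0 / (1.0 + Math.exp(-sum));
                System.out.printf("%.0f OR %.0f -> %.3f%n",
                        inputs[i][0], inputs[i][1], out);
            }
        }
    }

Nothing in there decides anything on its own; it converges toward whatever targets we put in the table.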

Again, those other people you mention seem to think that a lot more is going on in an AI system than is actually going on.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T15:53:03.732Z · LW · GW

So in other words, you're more of a hit-and-run-out-of-context kind of guy than someone who prefers to actually go further than a derisive little put-down and show that he understands the topic in enough depth to argue it?

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T15:49:53.226Z · LW · GW

... but the overarching premise that machines can only do what they are programmed to seems to show up in both pieces, and is simply wrong.

Only if you choose to discard any thought of how machines are actually built. There's no magic going on in that blinking box, just circuits performing the functions they were designed to do in the order they're told.

Neural nets and genetic algorithms often don't do what they are told.

Actually, they do precisely what they're told, because without a fitness function, which determines what problem they're supposed to solve and how correct their output is, they just crash the computer. Don't mistake algorithms that have very generous bounds, and that let us try different possible solutions to the same problem, for some sort of thinking or initiative on the computer's part (there's a bare-bones sketch at the end of this comment). And when computers do something weird, it's because of a bug that sends them pursuing their logic in ways programmers never intended, not because they decide to go off on their own.

I can't tell you how many seemingly bizarre and ridiculous problems I've eventually tracked down to a bad loop, or a bad index value, or a missing symbol in a string...
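To pin down what I mean about the fitness function being where the bounds come from, here's a stripped-down sketch: not a full genetic algorithm, just a single-parent hill climber, with a target bit string and fitness measure I made up purely for illustration. Whatever the search "discovers," it's only ever judged against the yardstick we wrote.

    import java.util.Arrays;
    import java.util.Random;

    public class TinyEvolutionDemo {
        static final int[] TARGET = {1, 0, 1, 1, 0, 0, 1, 0};  // goal defined by us
        static final Random RNG = new Random();

        // The fitness function: how many bits match the target we chose.
        static int fitness(int[] candidate) {
            int score = 0;
            for (int i = 0; i < TARGET.length; i++) {
                if (candidate[i] == TARGET[i]) score++;
            }
            return score;
        }

        // Mutation: flip one randomly chosen bit.
        static int[] mutate(int[] parent) {
            int[] child = parent.clone();
            int flip = RNG.nextInt(child.length);
            child[flip] = 1 - child[flip];
            return child;
        }

        public static void main(String[] args) {
            int[] best = new int[TARGET.length];  // start from all zeros
            while (fitness(best) < TARGET.length) {
                int[] child = mutate(best);
                if (fitness(child) >= fitness(best)) {
                    best = child;  // keep whichever scores better on OUR fitness test
                }
            }
            System.out.println("Evolved: " + Arrays.toString(best));
        }
    }

Change the fitness function and the same loop "wants" something completely different, which is the whole point: the goal lives in our code, not in the machine's initiative.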

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T15:36:14.560Z · LW · GW

It centers around what happens once machines have human level intelligence.

As defined by... what, exactly? We have problems measuring our own intelligence, or even defining it, so we're giving computers a very wide sliding scale of intelligence based on personal opinions and ideas more than a rigorous examination. A computer today could ace just about any general knowledge test we give it if we tell it how to search for an answer or compute a problem. Does that make it as intelligent as a really academically adept human? Oh, and it can do it in a tiny fraction of the time it would take us. Does that make it superhuman?

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T15:31:33.168Z · LW · GW

Fish seemed to be implying that it wasn't.

Absolutely not. If you take another look, I argue that it's unnecessary. You don't want the machine to do something? Put in a boundary. You don't have the option of turning off a lab rat's desire to search a particular corner of its cage with the press of a button, so all you can do is put in some deterrent. But with a machine, you can just tell it not to do that. For example, this Java method would refuse to add two even numbers if it receives them:

    public int add(int a, int b) {
        // refuse to add two even numbers; flag it with -1 instead
        if (a % 2 == 0 && b % 2 == 0) {
            return -1;
        }
        return a + b;
    }

So why do I need to build an elaborate circuit to "reward" the computer for not adding even numbers? And why would it suddenly decide to override the condition? Just to see why? If I wanted it to experiment, I'd just give it fewer bounds.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T01:24:26.562Z · LW · GW

My intention for linking to it was not that I thought it featured good arguments...

Gee, thanks. So you basically linked and replied as a form of damage control? And by the way, the "outsiders' perception" isn't helped when the "insiders'" arguments seem to be based not on what computers actually do, but what they're made to do in comic books.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T01:19:16.145Z · LW · GW

Well, argue the points then. Anyone can make a pithy "oh, he doesn't know what he's talking about" and leave it at that. Go ahead, show your expertise on the subject. Of course you'd be showing it on a single out-of-context quote here...

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T01:16:46.852Z · LW · GW

I think the author is asserting that it seems to them that some of the stuff put out by the website shows the general trends one would expect if someone has learned about some idea from popularizations rather than the technical literature.

Yes that is exactly what I meant. That might sound a little harsh, but that was my impression.

Comment by GregFish on Link: why training a.i. isn’t like training your pets · 2011-01-13T01:14:07.672Z · LW · GW

Wow. If that's all you got from a post trying to explain the very real difference between acing an intelligence test by figuring things out on your own and having a machine do the same after you give it all the answers, and how the suggested equations only measure how many answers were right, not how that feat was accomplished, then I don't even know how to properly respond...

Oh, and by the way, in the comments I suggest to Dr. Legg a way to keep track of whether the machine is doing some learning and figuring things out on its own, so there's another thing to consider. And yes, I've had the formal instruction in discrete math to do so.