"Robot scientists can think for themselves"

post by CronoDAS · 2009-04-02T21:16:22.682Z · LW · GW · Legacy · 11 comments

I recently saw this Reuters article on Yahoo News. In typical science reporting fashion, the headline seems to be pure hyperbole - does anyone here know enough to clarify what the groups referenced have actually achieved?

These links represent what I could find:

Homepage of the "Robot Scientist" project: http://www.aber.ac.uk/compsci/Research/bio/robotsci/

Homepage of Hod Lipson: http://www.mae.cornell.edu/lipson/

Hod Lipson's 2007 paper "Automated reverse engineering of nonlinear dynamical systems" (pdf)

11 comments

Comments sorted by top scores.

comment by jimrandomh · 2009-04-02T22:13:43.374Z · LW(p) · GW(p)

The Reuters article refers to two unrelated pieces of research. The first is about a laboratory robot (abstract) that automates some measurements, with software to analyze the results and decide what to measure next. Useful, but not really related to general-purpose AI.
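
For flavor, here is a minimal sketch of that kind of measure-analyze-decide loop: keep a pool of candidate hypotheses, run whichever measurement best discriminates among them, and discard the ones that mispredict the result. The toy linear model family and every threshold below are invented for illustration; this is not the actual Robot Scientist software, whose hypotheses are far richer, but the loop structure is the point.

```python
# Illustrative sketch only; not the Robot Scientist's actual software.
# A pool of candidate hypotheses is winnowed by always running the
# measurement that the surviving hypotheses disagree about most.
import random

def run_experiment(x):
    """Stand-in for the physical measurement being automated."""
    return 2.0 * x + 1.0 + random.gauss(0.0, 0.1)  # hidden true law

# Hypothetical model family: lines with a few candidate slopes/intercepts.
candidates = [lambda x, a=a, b=b: a * x + b
              for a in (1.0, 2.0, 3.0)
              for b in (0.0, 1.0, 2.0)]

inputs = [i / 10.0 for i in range(-20, 21)]
for _ in range(10):
    # Choose the input where the remaining hypotheses spread out most.
    x = max(inputs, key=lambda v: max(h(v) for h in candidates)
                                - min(h(v) for h in candidates))
    y = run_experiment(x)
    # Keep only hypotheses that predicted the new measurement well.
    survivors = [h for h in candidates if abs(h(x) - y) < 0.5]
    candidates = survivors or candidates
    if len(candidates) == 1:
        break

print(len(candidates), "hypothesis(es) remain")
```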

The other is about a method for fitting differential equations to time series. I'm sure it has some applications somewhere, but I don't see any obvious way to apply it to AI.
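
To give a sense of what that problem looks like (though not of the linked paper's method, which searches over symbolic model structures rather than fitting a fixed one): estimate derivatives from the series numerically, then regress them against a library of candidate terms. The toy system below is invented for illustration.

```python
# Toy flavor of fitting a differential equation to a time series.
# The linked paper evolves symbolic model structures; this sketch only
# does a least-squares fit over a fixed library of candidate terms.
import numpy as np

# Synthetic series from dx/dt = -0.5*x + 2.0 (treated as unknown below),
# with initial condition x(0) = 2, so x(t) = 4 - 2*exp(-0.5*t).
dt = 0.01
t = np.arange(0.0, 5.0, dt)
x = 4.0 - 2.0 * np.exp(-0.5 * t)

xdot = np.gradient(x, dt)                 # numerical derivative estimate
library = np.column_stack([np.ones_like(x), x, x ** 2])  # candidate terms
coeffs, *_ = np.linalg.lstsq(library, xdot, rcond=None)
print(dict(zip(["1", "x", "x^2"], np.round(coeffs, 3))))
# Expect roughly {'1': 2.0, 'x': -0.5, 'x^2': 0.0}
```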

Replies from: thomblake
comment by thomblake · 2009-04-03T18:01:02.604Z · LW(p) · GW(p)

but not really related to general-purpose AI.

Amongst people who actually build robots, it's generally understood that you don't get general-purpose AI by creating a 'general intelligence' and letting it run; it seems much more likely that we'll need a lot of small, task-specific systems that can work together.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-03T18:16:30.873Z · LW(p) · GW(p)

Amongst people who actually install air conditioners, it's generally understood that you get general-purpose AI by adding freon.

The roboticists I know don't claim to know how to build AGI. Why would they?

Replies from: thomblake
comment by thomblake · 2009-04-03T18:51:44.229Z · LW(p) · GW(p)

The roboticists I know don't claim to know how to build AGI. Why would they?

Because they read up on artificial intelligence, study philosophy of mind, and build systems that exhibit intelligent behavior. And unlike many people who claim to be AI researchers, they actually build working systems that seem to engage in learning, communication, and other intelligent behaviors.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T18:55:42.407Z · LW(p) · GW(p)

And unlike many people who claim to be AI researchers, they actually build working systems that seem to engage in learning, communication, and other intelligent behaviors.

Anecdotally, in my experience, artificial intelligence is something of a God of the Gaps for computer science: techniques that work are appropriated by others, relabelled, and put to work. Someone who claims to be an AI researcher is essentially saying "I am studying things that don't actually work yet".

This is probably related to the long "AI winter" caused by the collapse of hype.

Replies from: thomblake, Vladimir_Nesov
comment by thomblake · 2009-04-03T18:57:14.040Z · LW(p) · GW(p)

It should be noted that the "AI winter" is somewhat apocryphal, and a lot of the much-maligned techniques of GOFAI (or things similar to them) are being used to great effect in small chunks that work together.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-04-03T19:02:13.779Z · LW(p) · GW(p)

Yes, but how often do you hear those GOFAI techniques described as AI except in AI textbooks?

Speaking of which, I have a copy of Russell and Norvig's AIMA on my desk right now, and in fact I should probably be spending more time doing exercises from it and less time posting on LW...

comment by Vladimir_Nesov · 2009-04-03T19:06:13.721Z · LW(p) · GW(p)

Not so; there are lots of problems in CS that you can't naturally label as AI problems. If you go in the opposite direction, saying that AI by definition solves all problems, then you can say that whatever unsolved problem you are working on is actually a special case of AI. But that's a pretty empty claim.

comment by Vladimir_Nesov · 2009-04-03T18:22:07.516Z · LW(p) · GW(p)

You are channeling too much certainty through an appeal to authority. We are too far from seeing the solution to describe its form in detail, much less to settle the question by deferring to popular perception.

Replies from: thomblake
comment by thomblake · 2009-04-03T18:53:17.874Z · LW(p) · GW(p)

You are channeling too much certainty through an appeal to authority. We are too far from seeing the solution to describe its form in detail, much less to settle the question by deferring to popular perception.

Rather in line with my point. Claiming that this is not really related to general-purpose AI, when the people who build the closest things we have to thinking machines would disagree, did not seem warranted. I was showing that the statement was without merit because informed people think otherwise.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-04-03T19:01:43.513Z · LW(p) · GW(p)

Err, my point is obviously that AI researchers are too far from seeing the solution for their opinion to count as anything approaching certainty. This is to point out the false connotation of your original comment. Not to mention that there is actually no consensus among the experts, which makes your statement factually mistaken as well.