Rodney Brooks talks about Evil AI and mentions MIRI [LINK]

post by ike · 2014-11-12T04:50:23.828Z · LW · GW · Legacy · 7 comments

Rodney Brooks says that "evil" AI is not a big problem:
http://www.rethinkrobotics.com/artificial-intelligence-tool-threat/

7 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2014-11-12T10:04:31.243Z · LW(p) · GW(p)

The MIRI mention:

Just how open the question of time scale for when we will have human level AI is highlighted by a recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI. But in this more sober report, the authors analyze 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made. To me that says that no one knows, they just guess, and historically so far most predictions have been outright wrong!

Replies from: Punoxysm
comment by Punoxysm · 2014-11-12T22:22:43.043Z · LW(p) · GW(p)

Do you feel that is a fair summary of your report?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-11-13T04:04:36.052Z · LW(p) · GW(p)

Yeah.

comment by Punoxysm · 2014-11-12T22:21:30.894Z · LW(p) · GW(p)

He is, perhaps, a little glib. And I would not dismiss the possibility of some left-field breakthrough in the next 25 years that brings us close to AI.

But other than that I agree with most of his statements. We are fundamental leaps away from understanding how to create strong AI. Research on safety is probably mostly premature. Worrying about existing projects, like Google's, having the capacity to be dangerous is nonsensical.

Replies from: torekp
comment by torekp · 2014-11-13T17:29:06.352Z · LW(p) · GW(p)

I place most of my probability weighting on far-future AI too, but I would not endorse Brooks's call to relax. There is a lot of work to be done on safety, and the chances of successfully engineering safety go up if work starts early. Granted, much of that work needs to wait until it is clearer which approaches to AGI are promising. But not all.

comment by jessicat · 2014-11-13T05:06:06.443Z · LW(p) · GW(p)

Well, he's right that intentionally evil AI is highly unlikely to be created:

Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.

which happens to be the exact reason why Friendly AI is difficult. He doesn't directly address things that don't care about humans, like paperclip maximizers, but some of his arguments can be applied to them.

Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely.

He's totally right that AGI with intentionality is an extremely difficult problem. We haven't created anything that comes even close to practically approximating Solomonoff induction across a variety of situations, and Solomonoff induction is itself insufficient for the kind of intentionality you would need to build something that cares about universe states while being able to model the universe in a flexible manner. But you can throw more computation power at a lot of problems to get better solutions, and I expect approximate Solomonoff induction to become practical in limited ways as computation power increases and moderate algorithmic improvements are made. This is true partly because greater computation power allows one to search for better algorithms.
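
For concreteness, the standard formulation (with U a universal prefix machine and |p| the length of program p in bits) weights every program whose output begins with the observed string x:

M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

Prediction of the next symbol b then uses the conditional M(b \mid x) = M(xb) / M(x). Since M is only lower-semicomputable, no physical system can evaluate it exactly; "approximating Solomonoff induction" means computable approximations to this sum.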

I do agree with him that human-level AGI within the next few decades is unlikely and that significantly slowing down AI research is probably not a good idea right now.

comment by Gunnar_Zarncke · 2014-11-12T08:41:40.613Z · LW(p) · GW(p)

I think the key points (or misunderstandings) of the post can be seen in these quotes:

OK, so what about connecting an IBM Watson like understanding of the world to a Roomba or a Baxter? No one is really trying as the technical difficulties are enormous, poorly understood, and the benefits are not yet known.

and

Expecting more computation to just magically get to intentional intelligences, who understand the world is similarly unlikely. And, there is a further category error that we may be making here. That is the intellectual shortcut that says computation and brains are the same thing. Maybe, but perhaps not.

These quotes seem to indicate that Brooks doesn't look past 'linear' scaling and regards composition effects as far off.

I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

Apparently he extrapolates his own specialty into the future.