Lessons from Isaac: Poor Little Robbie

post by adamShimi · 2020-03-14T17:14:56.438Z · LW · GW · 8 comments

Every so often, when explaining issues related to AI safety, I call on good old Asimov. That's easy: almost everyone that is at least interested in science knows his name, and the Three Laws of Robotics are a very good example of misspecified goal. Or are they?

The truth is: I don't know. My last reading through Asimov's robots dates back ten years; it was in french; and I didn't know anything about AI safety, specification and many parts of my current mental scaffolding. So when I use Asimov for my points now, I'm not sure whether I'm spouting bullshit or not.

Fortunately, the solution is simple, for once: I just have to read the goddamn stories. And since I'm not the only one I heard talking about Asimov in this context, I thought that a sequence on the robots stories would prove useful.

My first stop is by "I,Robot", the first robot short story collection. And it starts with the first story published by Asimov, "Robbie".

Basically, this Robbie is a robot that takes care of a little girl named Gloria. All is well, until Gloria's mother turns into the bad guy, and decides that her girl should not be raised by a machine. She harasses her weak husband until he accepts to get rid of Robbie. But when Gloria discovers the loss of her friend, nothing can comfort her. The parents try everything, including a trip to New York, paradise to suburbians. But nope, the girl is still heartbroken. Last try of the father: a visit to a factory manned by robots, so little Gloria can see that they are lifeless machines, not real people. But, tada! Robbie was there! And he even saves the girl from an oncoming truck! It's revealed that the father planned it (Robbie being there, not the murder attempt on his daughter), but even so, the mother can't really send back the savior of her little girl. The End.

Just a simple story about a nice little robot beloved by a girl, and the machinations of her mother to "protect" her from him. What's not to love? It's straight to the point, nicely written, and, if you can gloss over the obvious sexism, quite enjoyable.

How does it hold in terms of AI safety discussion? Well, let Mr Weston, the father, give it to us:

'Nonsense', Weston denied, with an involuntary nervous shiver. 'That's completely ridiculous. We had a long discussion at the time we bought Robbie about the First Law of Robotics. You know it's impossible for a robot to harm a human being; that long before enough can go wrong to alter that First Law, a robot would be completely inoperable. It's a mathematical impossibility. Besides I have an engineer from US Robots here twice a year to give the poor gadget a complete overhaul. Why, there's no more chance of anything at all going wrong with Robbie than there is of you or I suddenly going looney -- considerably less, in fact. [...]'

That was underwhelming.

See, Robbie is a human in a tin wrapping. Even worse, he's a human with a perfect temper, that never really gets mad at the girl. For example, here:

And Robbie cowered, holding his hands over his face, so that she had to add, 'No, I won't, Robbie. I won't spank you.[...]'

and here:

But Robbie was hurt at the unjust accusation, so he seated himself carefully and shook his head ponderously from side to side.

Nowhere do I see the kind of AI we're all thinking about -- an AI that does not hate you, but does not love you either. Robbie loves you. At least Gloria. And this sidesteps pretty much every issue of AI safety.

To be fair with old Isaac, the point of this story is clearly to counter the paranoia about robots and machines. An anti-terminator, if you wish. And it works decently on that front. Robbie is always nice with Gloria -- he even saves her at the end. He's one of the characters with which we have more empathy. And the only bad guys are the mother, and the robophobic neighbors.

This would be okay, if it did not wrap a wrong assumption: robots are safe and the only issue comes from the nasty humans. Whereas what we want people to understand is that robots and AIs are not unsafe because they don't do what we tell them to do, but because they do exactly that.

What about the First Law, you may ask? After all, it was mentioned in the quote above. Well, that mention is all we get in this story. To find the actual Law (yes, I know it, and so do you, but let's assume an innocent reader), you have to look at the first page of the book:

1- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

That's what I'm talking about! I've come looking for Laws breaking up, not anti-discrimination against non-existent robots. I assume these are treated in the next stories. After all, there are three Laws of Robotics, and only one is mentioned -- not even written -- here. I'll reserve my judgement until all the stories are in. But still, don't try to pull another Robbie on me, Asimov.

8 comments

Comments sorted by top scores.

comment by Pattern · 2020-03-19T22:44:52.107Z · LW(p) · GW(p)

Can you imagine a story about a machine that cares about humans ending badly?

comment by wizzwizz4 · 2020-03-14T18:58:12.759Z · LW(p) · GW(p)

Perhaps it might be better if you skipped over the books that were "pulling another Robbie". This post is basically "the story doesn't teach us anything useful".

Replies from: adamShimi
comment by adamShimi · 2020-03-14T19:04:39.332Z · LW(p) · GW(p)

True. Do you think I should still list and quickly explain the stories that are "useless" for this point someplace?

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-03-14T19:21:57.611Z · LW(p) · GW(p)

Yes, I think that would be good. Perhaps you could make it a draft epilogue, and add an entry to it every time you've got nothing really to write about a story. And if your quick summary of why it's useless starts getting too big for the list, you can always split it off into a separate post.

This has the potential to be a good series (but I must say it's a terrible start! :-p).

Replies from: adamShimi
comment by adamShimi · 2020-03-14T19:34:16.823Z · LW(p) · GW(p)

Hum, good idea. At least it can't get worse. ^^

Replies from: wizzwizz4
comment by wizzwizz4 · 2020-04-14T16:40:48.955Z · LW(p) · GW(p)

With recent events, you might not have been able to write more of these. Are you still planning to? I'd really like to read them.

Replies from: adamShimi, adamShimi
comment by adamShimi · 2020-05-09T10:08:54.750Z · LW(p) · GW(p)

Just so you know, the next one is posted. ;)

comment by adamShimi · 2020-04-14T17:25:18.878Z · LW(p) · GW(p)

Thanks for the comment! My lateness to write the next installment is more related to having a lot of research work and study to do (as well as preparing a job interview), but I already have a draft of the second post. And this time, the short story has loads of ideas related to AI safety in non trivial ways. ;)

I should be able to post it around the end of this week.