LINK: Bostrom & Yudkowsky, "The Ethics of Artificial Intelligence" (2011)

post by lukeprog · 2011-02-27T17:43:57.659Z · 9 comments

Just noticed that Less Wrong has apparently not yet linked to Bostrom & Yudkowsky's new paper for the forthcoming Cambridge Handbook of Artificial Intelligence, entitled "The Ethics of Artificial Intelligence." Enjoy.

9 comments

Comments sorted by top scores.

comment by timtyler · 2011-02-27T18:48:18.380Z

I searched for "friendly". No matches were found! The document is not "friendly"-friendly!

comment by Manfred · 2011-02-27T23:29:45.481Z

It says forthcoming, so I'll give in to my urge to nitpick.

Page 6: Will the audience be familiar with the term "instrumental rationality"?

Possible typo on page 10: "the idea whole brain emulation," should be "the idea of whole brain emulation."

Controversial point that could be omitted on page 15: the claim that a truly desirable outcome means no conflict at all. The main point could be retained if it were changed to the less controversial claim that in a truly desirable outcome there would be no stories about saving the world against all odds.

Those are the only three things I noticed; overall, an excellent paper.

comment by DanielVarga · 2011-02-27T21:47:37.568Z

I think the "Minds with Exotic Properties" section only scratches the surface of exotic properties. It mostly deals with subjective rate of time and reproduction, two phenomena we already have quite good metaphors for. I think the point where human analogies really start to break down is when we start to talk about merging minds.

I will not elaborate here, just link to this short comment of mine, and to my agreeing reply to this comment by Johnicholas in a somewhat different context. There I used the term Individualism Bias for what, in my opinion, this Bostrom-Yudkowsky section is a case of. Maybe they simply didn't want to exceed the Shock Level of the Cambridge Handbook editors. But it is interesting to contrast sci-fi with philosophy: sci-fi does not have this blind spot; merging minds are almost a cliché there.

Replies from: Perplexed
comment by Perplexed · 2011-02-27T23:05:43.072Z

I will not elaborate here ...

I would like to see thoughts along these lines elaborated and discussed somewhere.

comment by Richard_Kennaway · 2011-02-28T11:51:20.195Z

The paper says:

Occasionally a good new idea in ethics comes along, and it comes as a surprise

I'd be interested to know what examples the authors had in mind.

comment by gwern · 2011-02-27T19:05:23.135Z

The abstract:

"The possibility of creating thinking machines raises a host of ethical issues. These questions relate both to ensuring that such machines do not harm humans and other morally relevant beings, and to the moral status of the machines themselves. The first section discusses issues that may arise in the near future of AI. The second section outlines challenges for ensuring that AI operates safely as it approaches humans in its intelligence. The third section outlines how we might assess whether, and in what circumstances, AIs themselves have moral status. In the fourth section, we consider how AIs might differ from humans in certain basic respects relevant to our ethical assessment of them. The final section addresses the issues of creating AIs more intelligent than human, and ensuring that they use their advanced intelligence for good rather than ill."

(I skimmed through the paper. It's nice, but I didn't see anything that struck me as particularly novel.)

Replies from: lukeprog
comment by lukeprog · 2011-02-27T19:32:54.943Z

Yeah, like Chalmers' paper, it's a survey article.

comment by timtyler · 2011-02-27T18:54:17.153Z

It may be a form of good-story bias to ask, "Will AIs be good or evil?" as if trying to pick a premise for a movie plot. The reply should be, "Exactly which AI design are you talking about?"

A common reply to that might be: "the one(s) we are most likely to get".

comment by timtyler · 2011-02-27T18:37:56.429Z

Thanks. My guess would be:

  • Ethics in Machine Learning and Other Domain-Specific AI Algorithms: Yudkowsky

  • Artificial General Intelligence: Yudkowsky

  • Machines with Moral Status: Bostrom

  • Minds with Exotic Properties: Bostrom

  • Superintelligence: Yudkowsky

I am not so sure about the abstract and conclusion.