The ethics of AI for the Routledge Encyclopedia of Philosophy

post by Stuart_Armstrong · 2020-11-18T17:55:49.952Z · LW · GW · 8 comments

I've been tasked by the Routledge Encyclopedia of Philosophy to write their entry on the ethics of AI.

I'll be starting the literature reviews and similar in the coming weeks. Could you draw my attention to any aspect of AI ethics (including the history of AI ethics) that you think is important and deserves to be covered?

Cheers!

8 comments


comment by Kaj_Sotala · 2020-11-22T17:12:59.205Z · LW(p) · GW(p)

You probably know of these already, but just in case: lukeprog wrote a couple of articles on the history of AI risk thought [1, 2] going back to 1863. There's also the recent AI ethics article in the Stanford Encyclopedia of Philosophy.

I'd also like to imagine that my paper on superintelligence and astronomical suffering might say something that someone might consider important, but that is of course a subjective question. :-)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-11-23T13:42:28.285Z · LW(p) · GW(p)

Kiitos! (Thanks!)

comment by Aryeh Englander (alenglander) · 2020-11-18T18:43:09.325Z · LW(p) · GW(p)

Also: particular papers that you think are important, especially ones that might be harder to find in a quick literature search. I'm part of an AI Ethics team at work, and I would like to find out about these as well.

comment by Charlie Steiner · 2020-11-19T00:08:36.366Z · LW(p) · GW(p)

Congrats! Here are my totally un-researched first thoughts:

Pre-1950 history: Speculation about artificial intelligence (if not necessarily in the modern sense) dates back a very long way: the brazen head, Frankenstein, the Mechanical Turk, the robots of R.U.R. Nearly all of it treats the artificial intelligence as essentially human (the brazen head mythology is perhaps closer to deals with djinni or devils), and all of it, together with more modern science fiction, can be lumped into a pile labeled "exposes and shapes human intuition, but not very serious".

Automating the work of logic was of interest to logicians such as Hilbert, Peirce, and Frege, so there might be interesting discussions related to that. Obviously you'll also mention modern electronic computing, Turing, Good, von Neumann, Wiener, etc.

There might actually be a decent amount already written about the ethics of AI for central planning of economies. I'm thinking of Wiener, and also of the references in the recent book Red Plenty, about Kantorovich and half-hearted Soviet attempts to use optimization algorithms on the economy.

The most important modern exercise of AI ethics is not trolley problems for self-driving cars (despite the press coverage) but interpretability. If the law says you can't discriminate based on race, for instance, and you want to use AI to make predictions about someone's insurance or education or what have you, that AI had better not only not discriminate, it has to interpretably not discriminate, in a way that will stand up in court if necessary. At the same time, there's the famous story of Target sending out advertisements for baby paraphernalia, and Facebook targets ads so well that, even if it doesn't use your phone's microphone, its predictions are good enough to make people suspect it does. So the central present-day issue of AI ethics is the control of information: both how to enforce that control in places where we've already legislated that certain information should not be acted on, and what the consequences are in places where there's free rein.

And obviously autonomous weapons, the race to detect AI-faked images, the ethics of automating large numbers of jobs, and then finally maybe we can get to the ethical issues raised by superhuman AI.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-11-20T17:12:12.145Z · LW(p) · GW(p)

Thanks!

comment by Gordon Seidoh Worley (gworley) · 2020-11-20T02:35:41.697Z · LW(p) · GW(p)

This post I wrote a while back has some references you might find useful: "A developmentally-situated approach to teaching normative behavior to AI".

Also I think some of the references in this paper I wrote might be useful: "Robustness to fundamental uncertainty in AGI alignment".

Topics that seem important to me to cover include not only AI's impact on humans but also questions surrounding the subjective experience of AIs, which largely revolve around whether AIs have subjective experience, or are moral patients, at all.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2020-11-20T17:11:45.395Z · LW(p) · GW(p)

Thanks!

comment by DaemonicSigil · 2020-11-19T01:01:20.880Z · LW(p) · GW(p)

I don't know if the entry is intended to cover speculative issues, but if so, I think it would be important to include a discussion of what happens when we start building machines that are generally intelligent and have internal subjective experiences. Usually, with present-day AI ethics concerns or AI alignment, we're concerned about the AI taking actions that harm humans. But as our AI systems get more sophisticated, we'll also have to start worrying about the well-being of the machines themselves.

Should they have the same rights as humans? Is it ethical to create an AGI with subjective experiences whose entire reward function is geared towards performing mundane tasks for humans? Is it even possible to build an AI that is highly intelligent and very general, yet does not have subjective experiences and does not simulate beings with subjective experiences? Etc.