comment by Charlie Steiner ·
2020-11-19T00:08:36.366Z · LW(p) · GW(p)
Congrats! Here are my totally un-researched first thoughts:
Pre-1950 History: Speculation about artificial intelligence (if not necessarily in the modern sense) dates back extremely far: the brazen head, Frankenstein, the Mechanical Turk, the robots of R.U.R. Basically all of this treats the artificial intelligence as essentially a human (though the brazen head mythology is maybe closer to deals with djinni or devils), and basically all of it, together with more modern science fiction, can be lumped into a pile labeled "exposes and shapes human intuition, but not very serious".
Automating the work of logic was of interest to logicians such as Hilbert, Peirce, and Frege, so there might be interesting discussions related to that. Obviously you'll also mention modern electronic computing: Turing, Good, von Neumann, Wiener, etc.
There might actually be a decent amount already written about the ethics of AI for central planning of economies. I'm thinking of Wiener, and also of the recent book Red Plenty and its references on Kantorovich and the halfhearted Soviet attempts to use optimization algorithms on the economy.
The most important modern exercise of AI ethics is not trolley problems for self-driving cars (despite the press), but interpretability. If the law says you can't discriminate based on race, for instance, and you want to use AI to make predictions about someone's insurance or education or what have you, that AI had better not merely avoid discriminating; it has to avoid discriminating interpretably, in a way that will stand up in court if necessary. Simultaneously, there's the famous story of Target sending out advertisements for baby paraphernalia, and Facebook targeting ads to you that, if they don't use your phone's microphone, make predictions good enough that people suspect they do. So the central present issue of AI ethics is the control of information: both the practicality of that control in places where we've already legislated that certain information should not be acted on, and the examination of consequences in places where there's free rein.
And obviously there are autonomous weapons, the race to detect AI-faked images, and the ethics of automating large numbers of jobs; then finally maybe we can get to the ethical issues raised by superhuman AI.