[LINK] "Moral Machines" article in the New Yorker links to SI paper
post by Antisuji · 2012-11-28T01:38:34.679Z · LW · GW · Legacy · 5 comments
Within two or three decades the difference between automated driving and human driving will be so great you may not be legally allowed to drive your own car, and even if you are allowed, it would be immoral of you to drive, because the risk of you hurting yourself or another person will be far greater than if you allowed a machine to do the work.
That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems.
The discussion itself is mainly concerned with the behavior of self-driving cars and robot soldiers rather than FAI, but Marcus does obliquely reference the prickliness of the problem. After briefly introducing wireheading (presumably as an example of what can go wrong), he links to http://singularity.org/files/SaME.pdf, saying:
Almost any easy solution that one might imagine leads to some variation or another on the Sorcerer's Apprentice, a genie that's given us what we've asked for, rather than what we truly desire.
He also mentions FHI and Yale Bioethics Center along with SingInst:
A tiny cadre of brave-hearted souls at Oxford, Yale, and the Berkeley California Singularity Institute are working on these problems, but the annual amount of money being spent on developing machine morality is tiny.
It's a mainstream introduction, and perhaps not the best or most convincing one, but I think it's a positive development that machine ethics is getting a serious treatment in the mainstream media.
5 comments
Comments sorted by top scores.
comment by lukeprog · 2012-11-28T03:12:13.816Z · LW(p) · GW(p)
The linked paper is not a whitepaper; it's a chapter in the forthcoming volume Singularity Hypotheses.
Replies from: Antisuji
↑ comment by Antisuji · 2012-11-28T04:42:47.230Z · LW(p) · GW(p)
Changed the title, thanks.
I'm curious; what is the distinguishing feature of a whitepaper? That it contains original research?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-11-28T07:48:16.656Z · LW(p) · GW(p)
Wikipedia gives me the impression that the main distinguishing feature is that it's published directly by the organization that wrote it, instead of being published in e.g. someone else's book.
comment by Dr_Manhattan · 2012-11-29T05:47:27.943Z · LW(p) · GW(p)
This is awesome; best popular coverage of these issues I've seen to date.
comment by mwengler · 2012-11-29T14:16:35.427Z · LW(p) · GW(p)
It's fun to see such an article in non-tech media. I don't think there is particular value in encouraging such articles: they will arise on their own.
As to the autonomous car concern, what seems most likely to me is that there will (always?) be a human driver mode, where the autonomous functions provide a sort of safety cocoon. That is, if I want to drive down the street or up the interstate, why shouldn't I, as long as I do it safely? And if a car can drive itself autonomously, it can limit my inputs to only those that are safe: if I try to drive into another vehicle, the steering wheel just won't turn that far; if I try to tailgate, the brakes and accelerator won't let me. We are already seeing the technology of autonomous driving penetrate the real market in the form of driver assistance: self-parking, warnings when the driver is backing up onto something or tailgating, cruise control that slows as traffic slows to keep a safe distance, and alerts when drifting out of your lane. The only way we wind up with autonomous vehicles that don't allow human driving at all is if there is no demand for driving.
I let my 11-year-old steer while I am driving, and I limit her steering inputs to only those that are safe. If I can figure out how to do that, so can Autonomous Drive.
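A minimal sketch of the "safety cocoon" idea described in this comment, under the assumption that the autonomous stack only computes bounds on what the driver may do and the control layer clamps the driver's commands to those bounds. All class and function names here are hypothetical illustrations, not any real vehicle API:

```python
from dataclasses import dataclass

@dataclass
class DriverInput:
    steering: float   # -1.0 (full left) .. 1.0 (full right)
    throttle: float   # 0.0 .. 1.0
    brake: float      # 0.0 .. 1.0

@dataclass
class SafetyEnvelope:
    """Bounds the autonomous planner currently judges safe."""
    steering_min: float
    steering_max: float
    throttle_max: float   # e.g. cut toward 0.0 when tailgating
    brake_min: float      # e.g. forced above 0.0 for emergency braking

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def apply_envelope(driver: DriverInput, env: SafetyEnvelope) -> DriverInput:
    """Pass the driver's commands through, but never outside the envelope."""
    return DriverInput(
        steering=clamp(driver.steering, env.steering_min, env.steering_max),
        throttle=clamp(driver.throttle, 0.0, env.throttle_max),
        brake=clamp(driver.brake, env.brake_min, 1.0),
    )

# Example: the driver steers hard toward an obstacle while accelerating,
# but the planner has narrowed the envelope, so the actuators only ever
# see the clamped, safe commands.
raw = DriverInput(steering=0.9, throttle=0.8, brake=0.0)
envelope = SafetyEnvelope(steering_min=-0.3, steering_max=0.2,
                          throttle_max=0.1, brake_min=0.0)
print(apply_envelope(raw, envelope))
# DriverInput(steering=0.2, throttle=0.1, brake=0.0)
```

The point of the sketch is only the architecture: the human remains the source of the commands, while the autonomous system acts as a filter rather than a replacement driver.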