The trolleycar dilemma, an MIT moral problem app

post by morganism · 2017-01-16T19:32:20.170Z · 5 comments

This is a link post for http://moralmachine.mit.edu/


comment by TiffanyAching · 2017-01-17T00:52:17.961Z

This is pretty fun in a sick way. Suck it, pedestrians! I wonder how much their results will be skewed by people answering flippantly?

For the record I didn't mess with the test, I honestly tried to judge the scenarios, even though trolleycar problems drive me nuts. I swear if I'm ever in that freakin' trolley I'll run over the five kids on one track then go back and beat the other one to death with a shovel.

If a consensus emerges, I predict it will go "kids over adults, humans over animals, law-abiders over law-breakers" and maybe "old adults over young adults", but what the hierarchy would be when rules conflict is trickier to guess.

Also interesting that they chose the emotive term "flouting the law" over the more obvious "breaking the law".

Replies from: cousin_it, morganism
comment by cousin_it · 2017-01-18T14:18:34.251Z

Amusingly, the test also wants to know your preferences on men vs women, overweight vs healthy, and poor vs rich. Or at least it's happy to insinuate such preferences even if you answered all questions using other criteria. I'm surprised the smart folks at MIT didn't add more questions to unambiguously figure out the user's criteria whenever possible.

Replies from: TiffanyAching
comment by TiffanyAching · 2017-01-18T19:25:03.697Z

They're allowing users to build their own scenarios and add them as well, so it looks like the intention is to let the complexity grow over time from a basic starting point.

Actually, I wonder whether they might find that people really don't want a great deal of complexity in the decision-making process. People might prefer to go with a simple "minimize loss of life, prioritize kids" rule and leave it at that, because we're used to cars as a physical hazard that kills blindly when it kills at all. People might be more morally comfortable with smart cars that aren't too smart.
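To make that concrete: a rule that simple is just a lexicographic preference, and a toy version fits in a few lines. This is a hypothetical sketch — the `Outcome` structure and its fields are invented for illustration, not anything from the Moral Machine site:

```python
# Toy version of a "minimize loss of life, prioritize kids" rule.
# Outcome and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    deaths: int        # total lives lost if the car takes this option
    child_deaths: int  # how many of those are children

def choose(options: list[Outcome]) -> Outcome:
    # Lexicographic preference: fewest total deaths first, then
    # fewest child deaths among the options tied on total deaths.
    return min(options, key=lambda o: (o.deaths, o.child_deaths))

# Swerving (one adult dies) beats staying the course (two kids die):
print(choose([Outcome(deaths=2, child_deaths=2),
              Outcome(deaths=1, child_deaths=0)]))
```

Anything more elaborate — the kind of conflicting hierarchies discussed above — just means a longer key tuple, which may be exactly the complexity people don't want.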

comment by morganism · 2017-01-17T20:38:29.915Z

I'm not able to load the game myself, but how about adding a scenario:

You have a computer researcher who is planning to pitch an upgrade to the trolley car system's logic and computation systems on one track...

comment by morganism · 2017-01-16T20:26:57.509Z

"The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.

Recent scientific studies on machine ethics have raised awareness about the topic in the media and public discourse. This website aims to take the discussion further, by providing a platform for 1) building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas, and 2) crowd-sourcing assembly and discussion of potential scenarios of moral consequence."