[LINK] Utilitarian self-driving cars?

post by V_V · 2014-05-14T13:00:02.670Z · LW · GW · Legacy · 44 comments

When a collision is unavoidable, should a self-driving car try to maximize the survival chances of its occupants, or of all people involved?

http://www.wired.com/2014/05/the-robot-car-of-tomorrow-might-just-be-programmed-to-hit-you/

comment by Scott Garrabrant · 2014-05-14T20:41:57.563Z · LW(p) · GW(p)

I think a less contrived question is "Should a self-driving car minimize travel time for you, or for all people on the road?"

Replies from: V_V
comment by V_V · 2014-05-14T21:12:59.983Z · LW(p) · GW(p)

A personal or privately operated self-driving car should probably minimize its passengers' travel time, as this best aligns with the customer's interests and, in a reasonably competitive market, the manufacturer's.
The crash case is more complicated because there are ethical and legal liability issues.

Replies from: kilobug, DanielLC
comment by kilobug · 2014-05-15T08:17:36.796Z · LW(p) · GW(p)

I think there is a confusion going on there. "Should" refers to what is ethical, to what would be the best option, and I don't see how the manufacturer's interests really matter for that. Self-driving cars should cooperate with each other in various prisoner's dilemmas, not defect against each other, and more generally they should behave in a way that smooths traffic globally (which at the end of the year would lead to less travel time for everyone, if all cars do so), not behave selfishly and minimize only their own passengers' travel time.

Now, in a competitive market, due to manufacturers' interests, it is indeed unlikely they would do so. But that is different from should. That's a case of a pure market leading to a suboptimal solution (as often happens with Nash equilibria), but there might be ways to fix it, either by manufacturers negotiating with each other outside the market channel to implement more globally efficient algorithms (as many standards bodies do), or by the state imposing it on them (like the EU imposing a common charger for all cell phones).

Of course there are drawbacks and potential pitfalls with all those solutions, but that's a separate matter from the should issue.

comment by DanielLC · 2014-05-15T21:38:45.078Z · LW(p) · GW(p)

What if a company makes a large number of cars? Would they make cars that minimize travel time for their occupants, or for all people who buy cars from that company? Would multiple companies band together and make cars that minimize travel time for everyone who buys a car from any of those companies?

Replies from: private_messaging
comment by private_messaging · 2014-05-16T07:08:19.197Z · LW(p) · GW(p)

They'd all band together and create a regulatory agency which ensures everyone's doing that. This is what happens in other industries, and this is what happens in car manufacturing.

comment by jimrandomh · 2014-05-14T20:16:24.512Z · LW(p) · GW(p)

This is sort of fun to think about, but I don't think the actual software will look anything like a trolley-problem solver. Given a choice between swerving and hitting A, or hitting B, I predict its actual answer will be "never swerve" and all the interesting details about what A and B are will be ignored. And that will be fine, because cars almost never get forced into positions where they can choose what to crash into but can't avoid crashing entirely, especially not when they have superhuman reflexes, and manufacturers can justify it by saying that braking works slightly better when not swerving at the same time.

Replies from: private_messaging, DanielLC
comment by private_messaging · 2014-05-16T06:59:07.910Z · LW(p) · GW(p)

Yeah.

There's an actual trolley problem - a very trivial one - hidden in it as well, though. Do you put your engineering resources into resolving swerve vs not swerve, or do you put them into avoiding those situations altogether?

Of course, the answer is the latter.

This is also the issue with classical trolley problems. In the trolley problem as stated, the subject's brainfart results in an extra death. Of course a fat man won't stop a trolley! (It's pretty easy to state such problems better, but you won't generate much discussion that way.)

comment by DanielLC · 2014-05-15T21:37:20.506Z · LW(p) · GW(p)

More importantly, if it thinks it has a choice between hitting A and B, it's likely a bug, and it's better off not swerving.

comment by [deleted] · 2014-05-16T12:53:27.220Z · LW(p) · GW(p)

I'm not sure this is going to come up in the way proposed in the article. Given a potential collision, before even calculating whether it is unavoidable, the car is likely going to start reducing speed by using the brakes, because that's what you need to do in almost all collisions (all but the very small percentage that aren't of this type).

But once the car has jammed on the brakes, it has cut off a great deal of its ability to swerve. These cases may be so rare that giving the car a fraction of a second to make those calculations could lead to more deaths than simply hitting the brakes sooner in all cases would.

From a utilitarian ethics point of view, I suspect the design decision may be something like "We will save 10X lives per billion vehicle miles if the car precommits to always reduce speed without thinking about it, even if we would save X more lives by thinking about whether to swerve in certain cases... but we can't do that without NOT saving the 10X lives from immediate precommitment."

Although, once we actually have more data on self driving car crashes, I would not be surprised if I have to rethink some of the above.

comment by A1987dM (army1987) · 2014-05-15T06:10:12.947Z · LW(p) · GW(p)

They should minimize damage to their own occupants, but using some kind of superrational decision theory so they won't defect in prisoner's dilemmas against each other. I suspect that in sufficiently symmetric situations the result is the same as minimizing damage to everybody using causal decision theory.
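A toy illustration of that claim, with an invented, symmetric payoff table (none of these numbers come from the post): two identical cars that each minimize their own damage, knowing the other runs the same code, end up picking the same joint action as an agent minimizing total damage.

```python
from itertools import product

# damage[a][b]: damage to a car playing `a` while the other plays `b`.
# Purely hypothetical numbers for a symmetric two-car encounter.
ACTIONS = ("swerve", "straight")
damage = {
    "swerve":   {"swerve": 1.0, "straight": 5.0},
    "straight": {"swerve": 0.0, "straight": 9.0},
}

# Superrational selfishness: both cars run identical code, so they play
# the same action; each minimizes its own damage under that constraint.
superrational = min(ACTIONS, key=lambda a: damage[a][a])

# Minimizing damage to everybody: search over all joint actions.
utilitarian = min(product(ACTIONS, repeat=2),
                  key=lambda ab: damage[ab[0]][ab[1]] + damage[ab[1]][ab[0]])

print(superrational)  # 'swerve'
print(utilitarian)    # ('swerve', 'swerve'): the same choice here
```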

comment by 2ZctE · 2014-05-15T04:47:11.113Z · LW(p) · GW(p)

This reminds me of the response to the surgeon's dilemma about trust in hospitals. I want to say occupants, because if fear of being sacrificed in trolley problems causes fewer people to adopt safer, non-distractable, non-fatigable robot cars, then it seems like a net utilitarian loss. If that were not the case, for example if the safety advantage became overwhelming enough that people bought them anyway, then it should probably just minimize deaths. (I only thought about this for a couple of minutes, though.)

comment by Shmi (shminux) · 2014-05-14T16:53:12.054Z · LW(p) · GW(p)

Suspected Nash-equilibrium ethics for the proprietary collision avoidance algorithm:

Utilitarian: minimize negative publicity for the car maker.

Resulting Asimov-like deontology:
1) Avoid collisions with cars of the same make.
2) Maximize survival of the vehicle's occupants, disregarding the safety of the other vehicle involved, subject to 1).
3) Minimize damage to the vehicle, subject to 1) and 2).
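For concreteness, a minimal sketch of such a lexicographic rule (all type names, fields, and scores below are hypothetical, not from any actual system):

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    hits_same_make: bool      # would this maneuver hit a car of the same make?
    occupant_survival: float  # predicted survival probability of own occupants
    vehicle_damage: float     # predicted damage to own vehicle, arbitrary units

def pick_maneuver(options: list[Maneuver]) -> Maneuver:
    # Lexicographic ordering: rule 1 strictly dominates rule 2, which
    # strictly dominates rule 3. Python's tuple comparison does exactly this.
    return min(options, key=lambda m: (m.hits_same_make,      # 1) same-make collisions last
                                       -m.occupant_survival,  # 2) then occupant survival
                                       m.vehicle_damage))     # 3) then vehicle damage
```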

Replies from: Lumifer
comment by Lumifer · 2014-05-14T16:59:53.916Z · LW(p) · GW(p)

Utilitarian: minimize negative publicity for the car maker.

The US is a litigious society. I suspect that minimizing damage from wrongful-death lawsuits will be more important than minimizing negative publicity.

In fact, I don't think self-driving cars can become widespread until the "in any accident, sue the deep-pocketed manufacturer" problem gets resolved, likely by an act of Congress limiting the liability.

Replies from: Baughn
comment by Baughn · 2014-05-16T14:28:11.787Z · LW(p) · GW(p)

The US is a litigious society.

Well, maybe not. http://www.theguardian.com/commentisfree/2013/oct/24/america-litigious-society-myth

Replies from: Lumifer
comment by Lumifer · 2014-05-16T14:51:16.288Z · LW(p) · GW(p)

Well, maybe not.

Maybe yes. The expression "litigious society" implies a comparison with other, presumably less litigious, societies, and the article you quoted is entirely silent on that topic, spending most of its words rehashing the notorious McDonald's coffee case. And it does conclude by saying that the fear of litigation in the US is pervasive and often reaches ridiculous levels.

comment by Izeinwinter · 2014-05-15T19:34:09.384Z · LW(p) · GW(p)

... No one would ever design a car with any priority other than "minimize impact velocity", because that is a parameter it can actually try to minimize. In the extremely unlikely case of a car smart enough to parse the question you just posed, impacts would never happen, barring outright malice.

Replies from: DanielLC
comment by DanielLC · 2014-05-15T21:43:23.414Z · LW(p) · GW(p)

The car doesn't parse the question. The programmer does. You design a car that will avoid impacts when possible. Then you tell it what to do if impact is unavoidable. It might slam on the brakes while following the road. It might look for an option with a low impact velocity. It might prioritize hitting cars over hitting pedestrians. Etc.
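Purely as an illustration of that kind of fallback logic, a sketch with invented names and fields (not any real planner's API):

```python
def emergency_action(options):
    """Pick a maneuver once the planner decides impact is unavoidable.

    `options` is a list of (maneuver, prediction) pairs, where
    `prediction` is a dict with hypothetical fields such as
    'hits_pedestrian' and 'impact_speed'.
    """
    # One possible ordering from the comment above: prefer hitting cars
    # over pedestrians, then minimize impact velocity.
    return min(options,
               key=lambda o: (o[1]["hits_pedestrian"],
                              o[1]["impact_speed"]))[0]

# Braking in-lane at 20 km/h beats swerving into a pedestrian at 5 km/h.
options = [("brake_in_lane", {"hits_pedestrian": False, "impact_speed": 20.0}),
           ("swerve_right",  {"hits_pedestrian": True,  "impact_speed": 5.0})]
print(emergency_action(options))  # brake_in_lane
```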

comment by JQuinton · 2014-05-14T21:32:54.297Z · LW(p) · GW(p)

I wonder if they're actually using a utility function as in [probability * utility], or just going with [aim for safe car > unsafe car] unilaterally, regardless of the likelihood of crashing into either. E.g., treating a 1% chance of crashing into the safe car and an 80% chance of crashing into the unsafe car as equal to a 99% chance of crashing into the safe car and a 0.05% chance of crashing into the unsafe car, choosing in both cases to crash into the safe car.
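A toy version of that comparison, using the comment's probabilities and made-up harm weights (nothing here comes from the article):

```python
# Assumed harm to the struck vehicle's occupants, on an arbitrary scale;
# a "safe" car's occupants are assumed to suffer less harm when struck.
HARM = {"safe": 1.0, "unsafe": 5.0}

def expected_harm(target: str, p_crash: float) -> float:
    return p_crash * HARM[target]

# Situation 1: the unilateral [safe > unsafe] rule and expected
# utility agree on aiming at the safe car.
print(expected_harm("safe", 0.01), expected_harm("unsafe", 0.80))    # 0.01 vs 4.0

# Situation 2: they diverge; expected utility now prefers the unsafe
# car, while the unilateral rule still picks the safe one.
print(expected_harm("safe", 0.99), expected_harm("unsafe", 0.0005))  # 0.99 vs 0.0025
```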

Replies from: V_V
comment by V_V · 2014-05-14T22:11:38.301Z · LW(p) · GW(p)

The article is speculation about the moral (and legal) issues of plausibly near-future technology; current self-driving cars are experimental vehicles not designed to safely operate autonomously in emergency situations.

comment by Cube · 2014-05-14T15:22:37.012Z · LW(p) · GW(p)

Conventional morality would dictate that the car minimize global loss of life, followed by permanent brain damage, then permanent body damage. I think that in the future other algorithms will be illegal but will still exist.

However, the lives each car would have the most effect on would be those inside of it, so in most situations all actions would be directed towards said persons.

Replies from: Houshalter, Lumifer, DanielLC
comment by Houshalter · 2014-05-14T17:19:07.667Z · LW(p) · GW(p)

The issue is that it could create bad incentives. E.g. motorcyclists might stop wearing helmets and even act recklessly around self-driving cars, knowing the cars will avoid them even if avoiding them causes a crash. Or people might stop buying safer cars because safer cars are always chosen as "targets" by self-driving cars to crash into, making them statistically less safe.

I don't think the concerns are large enough to worry about, but hypothetically it's an interesting dilemma.

Replies from: roystgnr, Lumifer, Nornagest
comment by roystgnr · 2014-05-15T16:03:30.370Z · LW(p) · GW(p)

When I was a dumb kid, my friends and I regularly jaywalked (jayran?) across 3 lanes at a time of high-speed traffic, just to get to a nicer place for lunch. Don't underestimate the populations of stupid and selfish people in the world, or their propensity to change behavior in response to changing incentives.

On the other hand, I'm not sure how the incentives here will change. Any self-driving car is going to be speckled with cameras, and "I know it will slam on the brakes or swerve to avoid me" might not be much temptation when followed with "then it will send my picture to the police".

Replies from: Transfuturist
comment by Transfuturist · 2014-05-15T18:48:08.185Z · LW(p) · GW(p)

Aaaaand now you brought privacy controversy into the mix.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-05-16T01:42:18.985Z · LW(p) · GW(p)

In a completely reasonable way. If your driving strategy involves making problems for other people, that's intrinsically a non-private activity.

comment by Lumifer · 2014-05-14T18:05:52.808Z · LW(p) · GW(p)

act recklessly around self-driving cars, knowing the cars will avoid them even if avoiding them causes a crash.

Ah, an interesting possibility. Self-driving cars can be gamed. If I know a car will always swerve to avoid me, I can manipulate it.

comment by Nornagest · 2014-05-14T17:35:44.284Z · LW(p) · GW(p)

I doubt that self-driving cars would have to choose between crashing into two vehicles often enough for these considerations to show up in the statistics.

comment by Lumifer · 2014-05-14T15:42:23.876Z · LW(p) · GW(p)

Conventional morality would dictate that the car minimize global loss of life

I don't know about that. "Conventional morality" is not a well-formed or coherent system, and there are a lot of situations where other factors would override minimizing loss of life.

Replies from: Cube
comment by Cube · 2014-05-14T15:47:37.229Z · LW(p) · GW(p)

What kinds of things override loss of life and can be widely agreed upon?

Replies from: Lumifer, Eugine_Nier
comment by Lumifer · 2014-05-14T16:00:00.477Z · LW(p) · GW(p)

What kinds of things override loss of life and can be widely agreed upon?

Going to war, for example.

Or consider involuntary organ harvesting.

comment by Eugine_Nier · 2014-05-20T03:14:26.201Z · LW(p) · GW(p)

In the self-driving car example, say "getting to your destination". Keep in mind that the mere act of the car getting out on the road increases the expected number of resulting deaths.

comment by DanielLC · 2014-05-15T21:44:55.461Z · LW(p) · GW(p)

The lives each car would have the most effect on would be those inside of it.

I disagree. The driver of a car is in much less danger than a pedestrian.

Replies from: RowanE
comment by RowanE · 2014-05-24T15:23:19.864Z · LW(p) · GW(p)

No one pedestrian is more likely to die as a result of an accident involving a particular car than the owner of that car, though, which I think is what Cube meant.

Replies from: DanielLC
comment by DanielLC · 2014-05-25T00:02:41.244Z · LW(p) · GW(p)

True, but that doesn't change the fact that if you're at risk of crashing into a pedestrian, your car will act to save the pedestrian, rather than you.

comment by Lalartu · 2014-05-14T13:46:00.094Z · LW(p) · GW(p)

It should act in favor of its passengers of course.

Replies from: raisin, ThrustVectoring
comment by raisin · 2014-05-14T13:50:40.697Z · LW(p) · GW(p)

Why 'of course'? This doesn't seem obvious to me.

Replies from: HungryHobo, roystgnr
comment by HungryHobo · 2014-05-14T17:37:32.494Z · LW(p) · GW(p)

Probably because almost every other safety decision in a car's design is focused on the occupants.

Take those reinforced bars protecting the passengers: do you think the designers care that they mean any car hitting the side of the car suffers more damage, due to hitting a more solid structure?

They want to sell the cars, thus they likely want the car's priorities to be somewhat in line with the buyer's. The buyer doesn't care all that much about the toddler in the other car, except in a philosophical sense. They care about the toddler in their own car. That other person is not the priority of the seller or the buyer.

In terms of liability, it makes sense to try to make sure that the accident remains legally the fault of the other party, no matter the number of deaths; and the law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Replies from: V_V
comment by V_V · 2014-05-14T21:00:06.029Z · LW(p) · GW(p)

In terms of liability, it makes sense to try to make sure that the accident remains legally the fault of the other party, no matter the number of deaths; and the law rarely accepts intentionally harming one person who wasn't at fault in order to avoid an accident and spare the lives of two people in a car who were themselves at fault.

Makes sense. Though the design of a motion control algorithm in which the inverse dynamics model interacts with a road-law expert system to make decisions in a fraction of a second would be... interesting.

comment by roystgnr · 2014-05-15T16:11:31.562Z · LW(p) · GW(p)

HungryHobo gave good arguments from tradition and liability; here's an argument from utility:

Google's cars are up over a million autonomously-driven km without an accident. That's not proof that they're safer than the average human-driven car (something like 2 accidents per million km in the US?) but it's mounting evidence. If car AI written to prioritize its passengers turns out to still be an order of magnitude safer for third parties than human drivers, then the direct benefit of optimizing for total safety may be outweighed by the indirect benefit of optimizing for own-passenger safety and thereby enticing more rapid adoption of the technology.
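As a back-of-the-envelope version of "mounting evidence, not proof", assuming accidents are Poisson-distributed and taking the guessed human rate above at face value:

```python
import math

# Rough Poisson check; the ~2 accidents per million km human rate is the
# parent comment's guess, not a verified figure.
human_rate = 2.0   # accidents per million km (assumed)
km = 1.0           # million autonomous km logged with zero accidents

# Probability that a driver at the human rate would also log zero accidents:
p_zero_by_chance = math.exp(-human_rate * km)
print(p_zero_by_chance)  # ~0.135: suggestive evidence, not proof
```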

comment by ThrustVectoring · 2014-05-14T14:28:19.456Z · LW(p) · GW(p)

They'd be better off using a shared algorithm if involved in a situation with cars reasoning in a similar fashion.

Replies from: Transfuturist
comment by Transfuturist · 2014-05-15T18:51:26.331Z · LW(p) · GW(p)

This is definitely a case for superrationality. If antagonists in an accident are equipped, communicate. Not sure what to do about human participants, though.

The issue brought up here seems to greatly overestimate the probability of crashing into something. IIRC, the main reasons people crash are that 1) they oversteer and 2) they steer to where they're looking, and they often look in the direction of the nearest or most inevitable obstacle.

These situations would involve human error almost every time, and a crash would most likely be due to the human driver crashing into the autocar, not the other way around. Something that would increase the probability is human error in heavy traffic.

comment by Jinoc · 2014-05-15T14:27:08.556Z · LW(p) · GW(p)

It seems there are a few distinct cases:

  • I am someone who does not wear a helmet in our current society, where this is illegal and people don't exactly discriminate in case of car accidents, so the introduction of smart cars will only confirm my current (bad) decision: no change there.

  • I currently wear a helmet, but would stop wearing one if smart cars were introduced.
    Assuming every car magically became a smart car, that means I am willing to suffer a fine in exchange for a slightly greater likelihood of surviving a nearby car crash.
    Considering that smart cars are better drivers than humans, and that car crashes are already rare, if I considered the fine adequate to incentivize me into wearing a helmet previously, I should consider it adequate now.
    There is an edge case here: smart cars are better drivers, but only by a small margin that is offset by their tendency to aim away from me.

  • I currently wear a helmet, and will continue to do so.

Only the edge case would create a morally ambiguous situation, but that seems pretty unlikely (you'd hope that a swarm of cars with superhuman reaction speed would be more than marginally better at preventing accidents).

comment by Lumifer · 2014-05-14T15:02:12.571Z · LW(p) · GW(p)

Hello, trolley problem :-)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-05-14T16:29:12.150Z · LW(p) · GW(p)

The car may face a trolley problem, but designing the algorithm isn't one.

Replies from: Lumifer
comment by Lumifer · 2014-05-14T16:52:27.543Z · LW(p) · GW(p)

Designing the algorithm necessitates providing a (note: a) solution to the trolley problem.

The car, not being an AI, doesn't actually face any problems.