The autopilot problem: driving without experience

post by Stuart_Armstrong · 2013-05-13T12:42:57.106Z · LW · GW · Legacy · 37 comments

Consider a mixed system, in which an automated system is paired with a human overseer. The automated system handles most of the routine tasks, while the overseer is tasked with looking out for errors and taking over in extreme or unpredictable circumstances. Examples include autopilots, cruise control, GPS direction finding, and high-frequency trading – in fact nearly every automated system has this feature, because nearly all of them rely on humans "keeping an eye on things".

But often the human component doesn't perform as well as it should – often not even as well as it did before part of the system was automated. Cruise control can impair driver performance, leading to more accidents. GPS errors can take people far more off course than following maps ever did. When the autopilot fails, pilots can crash their planes in rather conventional conditions. Traders don't understand why their algorithms misbehave, or how to stop them.

There seem to be three factors at work here:

  1. Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.
  2. This goes along with a general deskilling of the overseer. When the autopilot controls the plane for most of its trip, pilots get far less hands-on experience of actually flying the plane. Paradoxically, less efficient automation can help with both these problems: if the system fails 10% of the time, the overseer will watch and understand it closely.
  3. And when the automation does fail, the overseer will typically lack situational awareness of what's going on. All they know is that something extraordinary has happened, and they may have the (possibly flawed) readings of various instruments to guide them – but they won't have a good feel for what happened to put them in that situation.

So, when the automation fails, the overseer is generally dumped into an emergency situation, whose nature they are going to have to deduce, and, using skills that have atrophied, they are going to have to take on the task of the automated system that has never failed before and that they have never had to truly understand.

And they'll typically get blamed for getting it wrong.

Similarly, if we design AI control mechanisms that rely on the presence of a human in the loop (such as tool AIs, Oracle AIs, and, to a lesser extent, reduced impact AIs), we'll need to take the autopilot problem into account, and design the role of the overseer so as not to deskill them, and not count on them being free of error.

37 comments

Comments sorted by top scores.

comment by mwengler · 2013-05-13T15:47:37.258Z · LW(p) · GW(p)

The complacency and deskilling are a feature, not a bug. The less I have to learn to get from place to place, the more attention I have for other things that can't be automated (yet).

Attributing to a GPS failure a woman driving 900 miles to Croatia when she intended to drive 38 miles within Belgium is naive. Most likely she put the wrong address in, possibly with the help of autocomplete, possibly not. But crazy, drug-addled, and/or senile people have been winding up hundreds of miles from where they thought they were for a long time before there were any GPS satellites in orbit. Actual GPS errors in my experience take you to a street behind your intended destination, or direct you to streets that are closed. And these errors fall off quickly as the expert system becomes, well, more expert. GPS navigation app errors tend to be small, bringing you near where you need to go but then requiring some intelligence to realize how to fix the error the system has made. Meanwhile, I drove two hours out of my way on vacation in Florida, an error I could not possibly have made to that extent if I had had the GPS navigation systems I now use all the time.

Automated cars WILL be blamed for all sorts of problems, including deaths. The unwashed innumerates will tell detailed stories about how they went wrong and be unmoved by the overall statistics of a system which will cause FEWER deaths per mile driven than humans do. Some of those deaths will occur in ways that after-the-fact innumerates, and other elements of the infotainment industry known as democracy, will tell wonderful anecdotes about. There may even be congressional hearings and court cases. The idea that a few deaths that MIGHT have been avoided under the old regime are literally a small price to pay for an overall lower death rate will be too complex a concept to get legs in the infotainment industry.

But in the long run, the nerds will win, and economically useful automation will be broadly adopted. We don't know how to grow our own food or build our own houses anymore and we've gotten over that. We'll get over this too and the innumerate infotainment industry known as democracy will move on to its next stupidity.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-13T17:09:46.362Z · LW(p) · GW(p)

This isn't a progress vs luddite debate - the fact that the human element of an automation+overseer system performs worse than if the human were entirely in charge is not a general argument against automation (at most, it might be an argument against replacing a human with an automation+overseer model if the gains are expected to be small).

The fact that humans can exercise other skills (pilots apparently do a lot when the autopilot is engaged) does not negate the fact that they lose the skills needed to take over from the automation.

Replies from: bartimaeus
comment by bartimaeus · 2013-05-15T16:32:37.121Z · LW(p) · GW(p)

The autopilot problem seems to arise in the transition phase between the two pilots (the human and the machine). If just the human does the task, he remains sufficiently skilled to handle the emergency situations. Once the automation is powerful enough to handle all but the situations that even a fully-trained human wouldn't know how to handle, the deskilling of the human just allows him to focus on more important tasks.

To take the example of self-driving cars: the first iterations might not know how to deal with, say, a differently-configured zone due to construction or some other hazard (correct me if I'm wrong, I don't know much about self-driving car AI). So it's important that the person in the driver's seat can take over; if the person is blind, or drunk, or has never ever operated a car before, we have a problem. But I can imagine that at some point self-driving cars will handle almost any situation better than a person.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-16T08:50:21.034Z · LW(p) · GW(p)

And the risky areas are those where the transition period is very long.

comment by [deleted] · 2013-05-13T19:43:21.082Z · LW(p) · GW(p)

I read your piece and replaced 'autopilot' with 'social structure' and it still works. When you use the autopilot of membership in a group, you get the same errors.

Replies from: SilasBarta
comment by SilasBarta · 2013-05-14T05:52:03.950Z · LW(p) · GW(p)

It seems like the curse of the gifted student is similar as well -- being naturally good enough at the first 90% of the education makes you miss out on developing the habits necessary for the last 10%.

comment by ESRogs · 2013-05-14T03:19:35.386Z · LW(p) · GW(p)

This post reminds me of this essay, which I enjoyed, on the topic of automation and deskilling: http://www.macroresilience.com/2011/12/29/people-make-poor-monitors-for-computers/.

Replies from: None, Stuart_Armstrong
comment by [deleted] · 2013-05-14T09:55:18.318Z · LW(p) · GW(p)

That was a good article! I also find it noteworthy that the successful example of humans recovering from a failure involved them extensively using checklists, particularly in reference to automation and deskilling in general.

comment by Stuart_Armstrong · 2013-05-14T08:12:30.598Z · LW(p) · GW(p)

Thanks!

comment by Vaniver · 2013-05-13T14:14:43.469Z · LW(p) · GW(p)

Firstly, if the automation performs flawlessly, the overseers will become complacent, blindly trusting the instruments and failing to perform basic sanity checks. They will have far less procedural understanding of what's actually going on, since they have no opportunity to exercise their knowledge.

There's a related problem in manufacturing whose name I've forgotten, but basically, the less frequent defective parts are, the less likely it is that human quality control people will notice a defective part, because their job is more boring and so they're less likely to be paying attention when a defective part does come through. (Conditioned on the part being defective, of course.)

Replies from: shminux, Richard_Kennaway
comment by Shmi (shminux) · 2013-05-13T16:13:00.836Z · LW(p) · GW(p)

Right, one of the original solutions, though rarely implemented, is to add a steady stream of defective parts to guarantee optimal human attention. These artificially defective parts are marked in a way that lets them be automatically separated and recycled later, should any slip by the human QA.
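
A minimal sketch of what such seeding might look like, with parts represented as dicts and the function names, part format, and seed rate all made up for illustration rather than taken from any real production line:

```python
import random

def run_qa_line(parts, inspector, seed_rate=0.05, rng=None):
    """Mix marked synthetic defects ("seeds") into the part stream, let the
    human inspector reject whatever they notice, then use the marker to pull
    any surviving seeds back out before anything ships."""
    rng = rng or random.Random(0)
    seeds = [{"id": f"seed-{i}", "defective": True, "is_seed": True}
             for i in range(len(parts)) if rng.random() < seed_rate]
    stream = list(parts) + seeds
    rng.shuffle(stream)

    passed = [p for p in stream if not inspector(p)]        # inspector returns True to reject
    shipped = [p for p in passed if not p.get("is_seed")]   # the marker guarantees seeds never ship
    seeds_missed = sum(1 for p in passed if p.get("is_seed"))
    return shipped, len(seeds), seeds_missed
```

The count of missed seeds also gives a running measure of how alert the inspector is, which is the point shminux makes about proof-reading further down the thread.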

Replies from: maia, Stuart_Armstrong
comment by maia · 2013-05-13T20:10:55.713Z · LW(p) · GW(p)

Wow. That's a really cool example of careful design, taking humans into account as well as technical issues.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-13T20:16:35.088Z · LW(p) · GW(p)

Yeah, I was equally impressed when one of my instructors at the uni explained the concept, some decades ago, as an aside while teaching CPU design.

comment by Stuart_Armstrong · 2013-05-13T16:57:06.708Z · LW(p) · GW(p)

They apparently do this in airport x-rays - inject an image of a bag with a gun, to see if the observer reacts.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-13T17:07:04.819Z · LW(p) · GW(p)

But apparently not for keeping pilots alert in flight... A "Fuel pressure drop in engine 3!" drill exercise would probably not, umm, fly.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-13T17:10:39.833Z · LW(p) · GW(p)

There might be other ways - you could at least do it on simulators, or even on training flights (with no passengers).

Replies from: shminux
comment by Shmi (shminux) · 2013-05-13T17:35:42.572Z · LW(p) · GW(p)

Surely they already do that. The trick is not knowing whether an abnormal input is a drill or not, or at least not knowing when a drill might happen. All these issues have been solved in the military a long time ago.

Replies from: Decius
comment by Decius · 2013-05-14T03:09:59.947Z · LW(p) · GW(p)

Knowing when a drill might happen improves alertness during the drill period only. Drills do develop and maintain the skills required to respond to a non-standard situation.

comment by Richard_Kennaway · 2013-05-13T14:38:03.248Z · LW(p) · GW(p)

I've heard that in proof-reading, optimal performance is achieved when there are about 2 errors per page.

Replies from: SilasBarta, falenas108
comment by SilasBarta · 2013-05-14T05:47:55.106Z · LW(p) · GW(p)

I've heard that when you play mouse-chasing-themed games with your cat, the maximal cat fun is achieved when there are between 1 and 2 successes for every 6 pounces.

comment by falenas108 · 2013-05-13T16:25:58.157Z · LW(p) · GW(p)

The proof-reader's performance may be maximized, but the quality of the output isn't.

I would be surprised if there were fewer overall errors in the final product if it started at 2 per page, rather than, say, 1/4 per page.

This is also valid against the suggestion in the OP. Although humans will catch more errors if there are more to begin with, that doesn't mean there will be fewer failures overall.

Replies from: shminux
comment by Shmi (shminux) · 2013-05-13T16:41:49.731Z · LW(p) · GW(p)

As I mentioned in my other comment, if some of the errors are injected to keep attention at the optimal level, and then removed post-QA, the other errors are removed with better efficiency. As an added benefit, you get an automated and reliable metric of how attentive the proof-reader is.
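
A rough sketch of that metric, with made-up function names: the fraction of injected errors the reader flags is a direct estimate of their hit rate, and if real errors are assumed to be about as hard to spot as injected ones, it also lets you estimate how many real errors slipped through.

```python
def injected_hit_rate(injected_ids, flagged_ids):
    """Fraction of the deliberately injected errors that the proof-reader caught."""
    injected = set(injected_ids)
    return len(injected & set(flagged_ids)) / len(injected) if injected else float("nan")

def estimated_real_errors_missed(real_errors_caught, hit_rate):
    """If real errors are caught at roughly the injected-error hit rate,
    the number missed can be estimated from the number caught."""
    return float("inf") if hit_rate == 0 else real_errors_caught * (1 - hit_rate) / hit_rate
```

This is essentially the error-seeding estimate used in software testing, and it only holds to the extent that the injected errors are as hard to spot as the real ones.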

comment by tgb · 2013-05-13T17:53:48.378Z · LW(p) · GW(p)

Only the cruise control link is an actual comparison of automation+overseer versus just humans. The rest are examples of automation+overseer failing, but there are of course examples of just humans failing just as badly. Is there any further evidence of this phenomenon? In particular, is there evidence that the total success rate decreases as the success rate of the automation increases?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-13T21:47:52.502Z · LW(p) · GW(p)

Well, if you're willing to extend automation to cover automatic pricing from a specific set of equations, then we have the recent financial crisis...

comment by Luke_A_Somers · 2013-05-13T14:53:02.702Z · LW(p) · GW(p)

I wonder if it's possible to bring the success rate back up in QA conditions by requiring the identification of the candidate furthest from ideal within a given period, whether or not that is within tolerances. Of course, in some cases, that would completely negate the purpose of the automatic behavior.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-05-13T16:57:42.278Z · LW(p) · GW(p)

Right, I don't understand what you're saying there. Can you develop it?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-13T17:41:10.222Z · LW(p) · GW(p)

So you have a batch of things that need to pass muster. The failure mode presented above is that you'll get bored with just saying 'pass, pass, pass...'

The corrective proposed is to ask for the worst item, whether or not it passes, in addition to asking for rejects.

It would be something to think about while looking at a bunch of good ones, and would keep one in practice... if one tries. If you just fake it and no one can tell because they're all passes anyway, then it doesn't work.
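
One way to keep that honest is an occasional spot check of the "worst item" pick against a measured quality score. A small sketch under assumed names, where a lower score means a worse item and an objective score is only computed for the audited batches:

```python
import random

def review_batch(batch, inspector, quality_score, audit_rate=0.1, tolerance=0.0, rng=None):
    """Ask the inspector for rejects *and* the single worst item in the batch,
    then occasionally compare that pick against a measured score, so simply
    faking the pick gets caught eventually."""
    rng = rng or random.Random(0)
    rejects, claimed_worst = inspector(batch)        # inspector must always name a worst item
    spot_checked = rng.random() < audit_rate
    pick_ok = None                                   # unknown unless this batch was audited
    if spot_checked:
        true_low = min(quality_score(item) for item in batch)
        pick_ok = quality_score(claimed_worst) <= true_low + tolerance
    return rejects, claimed_worst, spot_checked, pick_ok
```

The spot check is aimed at the failure mode mentioned further down: picking out something that isn't actually the worst and still believing you're doing a good job.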

Replies from: Pentashagon, Stuart_Armstrong, Decius
comment by Pentashagon · 2013-05-14T19:11:58.699Z · LW(p) · GW(p)

It may also be useful to identify the best item. The difference between the best and the worst is probably a useful measure of quality control, and it also helps ensure that the tests are general enough to detect good as well as bad.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-14T20:07:20.252Z · LW(p) · GW(p)

If your process is good enough that this is a problem, then 'so good you can't tell it's not perfect' could well be the most common case. In any case, it's most important to concentrate the expertise around the border between OK and not-OK.

comment by Stuart_Armstrong · 2013-05-13T21:41:40.855Z · LW(p) · GW(p)

Interesting. May be applicable to some of the situations we're studying...

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-13T22:17:31.936Z · LW(p) · GW(p)

Just watch out that you don't end up picking out something that's not actually the worst, and still think you're doing a good job.

comment by Decius · 2013-05-14T03:12:57.214Z · LW(p) · GW(p)

The failure mode presented above is that you'll get bored with just saying 'pass, pass, pass...'

That looks like an ideal case for automation...

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-14T14:19:47.422Z · LW(p) · GW(p)

And then you miss the one in ten thousand that was no good.

Replies from: Decius
comment by Decius · 2013-05-15T00:15:06.695Z · LW(p) · GW(p)

If you are using humans to mass-test for a failure rate of 1/10,000, you are doing something wrong. Ship ten thousand units, let the end-users test them at the time of use/installation/storage, and ship replacement parts to the user who got a defective part. That way no single human gets bored with testing that part (though they might get bored with inspecting good parts in general).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-15T00:39:40.307Z · LW(p) · GW(p)

Sounds great if failure is acceptable. I don't want my parachute manufacturer taking on that method, though.

Replies from: Decius
comment by Decius · 2013-05-15T03:04:13.554Z · LW(p) · GW(p)

Don't you demand that your parachute packer inspects it when he packs it? Especially given that more than zero parachutes will be damaged after manufacture but before first use.

comment by Decius · 2013-05-14T03:20:25.782Z · LW(p) · GW(p)

I think that you're noticing that automation does not require that the overseer ever develop the skills required to perform the task manually; drivers don't have to learn how to maintain constant speed for hours on end, pilots don't have to develop the endurance to maintain altitude and heading. There is an element of skill atrophy, and of encouraging distractions, and the distractions are likely to result in worse immediate responses to the failure of automation; the skill (responding to emergencies) would have atrophied anyway.