(misleading title removed)

post by The_Jaded_One · 2015-01-28T23:00:58.639Z · 8 comments


An article by AAAI president Tom Dietterich and Microsoft Research director Eric Horvitz, which downplays AI existential risks, has recently received some media attention (BBC, etc.). You can go read it yourself, but the key passage is this:


A third set of risks echo the tale of the Sorcerer’s Apprentice. Suppose we tell a self-driving car to “get us to the airport as quickly as possible!” Would the autonomous driving system put the pedal to the metal and drive at 300 mph while running over pedestrians? Troubling scenarios of this form have appeared recently in the press. Other fears center on the prospect of out-of-control superintelligences that threaten the survival of humanity. All of these examples refer to cases where humans have failed to correctly instruct the AI algorithm in how it should behave.

This is not a new problem. An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands in a literal manner. An AI system should not only act on a set of rules that it is instructed to obey — it must also analyze and understand whether the behavior that a human is requesting is likely to be judged as “normal” or “reasonable” by most people. It should also be continuously monitoring itself to detect abnormal internal behaviors, which might signal bugs, cyberattacks, or failures in its understanding of its actions. In addition to relying on internal mechanisms to ensure proper behavior, AI systems need to have the capability — and responsibility — of working with people to obtain feedback and guidance. They must know when to stop and “ask for directions” — and always be open for feedback.

Some of the most exciting opportunities ahead for AI bring together the complementary talents of people and computing systems. AI-enabled devices are ... (examples follow) ...

In reality, creating real-time control systems where control needs to shift rapidly and fluidly between people and AI algorithms is difficult. Some airline accidents occurred when pilots took over from the autopilots. The problem is that unless the human operator has been paying very close attention, he or she will lack a detailed understanding of the current situation.

AI doomsday scenarios belong more in the realm of science fiction than science fact.

They continue:

However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.


I feel that Horvitz and Dietterich somewhat contradict themselves here. They open their rebuttal by confidently asserting that "This is not a new problem", but then go on to say that an AI system should "be continuously monitoring itself to detect ... failures in its understanding of its actions". Anyone who knows anything about the history of AI will know that AI systems are notoriously bad at recognizing when they have completely "lost the plot", and that the solutions outlined (an AI system understanding what counts as "reasonable", i.e. the commonsense knowledge problem, and an AI usefully self-monitoring in situations of real-life complexity) are both hopelessly beyond the current state of the art. Yes, the problem is not new, but that doesn't mean it isn't a problem.

More importantly, Horvitz and Dietterich don't really engage with the idea that superintelligence makes the control problem qualitatively harder.

Reading between the lines, I suspect that they don't really think superintelligence is a genuine possibility; their mental model of the world seems to be that, from now until eternity, we will have a series of incrementally better Siris and Cortanas which helpfully suggest which present to buy for grandma or what to wear on a trip to Boston. That is, they think that the current state of the art will never be qualitatively superseded: there will never be an AI that is better at AI science than they are, that will self-improve, and so on.

This would make the rest of their position make a lot of sense. 

One question that keeps kicking around in my mind is this: if someone's true but unstated objection to the problem of AI risk is that superintelligence will never happen, how do you change their mind?


8 comments


comment by jsteinhardt · 2015-01-28T23:39:32.391Z

I think the excerpt you give is pretty misleading; it gave me a much different impression of the article (one I had trouble believing, given my previous knowledge of Tom and Eric) than I got when I actually read it. In particular, your quote ends mid-paragraph. The actual paragraph is:

However, we still have a great deal of work to do to address the concerns and risks afoot with our growing reliance on AI systems. Each of the three important risks outlined above (programming errors, cyberattacks, “Sorcerer’s Apprentice”) is being addressed by current research, but greater efforts are needed.

The next paragraph is:

We urge our colleagues in industry and academia to join us in identifying and studying these risks and in finding solutions to addressing them, and we call on government funding agencies and philanthropic initiatives to support this research. We urge the technology industry to devote even more attention to software quality and cybersecurity as we increasingly rely on AI in safety-critical functions. And we must not put AI algorithms in control of potentially-dangerous systems until we can provide a high degree of assurance that they will behave safely and properly.

Can you please fix this ASAP? (And also change your title to actually be an accurate synopsis of the article as well?) Otherwise you're just adding to the noise.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2015-01-29T07:25:59.763Z

I disagree that it is as inaccurate as you claim. Specifically, they did actually say that "AI doomsday scenarios belong more in the realm of science fiction". I don't think it's inaccurate to quote what someone actually said.

When they talk about "having more work to do", etc., it seems that they are emphasizing the risks of sub-human intelligence and de-emphasizing the risks of superintelligence.

Of course, LW being LW, I know that balance and fairness are valued very highly, so would you kindly suggest what you think the title should be, and I will change it.

I will also add in the paragraphs you suggest.

comment by Vaniver · 2015-01-29T00:19:02.405Z

Yeah, echoing jsteinhardt, I think you misread the letter, and science journalists in general are not to be trusted when it comes to reporting on AI or AI dangers. Dietterich is the second and Horvitz the third listed signatory of the FLI open letter, and this letter seems to me to be saying "hey general public, don't freak out about the Terminator, the AI research field has this under control--we recognize that safety is super important and are working hard on it (and you should fund more of it)."

Replies from: The_Jaded_One
comment by The_Jaded_One · 2015-01-29T07:29:25.725Z

AI research field has this under control--we recognize that safety is super important and are working hard on it

Great, except

a) they don't have it under control and

b) no-one in mainstream AI academia is working on the control problem for superintelligence

Replies from: Vaniver
comment by Vaniver · 2015-01-29T16:47:45.787Z

So, can you find the phrase in the letter that corresponds to the MIRI open problem that Nate Soares presented at the AAAI workshop on AI ethics, which Dietterich attended a few days later?

If not, maybe you should reduce your confidence about your interpretation. My suspicion is that MIRI is rapidly becoming mainstream, and that the FLI grant is attracting even more attention. Perhaps more importantly, I think we're in a position where it's more effective to treat AI safety issues as mainstream than fringe.

I also think that we're interpreting "under control" differently. I'm not making the claim that the problem is solved, just that it's being worked on (in the way that academia works on these problems), and getting Congress or the media or so on involved in a way not mediated by experts is likely to do more harm than good.

comment by JoshuaZ · 2015-01-29T00:26:24.682Z

One question that keeps kicking around in my mind is this: if someone's true but unstated objection to the problem of AI risk is that superintelligence will never happen, how do you change their mind?

Note that superintelligence doesn't by itself pose much of a risk. The risk requires extreme superintelligence, together with variants of the orthogonality thesis and an intelligence that is able to achieve its superintelligence rapidly. The first two of these seem to be much easier to convince people of than the third, which shouldn't be that surprising, because the third is really the most questionable. (At the same time, there seems to be a hard core of people who absolutely won't budge on orthogonality. I disagree with such people on such fundamental intuitions and other issues that I'm not sure I can model well what they are thinking.)

Replies from: The_Jaded_One
comment by The_Jaded_One · 2015-01-29T07:42:11.691Z

The orthogonality thesis, in the form "you can't get an ought from an is", is widely accepted or at least widely considered a popular position in public discourse.

It is true that slow superintelligence is less risky, but that argument isn't explicitly made in this letter.
