Notes on the Safety in Artificial Intelligence conference

post by UmamiSalami · 2016-07-01T00:36:57.309Z · LW · GW · Legacy · 15 comments

These are my notes and observations after attending the Safety in Artificial Intelligence (SafArtInt) conference, which was co-hosted by the White House Office of Science and Technology Policy and Carnegie Mellon University on June 27 and 28. This isn't an organized summary of the content of the conference; rather, it's a selection of points which are relevant to the control problem. As a result, it suffers from selection bias: it may look as though superintelligence and control-problem-relevant issues were discussed frequently, when in reality they came up less often and I simply didn't write much about the more mundane parts.

SafArtInt was the third in a planned series of four conferences. The purpose of the conference series was twofold: the OSTP wanted to get other parts of the government moving on AI issues, and it also wanted to inform public opinion.

The other three conferences are about near term legal, social, and economic issues of AI. SafArtInt was about near term safety and reliability in AI systems. It was effectively the brainchild of Dr. Ed Felten, the deputy U.S. chief technology officer for the White House, who came up with the idea for it last year. CMU is a top computer science university and many of its own researchers attended, as well as some students. There were also researchers from other universities, some people from private-sector AI (including both Silicon Valley and government contracting), government researchers and policymakers from groups such as DARPA and NASA, a few people from the military/DoD, and a few control problem researchers. As far as I could tell, everyone except a few university researchers was from the U.S., although I did not meet many people. There were about 70-100 people watching the presentations at any given time, and I had conversations with about twelve of the people who were not affiliated with existential risk organizations, as well as, of course, all of those who were. The conference was split, with a few presentations on the 27th and the majority on the 28th. Not everyone was there for both days.

Felten believes that neither "robot apocalypses" nor "mass unemployment" are likely. It soon became apparent that the majority of others present at the conference felt the same way with regard to superintelligence. The general intention among researchers and policymakers at the conference could be summarized as follows: we need to make sure that the AI systems we develop in the near future will not be responsible for any accidents, because if accidents do happen then they will spark public fears about AI, which would lead to a dearth of funding for AI research and an inability to realize the corresponding social and economic benefits. Of course, that doesn't change the fact that they strongly care about safety in its own right and have significant pragmatic needs for robust and reliable AI systems.

Most of the talks were about verification and reliability in modern-day AI systems. So they were concerned with AI systems that would give poor results or be unreliable in the narrow domains where they are being applied in the near future. They mostly focused on "safety-critical" systems, where failure of an AI program would result in serious negative consequences: automated vehicles were a common topic of interest, as well as the use of AI in healthcare systems. One recurring theme was that we have to be more rigorous in demonstrating safety and do actual hazard analyses on AI systems; another was that the AI safety field needs to succeed where the cybersecurity field has failed. Another general belief was that long term AI safety, such as concerns about the ability of humans to control AIs, was not a serious issue.

On average, the presentations were moderately technical. They were mostly focused on machine learning systems, although there was significant discussion of cybersecurity techniques.

The first talk was given by Eric Horvitz of Microsoft. He discussed some approaches for pushing AI safety in new directions. Instead of merely trying to reduce the errors spotted according to one model, we should look out for "unknown unknowns" by stacking models and looking at problems which appear on any of them, a theme that other researchers returned to in later presentations. He discussed optimization under uncertain parameters, sensitivity analysis with respect to uncertain parameters, and 'wireheading' or short-circuiting of reinforcement learning systems (which he believes can be guarded against by using 'reflective analysis'). Finally, he brought up the concerns about superintelligence, which sparked amused reactions in the audience. He said that scientists should address concerns about superintelligence, which he aptly described as the 'elephant in the room', noting that it was the reason some people were at the conference. He said that scientists will have to engage with public concerns, and he noted that some experts are worried about superintelligence and that their concerns will also have to be engaged with. He did not comment on whether he believed these concerns were reasonable.
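
To make the "stacking models" idea concrete, here is a minimal sketch of how disagreement between independently trained models can flag candidate unknown unknowns. This is my own illustration, not anything Horvitz presented, and it assumes scikit-learn-style classifiers with a predict() method.

```python
# Minimal sketch (my own illustration): flag inputs on which an ensemble of
# independently trained models disagrees, as a cheap proxy for spotting
# "unknown unknowns" that no single model would surface.
import numpy as np

def disagreement_flags(models, X):
    """Return a boolean mask over the rows of X marking inputs on which the
    ensemble's predictions are not unanimous."""
    preds = np.stack([m.predict(X) for m in models])       # shape: (n_models, n_samples)
    n_distinct = np.array([len(set(col)) for col in preds.T])
    return n_distinct > 1

# Usage sketch: train several models with different architectures or resampled
# data, then route flagged inputs to a human reviewer or a conservative fallback.
# flagged = disagreement_flags([model_a, model_b, model_c], X_new)
```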

An issue which came up in the Q&A afterwards was that we need to deal with mis-structured utility functions in AI, because the specific tradeoffs and utilities which humans claim to value often lead to results which those same humans don't like. So we need to have structural uncertainty about our utility models. The difficulty of finding good objective functions for AIs would eventually be discussed in many other presentations as well.

The next talk was given by Andrew Moore of Carnegie Mellon University, who claimed that his talk represented the consensus of computer scientists at the school. He claimed that the stakes of AI safety were very high - namely, that AI has the capability to save many people's lives in the near future, but if there are any accidents involving AI then public fears could lead to freezes in AI research and development. He highlighted the public's irrational tendencies, whereby a single accident can cause people to ignore the hundreds of invisible lives saved. He specifically mentioned a 12-24 month timeframe for these issues.

Moore said that verification of AI system safety will be difficult due to the combinatorial explosion of AI behaviors. He talked about meta-machine-learning as a solution to this, something which is being investigated under the direction of Lawrence Schuette at the Office of Naval Research. Moore also said that military AI systems require high verification standards and that development timelines for these systems are long. He talked about two different approaches to AI safety, stochastic testing and theorem proving - the process of doing the latter often leads to the discovery of unsafe edge cases.
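
To illustrate the contrast between the two approaches, here is a toy sketch of stochastic testing against a stated safety property; the controller, the property, and the numbers are hypothetical stand-ins of my own, not anything from Moore's talk. Theorem proving would instead attempt to show that the property holds for all inputs, which is where the unsafe edge cases tend to surface.

```python
# Toy sketch of stochastic safety testing: sample many random scenarios and
# check that the system under test never violates a stated safety property.
# Everything here is a hypothetical stand-in, not Moore's actual setup.
import random

def required_distance(speed_mps):
    # Assumed safety property: required following distance grows with speed.
    return 2.0 * speed_mps

def controller_distance(speed_mps):
    # Stand-in for the AI controller under test.
    return 2.5 * speed_mps + 1.0

def stochastic_test(n_trials=100_000, max_speed=40.0):
    for _ in range(n_trials):
        speed = random.uniform(0.0, max_speed)
        if controller_distance(speed) < required_distance(speed):
            return f"violation found at speed = {speed:.2f} m/s"
    return "no violations found (evidence of safety, not a proof)"

print(stochastic_test())
```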

He also discussed AI ethics, giving an example 'trolley problem' where AI cars would have to choose whether to hit a deer in order to provide a slightly higher probability of survival for the human driver. He said that we would need hash-defined constants to tell vehicle AIs how many deer a human is worth. He also said that we would need to find compromises in death-pleasantry tradeoffs, for instance where the safety of self-driving cars depends on the speed and routes on which they are driven. He compared the issue to civil engineering where engineers have to operate with an assumption about how much money they would spend to save a human life.
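
For illustration only, here is roughly what such constants could look like inside a planner's cost function; the names and numbers below are placeholders of my own, not values proposed at the conference.

```python
# Illustrative placeholders for the kind of "hash-defined constants" Moore
# alluded to (in C these would be #define lines); the values are invented.
VALUE_OF_HUMAN_LIFE = 1.0
VALUE_OF_DEER = 0.001           # the contentious "how many deer is a human worth" constant
VALUE_OF_MINUTE_SAVED = 1e-7    # a convenience ("pleasantry") term

def expected_cost(p_human_fatality, p_deer_fatality, minutes_saved):
    """Expected cost of a candidate maneuver or route."""
    return (p_human_fatality * VALUE_OF_HUMAN_LIFE
            + p_deer_fatality * VALUE_OF_DEER
            - minutes_saved * VALUE_OF_MINUTE_SAVED)

# A planner would pick the candidate with the lowest expected cost, which is
# exactly where the choice of constants becomes an ethical decision.
swerve    = expected_cost(p_human_fatality=1e-6, p_deer_fatality=0.0, minutes_saved=0.0)
no_swerve = expected_cost(p_human_fatality=0.0,  p_deer_fatality=0.9, minutes_saved=0.0)
```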

He concluded by saying that we need policymakers, company executives, scientists, and startups to all be involved in AI safety. He said that the research community stands to gain or lose together, and that there is a shared responsibility among researchers and developers to avoid triggering another AI winter through unsafe AI designs.

The next presentation was by Richard Mallah of the Future of Life Institute, who was there to represent "Medium Term AI Safety". He pointed out the explicit/implicit distinction between different modeling techniques in AI systems, as well as the explicit/implicit distinction between different AI actuation techniques. He talked about the difficulty of value specification and the concept of instrumental subgoals as an important issue in the case of complex AIs which are beyond human understanding. He said that even a slight misalignment of AI values with regard to human values along one parameter could lead to a strongly negative outcome, because machine learning parameters don't strictly correspond to the things that humans care about.

Mallah stated that open-world discovery leads to self-discovery, which can lead to reward hacking or a loss of control. He underscored the importance of causal accounting, which is distinguishing causation from correlation in AI systems. He said that we should extend machine learning verification to self-modification. Finally, he talked about introducing non-self-centered ontology to AI systems and bounding their behavior.

The audience was generally quiet and respectful during Richard's talk. I sensed that at least a few of them labelled him as part of the 'superintelligence out-group' and dismissed him accordingly, but I did not learn what most people's thoughts or reactions were. In the next panel, which featured three speakers, he received no questions about his presentation or ideas.

Tom Mitchell from CMU gave the next talk. He talked about both making AI systems safer, and using AI to make other systems safer. He said that risks to humanity from other kinds of issues besides AI were the "big deals of 2016" and that we should make sure that the potential of AIs to solve these problems is realized. He wanted to focus on the detection and remediation of all failures in AI systems. He said that it is a novel issue that learning systems defy standard pre-testing ("as Richard mentioned") and also brought up the purposeful use of AI for dangerous things.

Some interesting points were raised in the panel. Andrew Moore did not have a direct response to a question about the implications of AI ethics being determined by the predominantly white populations of the US and UK, where most AIs are being developed. He said that ethics in AIs will have to be decided by society, regulators, manufacturers, and human rights organizations in conjunction. He also said that our cost functions for AIs will have to get more and more complicated as AIs get better, and he said that he wants to separate unintended failures from superintelligence-type scenarios. On trolley problems in self-driving cars and similar issues, he said "it's got to be complicated and messy."

Dario Amodei of Google Deepbrain, who co-authored the paper on concrete problems in AI safety, gave the next talk. He said that the public focus is too much on AGI/ASI and that he wants more focus on concrete/empirical approaches. He discussed the same problems that pose issues in advanced general AI, including flawed objective functions and reward hacking. He said that he sees long term concerns about AGI/ASI as "extreme versions of accident risk" and that he thinks it's too early to work directly on them, but he believes that if you want to deal with them then the best way to do it is to start with safety in current systems. Mostly he summarized the Google paper in his talk.

In her presentation, Claire Le Goues of CMU said "before we talk about Skynet we should focus on problems that we already have." She mostly talked about analogies between software bugs and AI safety, the similarities and differences between the two and what we can learn from software debugging to help with AI safety.

Robert Rahmer of IARPA discussed CAUSE, a cyberintelligence forecasting program which promises to help predict cyber attacks; the program is still being put together.

In the panel of the above three, autonomous weapons were discussed, but no clear policy stances were presented.

John Launchbury gave a talk on DARPA research and the big picture of AI development. He pointed out that DARPA work leads to commercial applications and that progress in AI comes from sustained government investment. He classified AI capabilities into "describing," "predicting," and "explaining" in order of increasing difficulty, and he pointed out that old fashioned "describing" still plays a large role in AI verification. He said that "explaining" AIs would need transparent decisionmaking and probabilistic programming (the latter would also be discussed by others at the conference).

The next talk came from Jason Gaverick Matheny, the director of IARPA. Matheny talked about four requirements in current and future AI systems: verification, validation, security, and control. He wanted "auditability" in AI systems as a weaker form of explainability. He talked about the importance of "corner cases" for national intelligence purposes - the low-probability, high-stakes situations where we have limited data. These are situations where there is a significant need for analysis but where the traditional machine learning approach doesn't work because of its overwhelming focus on data. He also noted that national defense has a slower decision tempo, longer timelines, and a longer-range view of future events.

He said that assessing local progress in machine learning development would be important for global security and that we therefore need benchmarks to measure progress in AIs. He ended with a concrete invitation for research proposals from anyone (educated or not), for both large scale research and for smaller studies ("seedlings") that could take us "from disbelief to doubt".

The difference in timescales between different groups was something I noticed later on, after hearing someone from the DoD describe their agency as having a longer timeframe than the Department of Homeland Security, and someone from the White House describe their work as crisis-reactive.

The next presentation was from Andrew Grotto, senior director of cybersecurity policy at the National Security Council. He drew a close parallel between the issue of genetically modified crops in Europe in the 1990s and modern-day artificial intelligence. He pointed out that Europe utterly failed to achieve widespread cultivation of GMO crops as a result of public backlash. He said that the widespread economic and health benefits of GMO crops were ignored by the public, who instead focused on a few health incidents which undermined trust in the government and crop producers. He had three key points: that risk frameworks matter, that you should never assume that the benefits of new technology will be widely perceived by the public, and that we're all in this together with regard to funding, research progress and public perception.

In the Q&A between Launchbury, Matheny, and Grotto after Grotto's presentation, it was mentioned that the economic interests of farmers worried about displacement also played a role in populist rejection of GMOs, and that a similar dynamic could play out with regard to automation causing structural unemployment. Grotto was also asked what to do about bad publicity which seeks to sink progress in order to avoid risks. He said that meetings like SafArtInt and open public dialogue were good.

One person asked what Launchbury wanted to do about AI arms races with multiple countries trying to "get there" and whether he thinks we should go "slow and secure" or "fast and risky" in AI development, a question which provoked laughter in the audience. He said we should go "fast and secure" and wasn't concerned. He said that secure designs for the Internet once existed, but the one which took off was the one which was open and flexible.

Another person asked how we could avoid discounting outliers in our models, referencing Matheny's point that we need to include corner cases. Matheny affirmed that data quality is a limiting factor for many of our machine learning capabilities, and said that IARPA generally tries to include outliers until it is sure that they are erroneous.

Another presentation came from Tom Dietterich, president of the Association for the Advancement of Artificial Intelligence. He said that we have not focused enough on safety, reliability and robustness in AI and that this must change. Much like Eric Horvitz, he drew a distinction between robustness against errors within the scope of a model and robustness against unmodeled phenomena. On the latter issue, he talked about solutions such as expanding the scope of models, employing multiple parallel models, and doing creative searches for flaws - the latter doesn't enable verification that a system is safe, but it nevertheless helps discover many potential problems. He talked about knowledge-level redundancy as a method of avoiding misspecification - for instance, systems could identify objects by an "ownership facet" as well as by a "goal facet" to produce a combined concept with less likelihood of overlooking key features. He said that this would require wider experiences and more data.
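
As a rough sketch of how I understood the knowledge-level redundancy idea (the facet classifiers here are hypothetical stand-ins of my own, not Dietterich's system):

```python
# Hedged sketch: accept an identification only when several independent
# "facet" representations of the object agree on it.
def identify(candidate, facet_classifiers, required_agreement=1.0):
    """Return a label only if enough facet classifiers agree on it, else None."""
    votes = [classify(candidate) for classify in facet_classifiers]
    top = max(set(votes), key=votes.count)
    agreement = votes.count(top) / len(votes)
    return top if agreement >= required_agreement else None

# Usage sketch: identify(obj, [ownership_facet, goal_facet, appearance_facet])
# returns a label only when every facet-based view reaches the same conclusion,
# reducing the chance that a single misspecified representation drives the decision.
```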

There were many other speakers who brought up a similar set of issues: the use of cybersecurity techniques to verify machine learning systems, the failures of cybersecurity as a field, opportunities for probabilistic programming, and the need for better success in AI verification. Inverse reinforcement learning was extensively discussed as a way of assigning values. Jeannette Wing of Microsoft talked about the need for AIs to reason about the continuous and the discrete in parallel, as well as the need for them to reason about uncertainty (with potential meta levels all the way up). One point which was made by Sarah Loos of Google was that proving the safety of an AI system can be computationally very expensive, especially given the combinatorial explosion of AI behaviors.

In one of the panels, the idea of government actions to ensure AI safety was discussed. No one was willing to say that the government should regulate AI designs. Instead they stated that the government should be involved in softer ways, such as guiding and working with AI developers, and setting standards for certification.

Pictures: https://imgur.com/a/49eb7

In between these presentations I had time to speak to individuals and listen in on various conversations. A high ranking person from the Department of Defense stated that the real benefit of autonomous systems would be in terms of logistical systems rather than weaponized applications. A government AI contractor drew the connection between Mallah's presentation and the recent press revolving around superintelligence, and said he was glad that the government wasn't worried about it.

I talked to some insiders about the status of organizations such as MIRI, and found that the current crop of AI safety groups could use additional donations to become more established and expand their programs. There may be some issues with the organizations being sidelined; after all, the Google Deepbrain paper was essentially similar to a lot of work by MIRI, just expressed in somewhat different language, and was more widely received in mainstream AI circles.

In terms of careers, I found that there is significant opportunity for a wide range of people to contribute to improving government policy on this issue. Working at a group such as the Office of Science and Technology Policy does not necessarily require advanced technical education, as you can just as easily enter straight out of a liberal arts undergraduate program and build a successful career as long as you are technically literate. (At the same time, the level of skepticism about long term AI safety at the conference hinted to me that the signalling value of a PhD in computer science would be significant.) In addition, there are large government budgets in the seven or eight figure range available for qualifying research projects. I've come to believe that it would not be difficult to find or create AI research programs that are relevant to long term AI safety while also being practical and likely to be funded by skeptical policymakers and officials.

I also realized that there is a significant need for people who are interested in long term AI safety to have basic social and business skills. Since there is so much need for persuasion and compromise in government policy, there is a lot of value to be had in being communicative, engaging, approachable, appealing, socially savvy, and well-dressed. This is not to say that everyone involved in long term AI safety is missing those skills, of course.

I was surprised by the refusal of almost everyone at the conference to take long term AI safety seriously, as I had previously held the belief that it was more of a mixed debate given the existence of expert computer scientists who were involved in the issue. I sensed that the recent wave of popular press and public interest in dangerous AI has made researchers and policymakers substantially less likely to take the issue seriously. None of them seemed to be familiar with actual arguments or research on the control problem, so their opinions didn't significantly change my outlook on the technical issues. I strongly suspect that the majority of them had their first or possibly only exposure to the idea of the control problem after seeing badly written op-eds and news editorials featuring comments from the likes of Elon Musk and Stephen Hawking, which would naturally make them strongly predisposed to not take the issue seriously. In the run-up to the conference, websites and press releases didn't say anything about whether this conference would be about long or short term AI safety, and they didn't make any reference to the idea of superintelligence.

I sympathize with the concerns and strategy given by people such as Andrew Moore and Andrew Grotto, which make perfect sense if (and only if) you assume that worries about long term AI safety are completely unfounded. For the community that is interested in long term AI safety, I would recommend that we avoid competitive dynamics by (a) demonstrating that we are equally strong opponents of bad press, inaccurate news, and irrational public opinion which promotes generic uninformed fears over AI, (b) explaining that we are not interested in removing funding for AI research (even if you think that slowing down AI development is a good thing, restricting funding yields only limited benefits in terms of changing overall timelines, whereas those who are not concerned about long term AI safety would see a restriction of funding as a direct threat to their interests and projects, so it makes sense to cooperate here in exchange for other concessions), and (c) showing that we are scientifically literate and focused on the technical concerns. I do not believe that there is necessarily a need for the two "sides" on this to be competing against each other, so it was disappointing to see an implication of opposition at the conference.

Anyway, Ed Felten announced a request for information from the general public, seeking popular and scientific input on the government's policies and attitudes towards AI: https://www.whitehouse.gov/webform/rfi-preparing-future-artificial-intelligence

Overall, I learned quite a bit and benefited from the experience, and I hope the insight I've gained can be used to improve the attitudes and approaches of the long term AI safety community.

15 comments


comment by James_Miller · 2016-07-01T00:57:59.131Z · LW(p) · GW(p)

An excellent summary which raises my estimate of doom.

Replies from: Dagon
comment by Dagon · 2016-07-01T01:52:46.909Z · LW(p) · GW(p)

has that change in estimate caused any behavior or decision changes?

Replies from: James_Miller
comment by James_Miller · 2016-07-01T02:39:59.597Z · LW(p) · GW(p)

Not yet.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2016-07-11T23:40:18.806Z · LW(p) · GW(p)

A few questions, and requests for elaboration:

  • In what ways, and for what reasons, did people think that cybersecurity had failed?
  • What techniques from cybersecurity were thought to be relevant?

  • Any idea what Mallah meant by “non-self-centered ontologies”? I am imagining things like CIRL (https://arxiv.org/abs/1606.03137)

Can you briefly define (any of) the following terms (or give your best guess at what was meant by them)?:

  • meta-machine-learning
  • reflective analysis
  • knowledge-level redundancy
Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-12T15:24:19.112Z · LW(p) · GW(p)

In what ways, and for what reasons, did people think that cybersecurity had failed?

Mostly that it's just so hard to keep things secure. Organizations have been trying for decades to ensure security but there are continuous failures and exploits. One person mentioned that one third of exploits take advantage of security systems themselves.

What techniques from cybersecurity were thought to be relevant?

Don't really remember any specifics, but I think formal methods were part of it.

Any idea what Mallah meant by “non-self-centered ontologies”? I am imagining things like CIRL (https://arxiv.org/abs/1606.03137)

I didn't know to be honest.

Can you briefly define (any of) the following terms (or give your best guess at what was meant by them)?: meta-machine-learning, reflective analysis, knowledge-level redundancy

I remember that knowledge level redundancy involves giving multiple representations of concepts and things to avoid misspecification/misrepresentation of human ideas. So you can define a concept or an object in multiple ways, and then check that a given object fits all those definitions before being certain about its identity.

comment by David Scott Krueger (formerly: capybaralet) (capybaralet) · 2016-07-11T21:53:47.621Z · LW(p) · GW(p)

FYI, Dario is from Google Brain (which is distinct from Google DeepMind).

comment by username2 · 2016-07-02T01:25:31.210Z · LW(p) · GW(p)

I was encouraged to see that Paul Christiano collaborated on the Google paper, and that it seemed to recognize many of the failure modes of AGI. I think it's likely that with more/stronger evidence of rapid self-improvement the full AI safety arguments will make sense to most of those researchers at least. My personal prediction/guess is that the next big public showcase of AI will be code/mathematics generation of superhuman quality (e.g. deep neural nets out-coding/out-solving the best coder/mathematician in the world at arbitrary problem statements) which will make it fairly obvious that rapid recursive self-improvement is highly likely. The obvious next step is to tell it to self-optimize but hopefully the initial results will scare the researchers.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2016-07-10T20:08:51.112Z · LW(p) · GW(p)

deep neural nets out-coding/out-solving the best coder

As far as I know, neural networks (deep or shallow) are hopelessly unsuited to programming and mathematics.

Replies from: gwern
comment by gwern · 2016-07-11T00:01:20.502Z · LW(p) · GW(p)

Hopelessly? I wouldn't say hopelessly. We just saw a nice new use of NNs in premise selection for theorem-proving which expands the number of proofs which can now be automatically proven:

And if you search 'neural programming' or 'neural interpretation', there are a number of interesting papers on getting NNs to write or implement programs. Nothing yet that could be considered useful, but some results are interesting and show promise like:

(Blanket statements about what NNs can or can't do lately are ill-advised unless you spend a lot of time reading Arxiv.)

Replies from: The_Jaded_One
comment by The_Jaded_One · 2016-07-17T10:09:42.647Z · LW(p) · GW(p)

We just saw a nice new use of NNs in premise selection for theorem-proving

Yes, but the NN itself is not doing the theorem proving. A theorem prover, which is not a neural network, actually does the work. The NN is providing advice to it. Any machine learning system could provide heuristics to a theorem prover.

Blanket statements about what NNs can or can't do lately are ill-advised unless you spend a lot of time reading Arxiv.

I would stand by my original statement - if you actually read through these papers they do demonstrate the amount of acrobatics that has to be done to join the world of neural networks to anything discrete like programming or proofs.

That's not to say that those acrobatics won't eventually yield some useful hybrid approach, but I think people like the poster above will just see lots of hype about neural networks/deep learning, and assume that it's some kind of magic that easily tackles any problem type. Well, actually, programming and mathematics are uniquely unsuited to neural networks, and that fact is central to the literature that you have linked to. That's why people are working on Neural Turing Machines etc.

Replies from: gwern
comment by gwern · 2016-07-17T16:48:47.026Z · LW(p) · GW(p)

The NN is providing advice to it. Any machine learning system could provide heuristics to a theorem prover.

I don't think this is a meaningful distinction. The system requires NN for top performance. Would you say that AlphaGo doesn't show NNs can play Go because 'really, it's the tree search which is doing the Go playing, all the CNN is doing is providing advice to it'?

I would stand by my original statement - if you actually read through these papers they do demonstrate the amount of acrobatics that has to be done to join the world of neural networks to anything discrete like programming or proofs.

Explain again how working NN systems delivering results, sometimes better than previous approaches, despite minimal research effort so far, shows that 'neural networks (deep or shallow) are hopelessly unsuited to programming and mathematics.'

This must be some new and novel definition of 'hopeless' that I am hitherto unfamiliar with, where it now means not 'nearly impossible' but 'possible and already done sometimes'. As a descriptivist, I of course must move with the times and try to understand new uses of old words, and so I don't object to this use, but I do want to be sure I am understanding you correctly.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2016-07-19T16:25:48.468Z · LW(p) · GW(p)

Approaches that make some use of Neural Networks, or incorporate them in some way, are indeed making progress. What I want to make clear is that you can't just take some code, throw deep learning at it, and abracadabra, you have a superhuman AI programmer.

comment by The_Jaded_One · 2016-07-01T10:01:22.631Z · LW(p) · GW(p)

None of them seemed to be familiar with actual arguments or research on the control problem

that is unfortunate

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-01T16:38:58.279Z · LW(p) · GW(p)

In fairness, I didn't directly ask any of them about it, and it wasn't really discussed. There could have been some who had read the relevant work, and many who believed it to be reasonable, but just didn't happen to speak up during the presentations or in any of the conversations I was in.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2016-07-07T22:58:59.672Z · LW(p) · GW(p)

Hmmm ok.

It's interesting that this divide is appearing, and it does make me wonder how we can get more people to take the value alignment problem seriously.