What are the best arguments and/or plans for doing work in "AI policy"?
post by Eli Tyre (elityre) · 2019-12-09T07:04:57.398Z · LW · GW · 2 comments
This is a question post.
I'm looking to get oriented in the space of "AI policy": interventions that involve world governments (particularly the US government) and existential risk from strong AI.
When I hear people talk about "AI policy", my initial reaction is skepticism, because (so far) I can think of very few actions that governments could take that seem to help with the core problems of AI x-risk. However, I haven't read much about this area, and I don't know what actual policy recommendations people have in mind.
So what should I read to start? Can people link to plans and proposals in AI policy space?
Research papers, general interest web pages, and one's own models, are all admissible.
Thanks.
Answers
answer by Shri
I'll post the obvious resources:
Future of Life Institute's summaries of AI policy resources
AI Governance: A Research Agenda (Allan Dafoe, FHI)
Allan Dafoe's research compilation: Probably just the AI section is relevant; some overlap with FLI's list.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018). Brundage and Avin et al.: One of the earlier "large collaboration" papers I can recall; probably only the AI Politics and AI Ideal Governance sections are relevant for you.
Policy Desiderata for Superintelligent AI: A Vector Field Approach: Far from object-level, in Bostrom's style, but tries to be thorough in what AI policy should try to accomplish at a high level.
CSET's Reports: Very new AI policy org, but pretty exciting as it's led by the former head of IARPA, so their recommendations probably have a higher chance of being implemented than the academic think tank reference class. Their work so far focuses on documenting China's developments and US policy recommendations, e.g. making US immigration more favorable for AI talent.
Published documents can trail the thinking of leaders at orgs by quite a lot. You might be better off emailing someone at the relevant orgs (CSET, GovAI, etc.) with your goals and what you plan to read, and asking what they would recommend, so you can catch up more quickly.
↑ comment by Eli Tyre (elityre) · 2019-12-09T19:47:13.414Z · LW(p) · GW(p)
The "obvious resources" are just what I want. Thanks.
↑ comment by Ofer (ofer) · 2019-12-09T15:55:29.901Z · LW(p) · GW(p)
Also, this 80,000 Hours podcast episode with Allan Dafoe.
answer by Rohin Shah (rohinmshah)
What? I feel like I must be misunderstanding, because it seems like there are broad categories of things that governments can do that are helpful, even if you're only worried about the risk of an AI optimizing against you. I guess I'll just list some, and you can tell me why none of these work:
- Funding safety research
- Building aligned AIs themselves
- Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")
- Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")
I don't think there's a concrete plan that I would want a government to start on today, but I'd be surprised if there weren't such plans in the future, once we know more (both from more research, and as the AI risk problem becomes clearer).
You can also look at the papers under the category "AI strategy and policy" in the Alignment Newsletter database.
↑ comment by Eli Tyre (elityre) · 2019-12-10T19:41:48.557Z · LW(p) · GW(p)
As I said, I haven't oriented on this subject yet, and I'm talking from my intuition, so I might be about to say stupid things. (And I might think different things on further thought. I only 60% to 75% "buy" the arguments that I make here.)
I expect we have very different worldviews about this area, so I'm first going to lay out a general argument, which is intended to give context, and then respond to your specific points. Please let me know if anything I say seems crazy or obviously wrong.
General Argument
My intuition says that in general, governments can only be helpful after the core, hard problems of alignment have been solved. After that point, there isn't much for them to do, and before that point, I think they're much more likely to cause harm, for the sorts of reasons I outline in this [EA(p) · GW(p)] comment.
(There is an argument that EAs should go into policy because the default trajectory involves governments interfering in the development of powerful AI, and having EAs in the mix is apt to make that interference smaller and saner. I'm sympathetic to that, if that's the plan.)
To say it more specifically: governments are much stupider than people, and can only do sane, useful things if there is a very clear, legible, common-knowledge standard for which things are good and which things are bad.
- Governments are not competent to do things like assess which technical research is promising, especially not in fields that are as new and confusing as AI safety, where the experts themselves disagree about which approaches are promising. But my impression is that governments are mostly not even competent to do much more basic assessments, like "which kinds of batteries for electric cars seem promising to invest in (or are even physically plausible)?"
- There do appear to be some exceptions to this. DARPA and IARPA seem to be well designed for solving some kinds of important engineering problems, via a mechanism that spawns many projects and culls most of them. I bet DARPA could make progress on AI alignment if there were clear, legible targets to try to hit.
- Similarly, governments can constrain the behavior of other actors via law, but this only seems useful if it is very clear what standards they should be enforcing. If legislatures freak out about the danger of AI and then come up with the best compromise solution they can for making sure "no one does anything dangerous" (from an at-best-partial understanding of the technical details), I expect this to be harmful on net, because it inserts semi-random obstacles in the way of the technical experts on the ground trying to solve the problem.
. . .
There are only two situations in which I can foresee policy having a major impact: a non-extreme story, and an extreme story.
The first, non-extreme story is when all of the following conditions hold...
1) Earth experiences a non-local takeoff.
2) We have known, common-knowledge technical solutions to intent alignment.
3) Those technical solutions are not competitive with alternative methods that "cut corners" on alignment but still succeed in hitting the operator's goals in the short term.
In this case we know what needs to be done to ensure safe AI, but we have a commons problem: Everyone is tempted to forgo the alignment "best practices" because they're very expensive (in money, or time, or whatever) and you can get your job done without any fancy alignment tech.
But every unit of unaligned optimization represents a kind of "pollution", which adds up to a whimper, or eventually catalyzes a bang [AF · GW].
In this case, what governments should do is simple: tax, or outlaw, unalignment pollution. We still have a bit of an issue in that this tax or ban needs to be global, and free riders who do pollute will get huge gains from their cheaper unaligned AI, but this is basically analogous to the problem of governments dealing with global climate change.
But if any of the above conditions don't hold, then it seems like our story starts to fall apart.
1) If takeoff is local, then I'm confused about how things are supposed to play out. DeepMind (or some other team) builds a powerful AI system that automates AI research, but is constrained by the government telling them what to do? How does the government know how to manage the intelligence explosion better than the literal, by-definition leaders of the field?
I mean, I hope they use the best alignment technology available, but if the only reason they are doing that is "it's the law", something has already gone horribly wrong. I don't expect constraints made by governments to compensate for a team that doesn't know or care about alignment. And given how effective most bureaucracies are, I would prefer that a team that does know and care about alignment not need to work around constraints imposed by a legislature somewhere.
(More realistically, in a local takeoff scenario, it seems plausible that the leading team is nationalized, or that there is otherwise very close cooperation between the technical leaders of that team and the military (and political?) leaders of the state, in the style of the Manhattan Project.
But this doesn't look much like "policy" as we typically think about it, and the only way to influence this development would be to be part of the technical team, or to be one of the highest-ranking members of the military, up to the president him/herself. [I have more to say about the Manhattan Project, and its relevance to large AI projects, but I'll go into that another time.])
But maybe the government is there as a backstop to shut down any careless or reckless projects, while the leaders are slowly and carefully checking and double-checking the alignment of their system? In which case, see the extreme scenario below.
2) If we don't have solutions to intent alignment, or we don't have common knowledge that they work, then we don't have anything that we can carve off as "fine and legal" in contrast to the systems that are bad and should be taxed or outlawed.
If we don't have such a clear distinction, then there's not much that we can do, except ban AI, or ML entirely (or maybe ban AI above a certain compute threshold, or optimization threshold), which seems like a non-starter.
3) If there aren't more competitive alternatives to intent-aligned systems, then we don't need to bother with policy: the natural thing to do is to use intent-aligned systems.
The second, extreme scenario in which government can help:
We're establishing a global coalition that is going to collectively build safe AI, and we're going to make building advanced AI outside of that coalition illegal.
Putting the world on lockdown, and surveilling all the compute to make sure that no one is building an AI, while the global coalition figures out how to launch a controlled, aligned intelligence explosion.
This seems maybe good, if totally implausible from looking at today's world.
Aside from those two situations, I don't see how governments can help, because governments are not savvy enough to do the right thing on technically complicated topics.
Responding to your specific points
- Funding safety research
This is only any use at all if governments can easily identify tractable research programs that actually contribute to AI safety, instead of just having "AI safety" as a cool tagline. I guess you imagine that will be the case in the future? Or maybe you think that it doesn't matter if they fund a bunch of terrible, pointless research, if some "real" research also gets funded?
- Building aligned AI themselves?
What? It seems like this is only possible if the technical problem is solved and known to be solved. At that point, the problem is solved.
- Creating laws that prevent races to the bottom between companies (e.g. "no AI with >X compute may be deployed without first conducting a comprehensive review of the chance of the AI adversarially optimizing against humanity")
Again, if there are existing, legible standards of what's safe and what isn't, this seems good. But without such standards, I don't see how this helps.
It seems like most of what makes this work is inside of the "comprehensive review"? If our civilization knows how to do that well, then having the government insist on it seems good, but if we don't know how to do that well, then this looks like security theater.
- Monitoring AI systems (e.g. "we will create a board of AI investigators; everyone making powerful AI systems must be evaluated once a year")
This has the same issue as above.
[Overall, I believe the arguments that I outline in this comment with something like 60% to 75% confidence.]
(Some) cruxes:
- [Partial] We are going to have clear, legible standards for aligning AI systems.
- We're going to be in scenario 1 or scenario 2 that I outlined above.
- For some other reason, we will have some verified pieces of alignment technology, but AI developers won't use that technology by default.
- Maybe because tech companies are much more reckless or near-sighted than I'm imagining?
- Governments are much more competent than I currently believe, or will become much more competent before the endgame.
- EAs are planning to go into policy to try to make the governmental reaction smaller and saner, rather than try to push the government into positive initiatives, and the EAs are well-coordinated about this.
- In a local takeoff scenario, the leading team is not concerned about alignment or is basically not cosmopolitan in its values.
↑ comment by Rohin Shah (rohinmshah) · 2019-12-11T08:16:02.453Z · LW(p) · GW(p)
If we don't have such a clear distinction, then there's not much that we can do, except ban AI, or ML entirely (or maybe ban AI above a certain compute threshold, or optimization threshold), which seems like a non-starter.
Idk, if humanity as a whole could have a justified 90% confidence that AI above a certain compute threshold would kill us all, I think we could ban it entirely. Like, why on earth not? It's in everybody's interest to do so. (Note that this is not the case with climate change, where it is in everyone's interest for them to keep emitting while others stop emitting.)
This seems probably true even if it was 90% confidence that there is some threshold, which we don't yet know, over which AI would kill us all. In this case I imagine something more like a direct ban on most people doing it, and some research that very carefully explores what the threshold is.
This is only any use at all if governments can easily identify tractable research programs that actually contribute to AI safety, instead of just having "AI safety" as a cool tagline. I guess you imagine that will be the case in the future? Or maybe you think that it doesn't matter if they fund a bunch of terrible, pointless research, if some "real" research also gets funded?
A common way this is done is to get experts to help allocate funding, which seems reasonable, and probably better than the current mechanisms excepting Open Phil (current mechanism = how well you can convince random donors to give you money).
What? It seems like this is only possible if the technical problem is solved and known to be solved. At that point, the problem is solved.
In the world where the aligned version is not competitive, a government can unilaterally pay the price of not being competitive because it has many more resources.
Also there are other problems you might care about, like how the AI system might be used. You may not be too happy if anyone can "buy" a superintelligent AI from the company that built it; this makes arbitrary humans generally more able to impact the world; if you have a group of not-very-aligned agents making big changes to the world and possibly fighting with each other, things will plausibly go badly at some point.
Again, if there are existing, legible standards of what's safe and what isn't, this seems good. But without such standards, I don't see how this helps.
Telling what is / isn't safe seems decidedly easier than making an arbitrary agent safe; it feels like we will be able to be conservative about this. But this is mostly an intuition.
I think a general response to your intuition is that I don't see technical solutions as the only options; there are other ways we could be safe (1, 2 [LW(p) · GW(p)]).
Cruxes:
- We're going to have clear, legible things that ensure safety (which might be "never build systems of this type").
- Governments are much more competent than you currently believe (I don't know what you believe, but probably I think they are more competent than you do).
- We have so little evidence / argument so far, that just the model uncertainty means that we can't conclude "it is unimportant to think about how we could use the resources of the most powerful actors in the world".
2 comments
comment by Ofer (ofer) · 2019-12-09T16:03:51.067Z · LW(p) · GW(p)
Note that research related to governments is just a part of "AI policy" (which also includes stuff like research on models/interventions related to cooperation between top AI labs, and publication norms in ML).
↑ comment by Eli Tyre (elityre) · 2019-12-09T19:47:33.221Z · LW(p) · GW(p)
Ok. Good to note.