AI Policy?
post by ChrisHallquist · 2013-11-11T19:40:07.830Z · LW · GW · Legacy · 28 comments
Here's a question: are there any policies that could be worth lobbying for to improve humanity's chances re: AI risk?
In the near term, it's possible that not much can be done. Human-level AI still seems a long way off (and probably is), which makes it hard both to craft effective policy and to convince people it's worth doing something about. The US government currently funds work on what it calls "AI" and "nanotechnology," but that mostly means technology that might be realizable in the near term, not human-level AI or molecular assemblers. Still, if anyone has ideas on what can be done in the near term, they'd be worth discussing.
Furthermore, I suspect that as human-level AI gets closer, there will be a lot the US government will be able to do to affect the outcome. For example, there's been talk of secret AI projects, but I suspect such projects would be hard to keep secret from a determined US government, especially if you believe (as I do) that larger organizations will have a much better shot at building AI than smaller ones.
The lesson of Snowden's NSA revelations seems to be that, while in theory there are procedures humans can use to keep secrets, in practice humans are so bad at implementing those procedures that secrecy will fail against a determined attacker. Ironically, this applies both to the government and to everyone the government has spied on. However, the ability of people outside the US government to find out about hypothetical secret government AI projects seems less predictable, since it depends on the decisions of individual would-be leakers.
And it seems like, as long as the US government is aware of an AI project, there will be a lot it will be able to do to shut the project down if desired. For foreign projects, there will be the possibility of a Stuxnet-style attack, though the government might be reluctant to do that against a nuclear power like China or Russia (or would it?). However, I expect the US to lead the world in innovation for a long time to come, so I don't expect foreign AI projects to be much of an issue in the early stages of the game.
The real issue is the US government vs. private US groups working on AI. And there, given how these things currently work in the US, my guess is that if the government ever became convinced that an AI project was dangerous, it would find some way to shut the project down citing "national security," and that would basically work. However, I can see big companies with an interest in AI lobbying the government to make that not happen. I can also see them deciding to pack their AI operations off to Europe or South Korea or something.
And on top of all this is simply the fact that, if it becomes convinced that AI is important, the US government has a lot of money to throw at AI research.
These are just some very hastily sketched thoughts, so don't take them too seriously; there's probably a lot more that could be said. I do strongly suspect, however, that those of us concerned about risks from AI ignore the government at our peril.
28 comments
Comments sorted by top scores.
comment by ChristianKl · 2013-11-11T21:08:04.399Z · LW(p) · GW(p)
And it seems like, as long as the US government is aware of an AI project, there will be a lot it will be able to do to shut the project down if desired. For foreign projects, there will be the possibility of a Stuxnet-style attack, though the government might be reluctant to do that against a nuclear power like China or Russia (or would it?).
The US government doesn't even manage to shut down Chinese cyber army groups that attack US targets.
Replies from: ChrisHallquist, Lumifer, lukeprog
↑ comment by ChrisHallquist · 2013-11-11T21:17:32.901Z · LW(p) · GW(p)
That's a good point. Cyber-targets may be harder to sabotage than nuclear centrifuges.
EDIT: On the other hand, to the extent that AI is hard, it may be more vulnerable to sabotage than cyber-warfare projects. The big problem is the ease of making backups.
↑ comment by Lumifer · 2013-11-16T00:32:19.620Z · LW(p) · GW(p)
The US government doesn't even manage to shut down Chinese cyber army groups that attack US targets.
And how do you imagine it might do that?
Drone strikes on Chinese cities sound... a teeny bit unwise.
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-11-16T22:46:00.589Z · LW(p) · GW(p)
The point is that taking down a Chinese group that develops an AI would be as hard or harder than taking down a Chinese cyber army group.
Replies from: Lumifer
↑ comment by Lumifer · 2013-11-17T23:29:02.941Z · LW(p) · GW(p)
I am a bit confused about the context.
So, the situation is that there are indicators that a Chinese group is developing an AI. And you are saying it would be a good thing for the US government to go in and blow it up. Is that about right?
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-11-17T23:33:08.251Z · LW(p) · GW(p)
I"m not making a judgement about whether or not it would be a good thing. I'm saying that I don't believe it will happen even if you could convince the US government to be against AGI development.
Replies from: Lumifer
↑ comment by Lumifer · 2013-11-18T00:08:20.300Z · LW(p) · GW(p)
Depends on the degree. If the US government is firmly convinced that if the AGI project is allowed to come to fruition it will be all paperclips all the time, why, it might as well bomb Beijing (or whatever city is appropriate).
On the other hand, if the US government thinks that there might be some danger from the AGI project but that the danger of a nuclear exchange with China is, ahem, a bigger issue, then it would not intervene, and that would be a good thing.
If you're not making a judgement then I don't see much point in this branch, anyway. The US government is not a global government, most of the time. So what?
↑ comment by lukeprog · 2013-11-16T00:02:58.321Z · LW(p) · GW(p)
The US government doesn't even manage to shut down Chinese cyber army groups that attack US targets.
This is a question I'm quite interested in. Can you recommend any good sources on the subject?
Replies from: ChristianKl
comment by blogospheroid · 2013-11-12T08:53:52.797Z · LW(p) · GW(p)
David Brin believes that high-speed trading bots are a high-probability route to human-indifferent AI. If you agree with him, then laws governing the use of high-speed trading algorithms could be useful. There is a downside in terms of stock liquidity, but how much that would affect overall economic growth is still an open research question.
Replies from: None, ChrisHallquist, Jayson_Virissimo, Lumifer, hylleddin
↑ comment by [deleted] · 2013-11-12T14:19:37.477Z · LW(p) · GW(p)
I found at least one article relating to this from Brin.
http://davidbrin.blogspot.com/2011/12/gingrich-asimov-and-computer-trading.html
In my opinion, here are two of the notable pieces of evidence he advances in that article:
1: The people involved are already spending billions of dollars on ways to get information processing slightly faster.
2: The people involved don't significantly value ethics or tight control.
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2013-11-13T14:18:20.018Z · LW(p) · GW(p)
The article seems to miss the point. In the current world, where a high-frequency trading algorithm has to make decisions in a millisecond, it doesn't have time to run advanced artificial intelligence algorithms.
If you slowed down trading through some public policy, more focus would move to complex AI algorithms instead of fast ones.
↑ comment by Lumifer · 2013-11-12T18:13:54.694Z · LW(p) · GW(p)
http://davidbrin.blogspot.com/2011/12/gingrich-asimov-and-computer-trading.html
Brin is full of the brown stuff here and he is making an emotional scare-mongering string-up-the-bastards argument in favor of a financial transaction tax.
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2013-11-12T23:36:31.867Z · LW(p) · GW(p)
Care to present an argument rather than just insults?
Replies from: Lumifer
↑ comment by Lumifer · 2013-11-13T02:20:23.524Z · LW(p) · GW(p)
There is nothing to argue against.
Brin's post is a mindkilling political rant full of demagoguery and appeal to emotions.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-11-13T16:39:15.084Z · LW(p) · GW(p)
Somebody may make a mindkilling political rant full of demagoguery and appeal to emotions about the Sun shining, but that doesn't make it dark out.
Replies from: Lumifer
↑ comment by Lumifer · 2013-11-13T16:55:07.528Z · LW(p) · GW(p)
It does not, but presenting a mindkilling political rant as evidence that the Sun is shining is... not the best of arguments.
If you want to claim that the evolution of HFT algorithms into Skynet is likely, please show relevant data and reasoning.
↑ comment by ChrisHallquist · 2013-11-12T19:02:06.759Z · LW(p) · GW(p)
This seems unlikely. How do the stock-trading bots make the jump to being good at anything other than trading stock? Maybe if it spurs a bunch of investment in natural-language processing so the bots can read written-for-humans information on the companies they're buying and selling stock in, but unless that ends up being a huge percentage of the investment in NLP, it would probably make more sense to worry about NLP directly.
Replies from: ChristianKl
↑ comment by ChristianKl · 2013-11-12T19:58:27.988Z · LW(p) · GW(p)
This seems unlikely. How do the stock-trading bots make the jump to being good at anything other than trading stock?
Modelling the decisions of various stakeholders who can influence stock prices.
↑ comment by Jayson_Virissimo · 2013-11-12T18:02:08.000Z · LW(p) · GW(p)
Does Brin give an argument for that? That significantly conflicts with my priors.
comment by fubarobfusco · 2013-11-12T23:56:11.655Z · LW(p) · GW(p)
One of the great things about software development is that any reasonably bright human with the right training (or self-training, given available documentation) can do it. It doesn't require special equipment; your average PC and an Internet connection is quite enough.
(And it's gotten cheaper and easier. Back when I learned BASIC on the Commodore 64, that computer cost $1400 in today's dollars, plus another $700 for the floppy disk drive and $100 for a box of blank disks. Today, kids can learn to code on a Raspberry Pi that costs $35, plus $5 for an SD card; so it's cheaper by a factor of fifty-five ... and thousands of times more powerful.)
A consequence of this is that governments cannot prevent people from doing AI research in secret.
Would it help to require ethics classes for computer-science students? Maybe — but given the political climate, I would expect them to focus more on anti-piracy, pro-espionage, and telling students that they would be horrible, evil monsters if they wrote the next PGP, Tor, BitTorrent, or Bitcoin.
Replies from: ChrisHallquist
↑ comment by ChrisHallquist · 2013-11-13T06:07:56.649Z · LW(p) · GW(p)
I don't see how your argument follows at all. Some software projects are impossible without large teams - or at least, impossible to do in any reasonable amount of time. And as the number of people who know a secret increases, it gets harder and harder to keep, not just because of the increasing risk of deliberate betrayal, but because of the increasing risk that someone will screw up the procedures for keeping communications between the people involved secret.
comment by [deleted] · 2013-11-15T13:29:07.456Z · LW(p) · GW(p)
Many people seem unsure of the size of the effects of political advocacy even in the short run. So this seems a little like saying, here's an NP-complete problem and for good measure take another problem in an even harder complexity class. It makes you want to throw up your hands and run.
My current guess is that AI policy is not a good idea because the ground is shifting underneath us as we walk--by the time a self-sufficient machine civilization got even close, human politics (if we were still human) would already be very different.
Edit: So when we think about the US government right before AI, it's like ancient Rome or Han China and nuclear weapons. There might well not be a US government.
comment by ChristianKl · 2013-11-13T14:09:36.360Z · LW(p) · GW(p)
And there, given how these things currently work in the US, my guess is that if the government ever became convinced that an AI project was dangerous, it would find some way to shut the project down citing "national security," and that would basically work.
Who do you mean when you say "the government"?
comment by [deleted] · 2013-11-12T15:33:59.453Z · LW(p) · GW(p)
From a policy perspective, my current hypothesis is that we would have to use the approach of solving the problems in approximately the order they are likely to come up, modified by the likelihood of them happening and the severity they would have if they occurred.
As an example: It doesn't help to write policy for how to keep a superintelligent AI from using a hacked communication channel that is likely to take over everything connected to the internet in 2053 and then turn earth into unfriendly computronium in 2054, if a country hands over nuclear launch capabilities to a dumb AI in 2023 and, because the dumb AI doesn't have a Petrov module, there is a nuclear war in 2024 that kills billions. We would probably want to write the nuclear policy first, to give ourselves time to write the communication channel policy, unless the nuclear war risk were lower than the communication channel risk by a big margin.
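To make this ordering heuristic concrete, here is a minimal sketch in Python that ranks hypothetical risks by expected harm (probability times severity), breaking ties in favor of earlier-arriving risks; every risk name, probability, and severity below is an illustrative placeholder, not a real estimate.

```python
# A minimal sketch of the prioritization heuristic above: rank candidate
# policy targets by expected harm, and among comparable harms address the
# earlier-arriving risk first. All numbers are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    year_expected: int   # rough year the risk could materialize
    probability: float   # chance it happens at all, 0 to 1
    severity: float      # harm if it does happen, arbitrary units

    @property
    def expected_harm(self) -> float:
        return self.probability * self.severity


risks = [
    Risk("nuclear launch handed to a dumb AI", 2023, 0.05, 9.0),
    Risk("superintelligence escapes via hacked channel", 2053, 0.10, 10.0),
]

# Write policy for the highest expected harm first; earlier-arriving risks
# break ties, since addressing them buys time to handle the rest.
for r in sorted(risks, key=lambda r: (-r.expected_harm, r.year_expected)):
    print(f"{r.name}: expected harm {r.expected_harm:.2f}, around {r.year_expected}")
```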
Also, I suppose there are two different types of AI risk.
Type A: The AI itself, through accident, malice, or poor design, causes the existential risk.
Type B: Humans institute an AI as part of a countermeasure to help avoid an existential risk (imagine automated security at a nuclear weapons facility). Other humans find a flaw in the AI's abilities and cause the existential risk themselves (bypassing the automated security and stealing a nuclear weapon).
My understanding is that most people are generally discussing type A, with a nod to type B, since they are related. As an example, it is possible to design an AI, attempt to avoid Type B by giving it sufficient capabilities, and then accidentally cause a Type A problem because you forgot to convey what you considered to be common sense. (An AI stops a nuclear weapon from being stolen from the facility by detonating the nuclear weapon: It doesn't exist, and so can't be stolen from the facility!)
But in terms of specific policies, I am not sure what the criteria above would find as the most important policy to advance first, so I am not sure what to suggest.