No Safe AI and Creating Optionality

post by JohnBuridan · 2019-04-17T14:08:41.843Z · LW · GW · 4 comments


[I am working on some formal arguments about the possibility of safe AI and what realistic alternatives might be. I wrote the following to get imaginative feedback before I continue my arguments. I strongly believe that before developing formal argumentation, it is very helpful to play around with imaginative stories, so that possibilities that would otherwise go unconsidered can surface. Stories are like epistemic lubricant.]

Within a span of three months, two formal proofs were published showing:

1) For any learning algorithm of complexity X or greater, humans cannot prove that it has no unknown nonlinear effects. And then…

2) Any learning algorithm that interacts with agents other than itself cannot have rigorously bounded effects.
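
(For concreteness, here is one way the two fictional results might be stated formally. This is my own hedged sketch, not taken from the proofs in the story: the complexity measure K, the predicate NL for "unknown nonlinear effects," the effect set E, and the space of possible effects Ω are all assumed notation.)

```latex
% Hedged formalization sketch of the two fictional results.
% Assumed notation (none of it is from the post):
%   K(A)        -- description complexity of learning algorithm A
%   NL(A)       -- "A has unknown nonlinear effects"
%   E(A)        -- the set of A's effects on its environment
%   \pi \vdash \phi -- \pi is a human-checkable proof of \phi

% (1) Above complexity X, no human-checkable proof of
%     "no unknown nonlinear effects" exists:
\forall A \,\bigl( K(A) \ge X \;\Rightarrow\;
  \neg\exists \pi \,.\; \pi \vdash \neg \mathrm{NL}(A) \bigr)

% (2) Interaction with any other agent defeats every rigorous,
%     nontrivial bound on effects (\Omega = all possible effects):
\forall A \,\bigl( \bigl(\exists B \ne A \,.\; \mathrm{interacts}(A,B)\bigr)
  \;\Rightarrow\;
  \neg\exists\, S \subsetneq \Omega,\ \pi \,.\; \pi \vdash E(A) \subseteq S \bigr)
```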

Most people understood these proofs, which were far more technical and caveated than these summaries, to mean, in shortest form, that no safe AI is even possible. Some pointed out that Google Maps qualified as “unsafe” by this definition. The Google Maps example spawned two camps: one said that intuition and experience tell us Google Maps is safe, so there is no need to worry; the other said Google Maps is not safe, but the problems it causes are minimal and so we put up with them. This argument was just window dressing on the much darker and more dire arguments happening all over the world.

Deep inside the hallways of American policy, Pentagon generals postulated that although these systems are not safe, the U.S. must build some and test them in foreign countries. “The only way through this labyrinth of technology is trial and error, and so the U.S. should be at the forefront of the trials.”

In China, Xi Jinping’s government reasserted control over all technology companies, pulling every electronic system unnecessary for party rule out of consumers’ hands and promoting a more nature-oriented China.

Brussels took the most drastic steps. Three times they tried to pass legislation: one bill banning microprocessors, another banning certain classes of algorithms, a third banning all private and public AI research and funding. None of these laws passed, but Poland and Finland left the EU over the controversy.

Far from the centers of power, other stews were brewing. Large bands of people on the internet argued for the disestablishment of the internet. The NYT wondered whether more education could have prevented this discovery from being true. Cable television hosts pointed out that we didn’t have these problems before video streaming. An American nationalist movement advocated that the U.S. close itself off from the rest of the world as quickly as possible. Some religious groups pleaded with the Amish to teach them their ways; others prayed for the dissolution of nation-states into city-states. Renewed interest in outdoorsmanship swept the developed world. However, no solution presented itself, just panic and mounting pressure on democratic governments to do something about the robot menace.

What are more options for No Safe AI?

4 comments


comment by interstice · 2019-06-24T02:33:21.564Z · LW(p) · GW(p)

What this would mean is that we would have to recalibrate our notion of "safe", as whatever definition has been proved impossible does not match our intuitive perception. We consider lots of stuff we have around now to be reasonably safe, although we don't have a formal proof of safety for almost anything.

comment by hazelzhang · 2019-04-19T09:44:31.939Z · LW(p) · GW(p)

It is much safer for me to walk outside at night with cameras around, so I don't know whether I should support AI systems or not.

I come from China.

comment by Uriel Fiori (uriel-fiori) · 2019-04-17T15:46:09.177Z · LW(p) · GW(p)

What are more options for No Safe AI?

Let it go rampant over the world.

comment by avturchin · 2019-04-17T14:26:38.988Z · LW(p) · GW(p)

Hate to say it and do not endorse it, but only a large-scale nuclear war could stop AI development.