Fear mitigated the nuclear threat; can it do the same for AGI risks?

post by Igor Ivanov (igor-ivanov) · 2022-12-09T10:04:09.674Z · LW · GW · 8 comments

Contents

  The Cold War and the nuclear threat
    The Cuban Missile Crisis
    Vasily Arkhipov
    The Day After
  What lesson can we draw from this?
8 comments

It seems that only a small minority of the general public realizes that powerful, misaligned AIs might pose a serious threat to the world as we know it, and that these problems might arise in the not-so-distant future.

Nobody wants humanity to go extinct. People just need to feel the risks posed by powerful but misaligned AIs. At the moment, for most people, such risks are a vague abstraction, and vague abstractions are not scary.

So, I asked myself: what can be done to raise awareness of the risks posed by powerful misaligned AIs?

I might have a piece of the answer.

The Cold War and the nuclear threat

Let's discuss the original existential threat to humanity: nuclear war.

During the Cold War, many people believed that there was a real possibility of nuclear war, but at first, countries just produced more and more nuclear warheads and developed new ways to deliver them. Politicians and military leaders were obviously aware of the consequences of a possible nuclear conflict, but for some time they weren't afraid enough to cooperate to mitigate the threat.

I want to highlight several episodes from that period that gave me hints about possible ways to raise awareness of misaligned AIs.


The Cuban Missile Crisis

In 1962, the USSR sent nuclear missiles to Cuba so that they could rapidly reach the US mainland in case of war. In response, the US demanded the removal of those missiles, established a naval blockade of the island, and began preparations for an invasion. These events became known as the Cuban Missile Crisis.

Due to mutual distrust, carelessness on all sides, unfortunate accidents, and a lack of communication, the situation quickly began to spiral out of control. An exchange of nuclear strikes became more and more probable, and this seriously affected decision-makers on all sides.

At the height of the crisis, US President Kennedy described the situation as "this looks like hell, looks real mean" and referred to it as "Horror".[1] His advisor noted: "Those minutes were the time of the greatest worry by the president. His hand went up to his face and covered his mouth and he closed his fist."[1]

We know much less about the emotional reactions of the Soviet leaders, but a couple of days after the crisis ended, Soviet Premier Khrushchev wrote Kennedy a letter:
"... We want to continue the exchange of views on the prohibition of atomic and thermonuclear weapons, on general disarmament, and other issues relating to the easing of international tensions."[2]

Soon after the crisis ended, a direct line between the White House and the Kremlin was established for immediate communication in case of future crises. And the following year, in 1963, the first agreement regulating nuclear weapons was signed: the Partial Test Ban Treaty.

Before the crisis, everybody knew that nuclear war was possible, but nobody was scared enough to actually do something to reduce the risk.


Vasily Arkhipov

One particular incident from the crisis also suggests that personal experience of nuclear tragedy may have been particularly important for its peaceful resolution.

As part of the blockade, American ships caught a Soviet submarine and started dropping depth charges around it to force it to surface. For concealment reasons, the submarine had maintained radio silence for several days by then, so its crew didn't know what was going on in the world. As American depth charges exploded near the submarine, its commander believed that World War III might have already begun and decided to launch a torpedo with a nuclear warhead at the American ships.

There are several variations of what happened next,[3][4] but according to all of them, one senior officer, Vasily Arkhipov, refused to authorize the launch.

This episode is well known among history buffs, but there is a less well-known part to it.

Arkhipov had previously served as an officer on another Soviet nuclear submarine, which suffered a reactor meltdown. The crew, including Arkhipov himself, was severely irradiated, and several of his comrades died.

So for Arkhipov, death and suffering from radiation were not abstractions; they were a painful part of his personal experience.

To be fair, I couldn't find any concrete evidence that this earlier incident influenced his decision to refuse to launch the nuclear torpedo, but to me it's a plausible hypothesis.


The Day After

Another interesting case is the effect of the 1983 movie "The Day After".

The movie depicts a conflict that starts as a military escalation in West Berlin and rapidly deteriorates into a nuclear war between NATO and the Warsaw Pact. The most peculiar thing about the film is its perspective: the scenario is shown through the eyes of ordinary Americans in rural Kansas. At first, the characters listen to radio broadcasts about the crisis, feel nervous, and try to convince each other that everything will resolve itself. But it all ends in a full-blown nuclear war.

I think everyone can relate to the first part of the film, in which the characters are listening to worrisome news about a conflict in another part of the world, and the bridge from this familiar experience to the apocalypse is very effective. Honestly, when I watched the movie, it made me seriously anxious about a possible nuclear war. And I was not the only one it influenced.

Ronald Reagan, the US president at the time the film aired, watched it and wrote in his diary:
"I ran the tape of the movie ... “The Day After.” It is powerfully done ... It’s very effective & left me greatly depressed... My own reaction was one of our having to do all we can to have a deterrent & to see there is never a nuclear war"[5]

Comparing the movie to military briefings from his advisors, he also wrote: “Simply put, it was a scenario for a sequence of events that could lead to the end of civilization as we knew it. In several ways, the sequence of events described in the briefing paralleled those in the ABC movie. Yet there were still some people at the Pentagon who claimed a nuclear war was ‘winnable.’ I thought they were crazy. Worse, it appeared that there were also Soviet generals who thought in terms of winning a nuclear war.”[6]

In 1987, Reagan and Soviet leader Gorbachev signed the Intermediate-Range Nuclear Forces Treaty, which put significant restrictions on nuclear weapons. The movie's director recalled: "I got a telegram from his [Reagan's] administration that said, 'Don't think your movie didn't have any part of this, because it did.'"[7]

Of course, US officials had plenty of data on the deaths and destruction a nuclear war might cause, but the experience of fear and anxiety, together with a clear image of a plausible path to nuclear war, turned out to be a very effective instrument that actually influenced the course of the Cold War.


What lesson can we draw from this?

For me, the main takeaway is that we should show people a clear vision of how a familiar, safe situation can turn into something really dark if we fail to align powerful AIs. Of course, fear can cause unexpected problems of its own, and it should be introduced carefully, since it can do more harm than good [LW · GW], but, in my opinion, fear is much better than indifference.

ChatGPT was released recently, and I know some people who became anxious about AI because of it. They had conversations with the AI, experienced its capabilities firsthand, and some of them were scared by what they saw. All these posts about ChatGPT writing plans for the destruction of humanity are a great way to raise awareness.

What are the best ways to achieve this effect on the public? I don't know yet, and I would love to read other people's opinions on this topic.


  1.
  2.
  3.
  4.
  5.
  6.
  7.

8 comments

Comments sorted by top scores.

comment by mruwnik · 2022-12-09T11:07:31.125Z · LW(p) · GW(p)

The reason these events were scary, and subsequent fiction was able to capitalise on that, was that they were near misses. Very near misses. There is already a lot of fiction about various misaligned AIs, but that doesn't affect people much. So what you seem to be advocating is generating some near misses in order to wake everybody up.

Fear is useful. It would be good to have more of it. The question is how plausible it is to generate situations that are scary enough to be useful, but under enough control to be safe.

Replies from: igor-ivanov
comment by Igor Ivanov (igor-ivanov) · 2022-12-09T12:04:04.059Z · LW(p) · GW(p)

The reactor meltdown on the Soviet submarine did not pose an existential threat. In the worst case, it would have been a small version of Chernobyl. We might compare it to an AI which causes some serious problems, like a stock market crash, but not existential ones. And the movie is not a threat at all.

"The question is how plausible it is to generate situations that are scary enough to be useful, but under enough control to be safe."
That is a great summary of what I wanted to say!

comment by the gears to ascension (lahwran) · 2022-12-09T23:06:39.857Z · LW(p) · GW(p)

isn't this chatgpt anyway?

comment by Dave Lindbergh (dave-lindbergh) · 2022-12-09T14:59:19.432Z · LW(p) · GW(p)

A movie or two would be fine, and might do some good if well-done. But in general - be careful what you wish for.

Replies from: dave-lindbergh
comment by Dave Lindbergh (dave-lindbergh) · 2022-12-09T15:19:55.455Z · LW(p) · GW(p)

Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny. 

Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could be developed underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and is already in full control.

All things considered, I'd rather the work proceeds in the relatively open way it's going now.

Replies from: igor-ivanov, dave-lindbergh
comment by Igor Ivanov (igor-ivanov) · 2022-12-09T16:01:15.792Z · LW(p) · GW(p)

I agree that fearmongering is thin ice and can easily backfire, and it must be done carefully and ethically. But is it worse than the alternative, in which people are unaware of AGI-related risks? I don't think anybody can say with certainty.

Replies from: dave-lindbergh
comment by Dave Lindbergh (dave-lindbergh) · 2022-12-10T18:25:23.914Z · LW(p) · GW(p)

Agreed. We sail between Scylla and Charybdis - too much fear and too little are both dangerous, and it is difficult to tell how much is too much.

I had an earlier pro-fearmongering comment which, on further thought, I replaced with a repeat of my first comment (since there seems to be no "delete comment").

I want the people working on AI to be fearful, and careful. I don't think I want the general public, or especially regulators, to be fearful. Because ignorant meddling seems far more likely to do harm than good - if we survive this at all, it'll likely be because of (a) the (fear-driven) care of AI researchers and (b) the watchfulness and criticism of knowledgeable skeptics who fear a runaway breakout. Corrective (b) is likely to disappear or become ineffective if the research is driven underground even a tiny bit.

Given that (b) is the only check on researchers who are insufficiently careful and working underground, I don't want anything done to reduce the effectiveness of (b). Even modest regulatory suppression of research, or demands for fully "safe" AI development (probably an impossibility) seem likely to make those funding and performing the research more secretive, less open, and less likely to be stopped or redirected in time by (b).

I think there is no safe path forward. Only differing types and degrees of risk. We must steer between the rocks the best we can.

comment by Dave Lindbergh (dave-lindbergh) · 2022-12-09T15:29:05.541Z · LW(p) · GW(p)

Fearmongering may backfire, leading to research restrictions that push the work underground, where it proceeds with less care, less caution, and less public scrutiny. 

Too much fear could doom us as easily as too little. With the money and potential strategic advantage at stake, AI could be developed underground with insufficient caution and no public scrutiny. We wouldn't know we're dead until the AI breaks out and is already in full control.

All things considered, I'd rather the work proceeds in the relatively open way it's going now.