Pivotal Acts are easier than Alignment?

post by Michael Soareverix (michael-soareverix) · 2024-07-21T12:15:12.818Z · LW · GW · 4 comments


The prevailing notion in AI safety circles is that a pivotal act—an action that decisively alters the trajectory of artificial intelligence development—requires superhuman AGI, which itself poses extreme risks. I challenge this assumption.

Consider a pivotal act like "disable all GPUs globally." This could potentially be achieved through less advanced means, such as a sophisticated computer virus akin to Stuxnet. Such a virus could be designed to replicate widely and render GPUs inoperable, without possessing the capabilities to create more dangerous weapons like bioweapons.

I've observed a lack of discussion around these "easier" pivotal acts in the AI safety community. Given the possibility that AI alignment might prove intractable, shouldn't we be exploring alternative strategies to prevent the emergence of superhuman AI?

I propose that this avenue deserves significantly more attention. If AI alignment is indeed unsolvable, a pivotal act to halt or significantly delay superhuman AI development could be our most crucial safeguard.

I'm curious to hear the community's thoughts on this perspective. Are there compelling reasons why such approaches are not more prominently discussed in AI safety circles?

4 comments

Comments sorted by top scores.

comment by interstice · 2024-07-21T16:02:39.132Z · LW(p) · GW(p)

Weaker AI probably wouldn't be sufficient to carry out an actually pivotal act. For example, the GPU virus would probably be worked around soon after deployment, by airgapping GPUs, developing software countermeasures, or just resetting infected GPUs.

Replies from: michael-soareverix
comment by Michael Soareverix (michael-soareverix) · 2024-07-22T03:14:13.223Z · LW(p) · GW(p)

Is it possible to develop specialized (narrow) AI that surpasses every human at infecting/destroying GPU systems, but won't wipe us out? An LLM-powered Stuxnet would be an example. Bacteria aren't smarter than humans, but they are still very dangerous. It seems like a digital counterpart could disable GPUs and thereby prevent AGI.

(Obviously, I'm not advocating for this in particular, since it would mean the end of the internet and I like the internet. It seems likely, however, that narrow AI could carry out pivotal acts that prevent AGI without itself being AGI.)

Replies from: interstice
comment by interstice · 2024-07-22T03:43:24.544Z · LW(p) · GW(p)

No, I don't think so, because people could just airgap the GPUs.

comment by Jonathan Claybrough (lelapin) · 2024-07-23T12:21:17.543Z · LW(p) · GW(p)

The naming might be confusing: "pivotal act" sounds like a one-time action, but in most cases getting to a stable world without any threat from AI requires constant pivotal processes. This makes almost all the destructive approaches moot (and they are probably already ruled out on ethical grounds and many others already discussed), because you will make yourself a pariah.

The most promising avenue for a pivotal act/pivotal process that I know of is doing good research so that ASI risks are known and proven, doing good outreach and education so that most world leaders and decision makers are well aware of them, and helping set up good governance worldwide to monitor and limit the development of AGI and ASI until we can control it.