Yes, AI research will be substantially curtailed if a lab causes a major disaster

post by lc · 2022-06-14T22:17:01.273Z · LW · GW · 31 comments

There's a narrative that Chapman and other smart people seem to endorse that goes:

People say a public AI disaster would rally public opinion against AI research and create calls for more serious AI safety. But the COVID pandemic killed several million people and wasted upwards of a year of global GDP. Pandemics are, as a consequence, now officially recognized as a non-threat that should be rigorously ignored. So we should expect the same outcome from AI disasters.

I'm pretty sure I have basically the same opinion and mental models of U.S. government, media, and politics as Eliezer & David, but even then, this argument seems like it's trying too hard to be edgy.

Here's another obvious historical example that I find much more relevant. U.S. anti-nuclear activists said for years that nuclear power wasn't safe, and nuclear scientists replied over and over that the activists were just non-experts misinformed by TV, and that a meltdown was impossible. Then the Three Mile Island meltdown happened. The consequence of that accident, which didn't even conclusively kill any particular person, was that anti-nuclear activists got nuclear power regulated in the U.S. to the point where building new plants is completely cost-inefficient, as a rule, even in the face of technological advancements.

The difference, of course, between pandemics and nuclear safety breaches is that pandemics are a natural phenomenon. When people die from diseases, there are only boring institutional failures. In the event of a nuclear accident, the public, the government, and the media get scapegoats and an industry to blame. To imply that established punching bags like Google and Facebook would just walk away from causing an international crisis on the scale of the Covid epidemic strikes me as confusingly naive cynicism from some otherwise very lucid people.

If the media had been able to squarely and emotively pin millions of deaths on some Big Tech AI lab, we would have faced a near shutdown of AI research and maybe much of venture capital. Regardless of how performative our government's efforts in responding to the problem were, they would at least succeed at introducing extraordinarily imposing costs and regulations on any new organization that looked to a bureaucratic body like it wanted to make anything similar. The reason such measures were not enforced on U.S. gain-of-function labs following Covid is that Covid did not come from U.S. gain-of-function labs, and the public is not smart/aware enough to know that they should update towards those being bad.

To be sure, politicians would do a lot of other counterproductive things too. We might still fail. But the long-term response to an unprecedented AI catastrophe would be a lot more like the national security establishment's response to 9/11 than like our bungling response to the Coronavirus. There'd be a TSA and a war in the wrong country, but there'd also be a DHS, and a vastly expanded NSA/CIA budget and "prerogative".

None of this is to say that such an accident is likely to happen. I highly doubt any misaligned AI influential enough to cause a disaster on this scale would not also be in a position to just end us. But I do at least empathize with the people who hope that whatever DeepMind's cooking, it'll end up in some bungled state where it only kills 10 million people instead of all of us and we can maybe get a second chance.

31 comments

Comments sorted by top scores.

comment by Rob Bensinger (RobbBB) · 2022-06-15T03:02:52.636Z · LW(p) · GW(p)

I think the real answer is that we don't know what would happen, and there are a variety of possibilities. It's entirely plausible that a "warning shot" would lengthen timelines on net, and entirely plausible that it would shorten timelines on net.

I'm more confident in saying that I don't think a "warning shot" will suddenly move civilization from 'massively failing at the AGI alignment problem' to 'handling the thing pretty reasonably'. If a warning shot shifts us from a failure trajectory to a success trajectory, I expect that to be because we were already very close to a success trajectory at the time.

Replies from: lc, None
comment by lc · 2022-06-15T04:22:46.098Z · LW(p) · GW(p)

I'm more confident in saying that I don't think a "warning shot" will suddenly move civilization from 'massively failing at the AGI alignment problem' to 'handling the thing pretty reasonably'. If a warning shot shifts us from a failure trajectory to a success trajectory, I expect that to be because we were already very close to a success trajectory at the time.

I agree with that statement. I don't expect our civilization to handle anything as hard and tail-ended as the alignment problem reasonably even if it tries.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2022-06-15T06:02:50.369Z · LW(p) · GW(p)

FWIW I object to the title "Yes, AI research will be substantially curtailed if a lab causes a major disaster"; seems too confident.

comment by [deleted] · 2022-06-16T03:55:17.467Z · LW(p) · GW(p)

Replies from: RobbBB, ryan_b
comment by Rob Bensinger (RobbBB) · 2022-06-16T04:14:32.138Z · LW(p) · GW(p)

A thing I wrote on FB a few months ago, in response to someone asking if a warning shot might happen:

[... I]t's a realistic possibility, but I'd guess it won't happen before AI destroys the world, and if it happens I'm guessing the reaction will be stupid/panicky enough to just make the situation worse.

(It's also possible it happens before AI destroys the world, but six weeks before rather than six years before, when it's too late to make any difference.)

A lot of EAs feel confident we'll get a "warning shot" like this, and/or are mostly predicating their AI strategy around "warning-shot-ish things will happen and suddenly everyone will get serious and do much more sane things". Which doesn't sound like, eg, how the world reacted to COVID or 9/11, though it sounds a bit like how the world (eventually) reacted to nukes and maybe to the recent Ukraine invasion?

Someone then asked why I thought a warning shot might make things worse, and I said:

It might not buy time, or might buy orders of magnitude less time than matters; and/or some combination of:

- the places that are likely to have the strictest regulations are (maybe) the most safety-conscious parts of the world. So you may end up slowing down the safety-conscious researchers much more than the reckless ones.

- more generally, it's surprising and neat that the frontrunner (DM) is currently one of the least allergic to thinking about AI risk. I don't think it's anywhere near sufficient, but if we reroll the dice we should by default expect a worse front-runner.

- regulations and/or safety research are misdirected, because people have bad models now and are therefore likely to have bad models when the warning shot happens, and warning shots don't instantly fix bad underlying models.

The problem is complicated, and steering in the right direction requires that people spend time (often years) setting multiple parameters to the right values in a world-model. Warning shots might at best fix a single parameter, 'level of fear', not transmit the whole model. And even if people afterwards start thinking more seriously and thereby end up with better models down the road, their snap reaction to the warning shot may lock in sticky bad regulations, policies, norms, culture, etc., because they don't already have the right models before the warning shot happens.

- people tend to make worse decisions (if it's a complicated issue like this, not just 'run from tiger') when they're panicking and scared and feeling super rushed. As AGI draws visibly close / more people get scared (if either of those things ever happen), I expect more person-hours spent on the problem, but I also expect more rationalization, rushed and motivated reasoning, friendships and alliances breaking under the emotional strain, uncreative and on-rails thinking, unstrategic flailing, race dynamics, etc.

- if regulations or public backlash do happen, these are likely to sour a lot of ML researchers on the whole idea of AI safety and/or sour them on xrisk/EA ideas/people. Politicians or the public suddenly caring or getting involved, can easily cause a counter-backlash that makes AI alignment progress even more slowly than it would have by default.

- software is not very regulatable, software we don't understand well enough to define is even less regulatable, whiteboard ideas are less regulatable still, you can probably run an AGI on a not-expensive laptop eventually, etc.

So regulation is mostly relevant as a way to try to slow everything down indiscriminately, rather than as a way to specifically target AGI; and it would be hard to make it have a large effect on that front, even if this would have a net positive effect.

- a warning shot could convince everyone that AI is super powerful and important and we need to invest way more in it.

- (insert random change to the world I haven't thought of, because events like these often have big random hard-to-predict effects)

Any given big random change will tend to be bad on average, because the end-state we want requires setting multiple parameters to pretty specific values and any randomizing effect will be more likely to break a parameter we already have in approximately the right place, than to coincidentally set a parameter to exactly the right value.

There are far more ways to set the world to the wrong state than the right one, so adding entropy will usually make things worse.

We may still need to make some high-variance choices like this, if we think we're just so fucked that we need to reroll the dice and hope to have something good happen by coincidence. But this is very different from expecting the reaction to a warning shot to be a good one. (And even in a best-case scenario we'll need to set almost all of the parameters via steering rather than via rerolling; rerolling can maybe get us one or even two values close-to-correct if we're crazy lucky, but the other eight values will still need to be locked in by optimization, because relying on ten independent one-in-ten coincidences to happen is obviously silly.)

- oh, [redacted]'s comments remind me of a special case of 'worse actors replace the current ones': AI is banned or nationalized and the UK or US government builds it instead. To my eye, this seems a lot likelier to go poorly than the status quo.

There are plenty of scenarios that I think make the world go a lot better, but I don't think warning shots are one of them.

Replies from: None
comment by [deleted] · 2022-06-16T06:41:05.713Z · LW(p) · GW(p)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2022-06-16T08:58:26.887Z · LW(p) · GW(p)

I'm not sure about comparing warning shots to adding entropy more generally, because warning shots are a very specific and useful datapoint, not just any random data.

I don't think warning shots are random, but if they have a large impact it may be in unexpected directions, perhaps for butterfly-effect-y reasons.

[Just for definition's sake, I'm assuming a warning shot here is an AI that a) clearly demonstrates deceptive intent and malevolent-to-human intent, and b) either has enough capabilities to do some damage, or enough capability that one can much more easily extrapolate why increasing capability will do damage, and this extrapolation can be done even if you don't have a really good world model.]

I'm not defining warning shots that way; I'd be much more surprised to see an event like that happen (because it's more conjunctive), and I'd be much more confident that a warning shot like that won't shift us from a very-bad trajectory to an OK one (because I'd expect an event like that to come very shortly before AGI destroys or saves the world, if an event like that happened at all).

When I say 'warning shot' I just mean 'an event where AI is perceived to have caused a very large amount of destruction'.

The warning shots I'd expect to do the most good are ones like:

'20 or 30 or 40 years before we'd naturally reach AGI, a huge narrow-AI disaster unrelated to AGI risk occurs. This disaster is purely accidental (not terrorism or whatever). Its effect is mainly just to cause it to be in the Overton window that a wider variety of serious technical people can talk about scary AI outcomes at all, and maybe it slows timelines by five years or whatever. Also, somehow none of this causes discourse to become even dumber; e.g., people don't start dismissing AGI risk because "the real risk is narrow AI symptoms like the one we just saw", and there isn't a big ML backlash to regulatory/safety efforts, and so on.'

I don't expect anything at all like that to happen, not least because I suspect we may not have 20+ years left before AGI. But that's a scenario where I could imagine real, modest improvements. Maybe. Optimistically.

Replies from: None
comment by [deleted] · 2022-06-16T12:45:10.126Z · LW(p) · GW(p)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2022-06-17T01:31:32.377Z · LW(p) · GW(p)

And I can see why convincing anyone this late has less benefit, but maybe it's still worth it?

I don't know what you mean by "worth it". I'm not planning to make a warning shot happen, and would strongly advise that others not do so either. :p

A very late-stage warning shot might help a little, but the whole scenario seems unimportant and 'not where the action is' to me. The action is in slow, unsexy earlier work to figure out how to actually align AGI systems (or failing that, how to achieve some non-AGI world-saving technology). Fantasizing about super-unlikely scenarios that wouldn't even help strikes me as a distraction from figuring out how to make alignment progress today.

I'm much more excited by scenarios like: 'a new podcast comes out that has top-tier-excellent discussion of AI alignment stuff, it becomes super popular among ML researchers, and the culture, norms, and expectations of ML thereby shift such that water-cooler conversations about AGI catastrophe are more serious, substantive, informed, candid, and frequent'.

It's rare for a big positive cultural shift like that to happen; but it does happen sometimes, and it can result in very fast changes to the Overton window. And since it's a podcast containing many hours of content, there's the potential to seed subsequent conversations with a lot of high-quality background thoughts.

By comparison, I'm less excited about individual researchers who explicitly say the words 'I'll only work on AGI risk after a catastrophe has happened'. This is a really foolish view to consciously hold, and makes me a lot less optimistic about the relevance and quality of research that will end up produced in the unlikely event that (a) a catastrophe actually happens, and (b) they actually follow through and drop everything they're working on to do alignment.

Replies from: None
comment by [deleted] · 2022-06-17T11:22:14.621Z · LW(p) · GW(p)
comment by ryan_b · 2022-06-16T18:46:37.365Z · LW(p) · GW(p)

Answering independently, I'd like to point out a few features of something like governance appearing as a result of the warning shot.

  • If a wave of new funding appears, it will be provided via grants according to the kind of criteria that make sense to Congress, which means AI Safety research will probably be in a similar position to cancer research since the War on Cancer was launched. This bodes poorly for our concerns.
  • If a set of regulations appear, they will ban or require things according to criteria that make sense to Congress. This looks to me like it stands a substantial chance of making several winning strategies actually illegal by accident, as well as accidentally emphasizing the most dangerous directions.
  • In general, once something has laws about it people stop reasoning about it morally, and default to the case of legal -> good. I expect this to completely deactivate a majority of ML researchers with respect to alignment; it will simply be one more bureaucratic procedure for getting funding.
comment by NicholasKross · 2022-06-14T23:28:21.211Z · LW(p) · GW(p)

Good catch on the natural-vs-man-made accidental bait-and-switch in the common argument. This post changed my mind to think that, at least for scaling-heavy AI (and, uh, any disaster that leaves the government standing), regulation could totally help the overall situation.

comment by burmesetheater (burmesetheaterwide) · 2022-06-14T23:13:24.668Z · LW(p) · GW(p)

Well, there's a significant probability COVID isn't a "natural" pandemic, although the story behind that is complicated and lacks an unambiguous single point of failure, which hinders uptake among would-be activists.

If there's an AI failure, will things be any different? There may be numerous framings of what went wrong or what might be addressed to fix it; details sufficient to give real predictive power will probably be complicated, and it's a good bet that however interested "the powers that be" are in GOF, they're probably much MUCH more interested in AI development. So there can be even more resources to spin the story in favor of forestalling any pressure that might build to regulate.

Nuclear regulation also might not be a good example of a disaster forcing meaningful regulation because the real pressure was against military use of nuclear power and that seems to have enjoyed general immunity against real regulation. So it's more like if an AI incident results in the general public being banned from buying GPUs or something while myriad AI labs still churn toward AGI. 

Replies from: lc, TrevorWiesinger
comment by lc · 2022-06-15T05:41:30.724Z · LW(p) · GW(p)

There may be numerous framings of what went wrong or what might be addressed to fix it; details sufficient to give real predictive power will probably be complicated, and it's a good bet that however interested "the powers that be" are in GOF, they're probably much MUCH more interested in AI development. So there can be even more resources to spin the story in favor of forestalling any pressure that might build to regulate.

My main thesis regarding how a non-existential AI disaster would happen in practice (and I don't think this would happen) is: Google or Facebook or some other large tech company publicly releases an agent that's intelligent enough to be middling at wargames but not enough to do things like creative ML research, and people put it in one or more of IoT devices, critical infrastructure, or military equipment. Surprise: it has a bad value function and/or edge-case behavior, and a group of agents end up deliberately and publicly defecting and successfully killing large numbers of people.

In this scenario, it would be extremely obvious that the party responsible for marketing and selling the AI was FaceGoog, and no matter what the Powers That Be wanted, the grieving would be directing their anger towards those engineers. Politicians wouldn't individually give much of a shit about the well-being of The Machine and would instead be racing to see who could make the most visible condemnations of Big Tech and arguing over which party predicted this would happen all along. Journalists would do what they always do and spin the story according to their individual political ideologies and not according to some institutional incentives, which would be more about painting their political opponents as Big Tech supporters than instrumentally supporting the engineers. Whatever company was responsible for the problem would, at a minimum, shutter all AI research. Congress would pass some laws written by their lobbyist consultants, of whom, who knows, maybe one or two could even be said to be "alignment people", and there'd be a new oversight body analogous to the FDA for biotech companies.

And I appreciate the viewpoint that this is either just one timeline, or relies on premises that might be untrue, but in my head at least it just seems like it falls into place without making many critical assumptions.

comment by trevor (TrevorWiesinger) · 2022-06-15T04:31:10.680Z · LW(p) · GW(p)

Generally, I endorse the comparison of AI with nuclear weapons (especially because AI is currently being mounted on literal nuclear weapons).

But in this case, there's a really big distinction that should be made between mass media and specialized institutions. Intelligence/military agencies, specialized Wall Street analyst firms, and bureaucracy leadership all probably know things like exactly how frequently Covid causes brain damage [LW · GW] and have the best forecasters predicting the next outbreak. For them, it's less about spinning stories and more about figuring out what type of professional employees tend to write accurate/predictive reports and forecasts. Spun stories are certainly more influential than they were 10 years ago, and vastly more influential than they appear to the uninitiated, but I don't know if we've gotten to the point where they can fool the professionals at not getting fooled.

Arms control has happened in the past even though it was difficult to verify, and nuclear weapons were centralized by default so it's hard to know anything about how hard it is to centralize that sort of thing.

Replies from: burmesetheaterwide
comment by burmesetheater (burmesetheaterwide) · 2022-06-15T05:39:54.019Z · LW(p) · GW(p)

and have the best forecasters

With forecasters from both sides given equal amounts of information, these institutions might not even reliably beat the Metaculus community. If one is such a great forecaster then they can forecast that jobs like this might not be, among other things, that fulfilling.

I don't know if we've gotten to the point where they can fool the professionals at not getting fooled

Quite a few professionals (though not professionals at not getting fooled) still believe in a roughly 0% probability for a certain bio-related accident two or three years ago, thanks in large part to a spun story. Maybe the forecasters at the above places know better, but none of the entities who might act on that information are necessarily incentivized to push for regulation as a result. So it's not clear it would matter if most forecasters know AI is probably responsible for some murky disaster while the public believes humans are responsible.

comment by anonymousaisafety · 2022-06-14T22:58:11.680Z · LW(p) · GW(p)

This argument is important because it is related to a critical assumption in AGI x-risk, specifically with regard to the effectiveness of regulation.

If an AGI can be created by any person, in their living room, with a 10 year old laptop, then regulation is going to struggle to make a difference. Case in point: strong encryption was made illegal (and still is) in various places, and yet, teenagers use Signal and the internet runs on HTTPS.

If, on the other hand, true agent-like AGI turns out to be computationally expensive and requires very specialized hardware to run efficiently, such that only very large corporations can foot the bill to do so (e.g. designing and building increasingly custom hardware like Google TPUs, Nvidia JETSON, Cerebras Wafer, Microsoft / Graphcore IPU, Mythic AMP), then regulation is going to be surprisingly effective, in the same way that it has become stupidly difficult to build new nuclear reactors in the United States, despite advances in safety / efficiency / etc.

comment by Bucky · 2022-06-15T10:05:52.948Z · LW(p) · GW(p)

I think the natural/manmade comparison between COVID and Three Mile Island has a lot of merit, but there are other differences which might explain the difference in response. Some of them would imply that there would be a strong response to an AI disaster, others less so.

Local vs global

To prevent nuclear meltdowns you only need to ban them in the US - it doesn't matter what you do elsewhere. This is more complicated for pandemic preparedness.

Active spending vs loss of growth

It's easier to pass a law putting in nuclear regulations which limit growth, as this isn't as obvious a loss as spending money from the public purse on measures for pandemics.

Activity of lobbying groups

I get the impression that the anti-nuclear lobby was a lot bigger than any pro-pandemic-preparedness lobby. Possibly this is partly caused by the natural vs manmade thing, so it might be kind of a subpoint.

Tractability of problem

Preventing nuclear disasters seems more tractable than pandemic preparedness.

1979 vs 2020

Were our institutions stronger back then?

 

FWIW I agree that a large AI disaster would cause some strong regulation and international agreements; my concern is more that a small one would not, and small ones from weaker AIs seem more likely to happen.

Replies from: lc
comment by lc · 2022-06-15T16:58:35.769Z · LW(p) · GW(p)

Yeah, this is a better explanation than my post has. There were definitely multiple factors.

One aspect of tractability of these sorts of coordination problems that makes it different from the tractability of problems in everyday life: I don't think people largely "expect" their government to solve pandemic preparedness. It seems like something that can't be solved, to the average voter. Whereas there's pretty much a "zero-tolerance policy" (?) on nuclear meltdowns because that seems to most people like something that should never happen. So it's not necessarily about the problem being solvable in a traditional sense, more about the tendency of the public to blame their government officials when things go wrong.

I predict the instinct of the public if "something goes wrong" with AGI will be to say "this should never happen, the government needs to Do Something", which in practice will mean blaming the companies involved and completely hampering their ability to publish or complete relevant research.

comment by Evan R. Murphy · 2022-06-14T22:37:29.284Z · LW(p) · GW(p)

Nice post - seems reasonable.

Minor suggestion to revise the title to something like "Yes, AI research will be substantially curtailed if a major disaster from AI happens". Before I read your post, I was pretty sure it was going to be about generic disasters, arguing that e.g. a major climate disaster or nuclear disaster would slow down AI research.

Replies from: lc
comment by lc · 2022-06-14T23:15:31.395Z · LW(p) · GW(p)

Updated the title. I changed it a couple times but I didn't want it to have too many words.

I think if a major disaster were caused by some unique, sci-fi kind of technological development (not nuclear, that's already established), that would also lead to a small-scale increase in concern about AI risk, but I'm not sure about that.

comment by Donald Hobson (donald-hobson) · 2022-06-15T20:40:17.188Z · LW(p) · GW(p)

If the media had been able to squarely and emotively pin millions of deaths on some Big Tech AI lab,

 

Covid may well have been caused by some lab. Partly, there is a big difference between AI causing X and it quickly becoming common knowledge that AI caused X.

Suppose some disaster. To be specific, let's say a nuclear warhead detonates in a city. Some experts blame an AI hacking the control systems; other experts disagree. Some people say this is proof AI is dangerous. Other people say it's proof that badly secured nukes are dangerous. Some people say human hackers took control of the nukes. Others say terrorists stole the nukes. Others say it was an accident. The official position is to blame an asteroid impact that hit a truck carrying nuclear waste. A year later, there is a fairly convincing case it was probably an AI, but some experts still disagree. The public is unsure. The disaster itself is no longer news. No one has a clue which AI.

comment by jonmenaster · 2022-06-15T02:08:41.545Z · LW(p) · GW(p)

Interesting post, and I generally agree.

One note - you appear to be quoting David Chapman, not Yudkowsky. The Twitter post you linked to was written by Chapman. It's also not exactly what the tweet says. Can you maybe update to reflect that it's a Chapman quote, or directly link to where Yudkowsky said this? Apologies if I'm missing something obvious in the link.

Replies from: lc
comment by lc · 2022-06-15T02:12:15.502Z · LW(p) · GW(p)

Eliezer retweeted that earlier today. He's also said similar things in the past. I will update the content though so the link text is "Chapman".

comment by Jackson Wagner · 2022-06-15T21:42:50.625Z · LW(p) · GW(p)

An AI "warning shot" plays an important role in my finalist entry to the FLI's $100K AI worldbuilding contest [LW · GW]; but civilization only has a good response to the crisis because my story posits that other mechanisms (like wide adoption of "futarchy"-inspired governance) had already raised the ambient wisdom & competence level of civilization.

I think a warning shot in the real world would probably push out timelines a bit by squashing the most advanced projects, but then eventually more projects would come along (perhaps in other countries, or in secret) and do AGI anyways, so I'd be worried that we'd get longer "timelines" but a lower actual chance of getting aligned AI.  For a warning shot to really be net-positive for humanity, it would need to achieve a very strong response, such as the international suppression of all AI research (not just cumbersome regulation on a few tech companies) with a ferocity that meets or exceeds how we currently handle the threat of nuclear proliferation [LW · GW].

comment by Leosha Trushin · 2022-06-15T13:37:18.253Z · LW(p) · GW(p)

Deaths being from natural phenomena seems to be just one factor determining how strong our emotional response to disasters is, and there are plenty of others. People seem to give a greater emotional response if the deaths are flashy, unexpected, instant rather than slow (both in regard to each individual and the length of the disaster), could happen to anyone at any time, and are inversely correlated with age (people care much less if old people die and much more if children do). This would explain why 9/11, school shootings, or shark attacks have a much greater emotional response than covid, or the classic comparison of 9/11 to the flu. It would also help if the disaster was international. So a lot probably depends on the circumstances of the AI disaster.

A new and unfamiliar disaster could also come with fewer preconceptions that the size of the threat is bounded above by previous instances and that we can deal with it with known tools like medicine and vaccines in the case of pandemics. On the other hand, it could have the effect of making people set an upper bound on the possible size of future AI disasters.

It also seems to me like it would be a lot more actionable, easier, and less costly to regulate AI research than to put effective measures in place to prevent future pandemics, so the reluctance should be less.

comment by TekhneMakre · 2022-06-15T05:46:20.474Z · LW(p) · GW(p)

would at least succeed at introducing extraordinarily imposing costs

How could that work? What does it look like? How can you in practice e.g. ban all GPU clusters? You'd be wiping out a ton of non-AI stuff. If you don't, then AI stuff can just be made to look like non-AI stuff. Just banning "stuff like the stuff that caused That One Big Accident" seems like it doesn't do that much to slow AGI research. 

Replies from: lc
comment by lc · 2022-06-15T05:53:46.934Z · LW(p) · GW(p)

How could that work? What does it look like?

On the weak end, it looks like all the hoops that biotech companies have to jump through to get approval from the FDA and crew, except as applied to AI and ML companies. On the strong end, it looks like the hoops that, say, a new nuclear fusion startup would have to jump through.

You'd be wiping out a ton of non-AI stuff

Correct. You may have a lingering intuition that Congress would refuse to do this because it would prevent so much economic "growth", but they did the same thing, effectively, with nuclear power.

Just banning "stuff like the stuff that caused That One Big Accident" seems like it doesn't do that much to slow AGI research.

In practice it may not, but I expect it would extend timelines a little, depending on how much time we actually had between the incident and the really major "accidents".

Replies from: TekhneMakre
comment by TekhneMakre · 2022-06-15T06:09:42.541Z · LW(p) · GW(p)

Hm.... So we're not talking about banning GPUs, we're talking about banning certain kinds of organizations. Like, DeepMind isn't allowed to advertise as an AI research place, isn't allowed to publish results, and so on; and they have to have a bunch of operational security and buy-in from employees and lie to their governments, or else relocate to somewhere with less restrictive regulations; and investors and clients maybe have to do shenanigans. Is the commitment to the ban strong enough to lead to military invasions to enforce the ban globally? Relocating to a less Western country is enough of a cost to slow down research a little, maybe, yeah. There's still nuclear power plants in non-US places, and my impression is that there's biotech research that's pretty sketchy by US / Western standards going on in other places (e.g. Wuhan?). 

Replies from: lc
comment by lc · 2022-06-15T06:28:35.870Z · LW(p) · GW(p)

Hm.... So we're not talking about banning GPUs, we're talking about banning certain kinds of organizations. Like, DeepMind isn't allowed to advertise as an AI research place, isn't allowed to publish results, and so on; and they have to have a bunch of operational security and buy-in from employees and lie to their governments, or else relocate to somewhere with less restrictive regulations; and investors and clients maybe have to do shenanigans.

Correct, and a bunch of those things you listed even push them towards operational adequacy [LW · GW] instead of being just delaying tactics. I'd be pedantic and say DeepMind is probably the faction that causes the disaster in this tail-end scenario and is thus completely dismantled, but that's not really getting at the point.

Is the commitment to the ban strong enough to lead to military invasions to enforce the ban globally?

Not necessarily, and that would depend on the particular severity of the event. If AI killed a million-plus young people, I think it's not implausible.

If all of the relevant researchers are citizens of or present in the U.S. and U.K., however, and thus subject to U.S. and U.K. law, and there's no other country with strong enough network effects, then it can still have a tremendous, outsized effect on research progress. Note that the FDA seems to degrade the ability of the global medical establishment to accomplish groundbreaking research without having some sort of global pseudo-jurisdiction, just by preventing it from happening in the U.S., with downstream effects of that for developing nations. People have tried going to e.g. the Philippines and doing good nuclear power work there (link pending). Unfortunately the "go to ${X} and do ${Y} there if it's illegal in ${Z}" strategy rarely tends to be workable in practice for goods more complicated than narcotics; you lose all of that nice Google funding, for one.

There's still nuclear power plants in non-US places, and my impression is that there's biotech research that's pretty sketchy by US / Western standards going on in other places (e.g. Wuhan?). 

Like what? It seems qualitatively apparent to me that there is less going on in biotech than in IT, because the country that does most of the world's innovation has outlawed it. When China's researchers get caught doing sketchy stuff like CRISPR, the global medical establishment applies some light pressure and they go to jail. We would outlaw AI research like they have effectively outlawed gene editing. There would be a bunch of second order effects on the broader IT industry but we would still, kind of, accomplish the primary goal.

Replies from: TekhneMakre
comment by TekhneMakre · 2022-06-15T07:16:13.692Z · LW(p) · GW(p)

(I'm not sure about this, thinking aloud; you may be right.)

AI is hard to regulate because 

  1. It's hard to understand what it is, hence hard to point at it, hence hard to enforce bans. For nuclear stuff, you need lumps of stuff dug out of mines that can be detected by waving a little device over it. For bio, you have to have, like, big expensive machines? If you're not just banning GPUs, what are you banning? Banning certain kinds of organizations is banning branding, and it doesn't seem that hard to do AGI research with different branding that still works for recruitment. (This is me a little bit changing my mind; I think I agree that a ban could cause a temporary slowdown by breaking up conspicuous AGI research orgs, like DM or whatnot, but I think it's not that much of a slowdown.) How could you ban compute? Could you ban having large clusters? What about networked piece-meal compute? How much slower would the latter be?
  2. It looks like the next big superweapon. Nuclear plants are regulated, but before that, and after we knew what nuclear weapons meant, there was an arms race and thousands of nukes made. This hasn't as much happened for biotech? The ban on chemical / bio weapons basically worked?
  3. Its inputs are ubiquitous. You can't order a gene synthesis machine for a couple hundred bucks with <week shipping, you can't order a pile of uranium, but you can order GPUs, on your own, as much as you want. Compute is fungible, easy to store, cheap, safe (until it's not), robust, and has a thriving multifarious economy supporting its production and R&D.  
  4. It's highly shareable. You can't stop the signal, so you can't stop source code, tools, and ideas from being shared. (Which is a good thing, except for AGI...) And there's a fairly strong culture of sharing in AI.
  5. It's highly scalable. Source code can be copied and run wherever, whenever, by whoever, and to some lesser extent also ideas. Costly inputs more temper the scalability of nuclear and bio stuff.
  6. Prerequisite knowledge is privately, individually accessible. It's easy to, on your own without anyone knowing, get a laptop and start learning to program, learning to program AI, and learning to experiment with AI. If you're super talented, people might pay you to do this! I would guess that this is a lot less true with nuclear and bio stuff?
  7. There's lots of easily externally checkable benchmarks and test applications to notice progress. 
comment by MSRayne · 2022-06-17T00:25:46.402Z · LW(p) · GW(p)

I think the only way governments could have any hope of preventing AI progress is by actually physically taking control of the factories all over the world that produce GPUs and making them require a license to use or else just discontinuing their manufacture altogether. And that just means people will try to invent other computing substrates to use for it besides GPUs. And I don't think it's very plausible even this will happen. Probably, as other commenters have said, the reaction would be bungled and only make matters worse.

comment by frontier64 · 2022-06-15T19:37:23.716Z · LW(p) · GW(p)

because Covid did not come from U.S. gain-of-function labs

Do you have any particular reason to believe this?

Replies from: lc
comment by lc · 2022-06-15T19:48:09.961Z · LW(p) · GW(p)

I can say with confidence that there is at least one reason: it didn't come from the United States 😜

comment by Botahamec · 2022-06-15T00:47:14.714Z · LW(p) · GW(p)

I've never seen that argument you're responding to before. Admittedly, I'm probably only thinking this in hindsight, but it seems like there are a ton of counterarguments, in addition to what you've presented. There isn't a large opposition to OSHA or the USDA.

That being said, I don't agree with the cause being about natural vs anthropogenic problems. I think the difference might be how much of an impact the decisions have on most people (rather than just companies). There's no way I can think of to prove either is correct, and there's certainly more than one factor involved, so a combination of the two is possible. My intuition is that the impact on the general population is a more important distinction.