Upcoming AI regulations are likely to make for an unsafer world
post by Shmi (shminux) · 2023-06-03T01:07:35.921Z · LW · GW · 14 comments
TL;DR: I am rather confident that involving governments in regulating AI development will make the world less safe, not more, because governments are basically incompetent and their representatives are beholden to special interests.
Governments can do very few things well, some number of things passably, and a lot of things terribly. We tolerate it because the alternative is generally worse. Regulations are needed to keep people and companies from burning the commons, and to create more commons. The environment, infrastructure, public health, the military, and financial stability all require enforceable regulations. Even there, governments are doing pretty poorly (remember the FDA, the CDC, and their siblings in other countries, as well as the WHO?), but having a free-for-all would likely be worse.
How the proponents of AI regulations expect it to go: the governments make it hard to hoard GPUs, to do unsanctioned AI research, or to use the whole internet to train a model without permission. Sort of like HIPAA in the US makes it harder to obtain someone's personal health data without their permission.
How it is likely to go: companies will pretend that their all-new, more capable, and more dangerous AI model is just a slightly improved old model, to avoid costly re-certification (see the Boeing 737 MAX 8 debacle).
14 comments
comment by Tamsin Leake (carado-1) · 2023-06-03T01:19:37.316Z · LW(p) · GW(p)
even the very vague general notion that the government is regulating at all could maybe help make investment in AI riskier, which is a good thing.
the main risk i'm worried about is that it brings more attention to AI and causes more people to think of clever AI engineering tricks.
Replies from: lahwran
↑ comment by the gears to ascension (lahwran) · 2023-06-03T06:26:36.745Z · LW(p) · GW(p)
Enthusiastic strong upvote on both OP and this reply. I think overall it's very slightly good, mostly because those engineering tricks were already coming and it gives us very slight breathing room to have regulation on the table. It might push individual people with small computers to be even less careful with dangerous narrow systems due to fear of it being their last chance to experiment, though.
comment by Seth Herd · 2023-06-03T01:25:48.771Z · LW(p) · GW(p)
Did you actually mention a downside vs. no regulation?
I think this is a vitally important question that we should be discussing.
The best argument I've heard is that regulation slows down progress even when it's not well done.
Regulatory capture is a real thing, but even that would limit the number of companies developing advanced AI.
Replies from: rudi-c
↑ comment by Rudi C (rudi-c) · 2023-06-04T05:10:28.468Z · LW(p) · GW(p)
Limiting advanced AI to a few companies is guaranteed to make for normal dystopian outcomes; its badness is in-distribution for our civilization. Justifying an all-but-certain bad outcome by speculative x-risk is just religion. (AI x-risk in the medium term is not at all in-distribution, and it is very difficult to bound its probability in any direction. I.e., it's a Pascal's mugging.)
Replies from: Seth Herd
↑ comment by Seth Herd · 2023-06-04T20:03:07.076Z · LW(p) · GW(p)
Huh? X-risk isn't speculative; it's theory. If you don't believe in theory, you may be in the wrong place. It's one thing to think a theory is wrong and argue against it. It's another to dismiss it as "just theory".
It's not Pascal's Mugging, because that is about a priori incredibly unlikely outcomes. Everyone with any sort of actual argument or theory puts AI x-risk between 1% and 99% likely.
Replies from: shminux
↑ comment by Shmi (shminux) · 2023-06-04T21:10:20.807Z · LW(p) · GW(p)
Extinction-level AI x-risk is a plausible theoretical model without any empirical data for or against.
Replies from: Kaj_Sotala, Seth Herd
↑ comment by Kaj_Sotala · 2023-06-06T18:25:40.556Z · LW(p) · GW(p)
without any empirical data for or against.
I think there's plenty of empirical data, but there's disagreement over what counts as relevant evidence and how it should be interpreted. (E.g. Hanson and Yudkowsky both cited a number of different empirical observations in support of their respective positions, back during their debate.)
Replies from: shminux
↑ comment by Shmi (shminux) · 2023-06-06T23:15:25.003Z · LW(p) · GW(p)
Right. I'd think the only "empirical evidence" that counts is evidence accepted by both sides as good evidence. I cannot think of any good examples.
↑ comment by Seth Herd · 2023-06-04T22:16:44.994Z · LW(p) · GW(p)
Right. So a really epistemically humble estimate would put the extinction risk at 50%. I realize this is arguable, and I think you can bring a lot of relevant indirect evidence to bear. But the claim that it's epistemically humble to estimate a low risk seems very wrong to me.
Replies from: shminux
↑ comment by Shmi (shminux) · 2023-06-04T22:28:07.431Z · LW(p) · GW(p)
I agree that either a very low or a very high estimate of extinction due to AI is not... epistemically humble. I asked a question about it: https://www.lesswrong.com/posts/R6kGYF7oifPzo6TGu/how-can-one-rationally-have-very-high-or-very-low [LW · GW]
Replies from: Seth Herd
↑ comment by Seth Herd · 2023-06-06T18:15:51.051Z · LW(p) · GW(p)
Ah! I read that post, so that was probably partly shaping my response. I had been thinking about this since Tyler Cowen's "epistemic humbleness" argument for not worrying much about AI x-risk. I think he's applying similar probabilities to all of the futures he can imagine, with human extinction being only one of many. But that's succumbing to availability bias in a big way.
I agree with you that a 99% p(doom) estimate is not epistemically humble, and I think it sounds hubristic and causes negative reactions.
comment by Dagon · 2023-06-03T16:45:20.363Z · LW(p) · GW(p)
I tend to agree with the underlying model of the human aggregate behavior called "governments". There's enough capture and manipulation to make most regulation rather ineffective.
HOWEVER, that's not evenly distributed: there's a lot of wasteful or neutral regulation, a bit of negative-value regulation, and a bit of positive. It's not clear to me that AI regulation will be net-harmful. It's likely to have the same mix as other regulation: annoying, useless, and wrong-headed, with a few nuggets of actual improvement. By the nature of AI risk, if one of those nuggets lands well, it could be massively positive.
comment by JoeTheUser · 2023-06-04T01:15:36.812Z · LW(p) · GW(p)
Regulations are needed to keep people and companies from burning the commons, and to create more commons.
I would add that in modern society, the state is the entity tasked with protecting the commons because private for-profit entities don't have an incentive to do this (and private not-for-profit entities don't have the power). Moreover, it seems obvious to me that stopping dangerous AI should be considered a part of this commons-protecting.
You are correct that the state's commons-protecting function has been limited and perverted by private actors quite a few times in history, notably in the last 20-40 years in the US. These phenomena (regulatory capture, corruption, and so forth) have indeed damaged the commons. Sometimes these perversions of the state's function have allowed the protections to be simply discarded, while at other times they have allowed large enterprises to impose a private tax on regulated activity while still accepting some protections. In the case of the FAA, for example, while the 737 MAX debacle shows all sorts of dubious regulatory capture, broadly speaking air travel is highly regulated and that regulation has made it overall extremely safe (if only it could be made pleasant now).
So it's quite conceivable, given the present qualities of state regulation, that regulating AI might not do much or any good. But as others have noted, there's no reason to claim the result would be less safety. Your claim seems to lean too heavily on "government is bad" rhetoric. I'd say "government is weak/compromised" is a better description.
Moreover, the thing with the discussion of regulatory capture is that none of the problems described here gives the slightest indication that there is some other entity that could replace the state's commons-protecting function. Regulatory capture is only a problem because we trust the capturing entities less than the government. That is to say: if someone is aiming to prevent AI danger, including AI doom/x-risk, that someone wants a better state, a state capable of independent judgement and strong, well-considered regulation. That means either replacing the existing state or improving the given one, and my suspicion is that most would prefer improving the given state(s).
comment by jamesharbrook · 2023-06-03T16:00:03.275Z · LW(p) · GW(p)
"AI regulation will make the problem worse" seems like a very strong statement that is unsupported by your argument. Even in your scenario where large training runs are licensed, this will make things more expensive, increase the cost of training runs, and generally slow things down, particularly if it prevents smaller AI companies from pushing the frontier of research.
To take your example of GDPR, the draft version of the EU's AI Act seems so convoluted that it will cost companies a lot of money to comply with, and will make investing in small AI startups riskier. Even though the law is aimed at issues like data privacy and bias, the costs of compliance will likely result in slower development (and, based on the current version, fewer open-source models), since resources will need to be diverted away from capabilities work into compliance and audits.