The Regulatory Option: A response to near 0% survival odds
post by Matthew Lowenstein · 2022-04-11T22:00:50.311Z · LW · GW · 21 comments
This is inspired by Eliezer’s “Death with Dignity” [LW · GW] post. Simply put, AI Alignment has failed. Given the lack of Alignment technology AND a short timeline to AGI takeoff, chances of human survival have dropped to near 0%. This bleak outlook only considers one variable (the science) as a lever for human action. But just as a put option derives its value not merely from current prices and volatility, but also from time to expiration—so too do our odds of success. To hew more closely to the options metaphor, the expected value of human civilization hinges on our AGI timeline. Is there a way to buy more time?
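(To make the options metaphor concrete, here is the textbook Black-Scholes price of a European put; nothing in it is specific to AI, it only shows that time to expiration enters the value alongside price and volatility.)

$$P = K e^{-rT}\,N(-d_2) - S_0\,N(-d_1), \qquad d_{1,2} = \frac{\ln(S_0/K) + (r \pm \sigma^2/2)\,T}{\sigma\sqrt{T}}$$

Here $S_0$ is the current price, $K$ the strike, $r$ the risk-free rate, $\sigma$ the volatility, and $N$ the standard normal CDF. A put that is far out of the money expires worthless when time runs out, so nearly all of its value is time value: every extra bit of $T$ is another chance for the large move it needs to pay off. The proposal below is, in effect, a way of buying more $T$.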
A modest proposal: An FDA for Artificial Intelligence
The AI world has an instinctive revulsion to regulation. This is sensible. Committees of bloodless bureaucrats have an abysmal track record at achieving their stated goals. Eliezer has joked that if the government funded AI research, it might be a good thing because progress would stall. Let’s take that idea seriously.
What if we shifted efforts into regulating AI, domestically and globally? While regulation has a terrible track record at making consumers, investors, and workers better off, it excels at stalling growth. Scott has done yeoman’s work tracking the baleful impact of the FDA on pharmaceutical development and on health in general. I’m pretty skeptical of social scientists’ ability to measure regulation with stylized facts, but there is reason to think that the FDA effect generalizes.
I propose we enter the lobbyist business, with the goal of erecting similar administrative hurdles to AI research. The benefits of such hurdles are obvious. They would throw a spanner into the works of AI companies around the globe. It is impossible to say ahead of time how much they would succeed at halting AI improvements. But the possibility that they might do so—perhaps significantly—should raise the chances of human survival at least somewhat above 0%.
In the possible worlds with intrusive regulatory agencies, how many make no progress toward AGI at all? I’d guess that in at least 5% of possible worlds, a sufficiently arbitrary and bloated bureaucracy would halt AGI indefinitely. At those odds, Eliezer is probably not even in “die with dignity” mode anymore. Heck, he may even be able to resume working ultra-hard, without breaking his promise to his previous self.
Objection 1: Regulators aren’t smart enough to impede progress
In a comment [LW(p) · GW(p)] that Eliezer endorses, Vaniver raises an immediate objection. Namely:
the actual result will be companies need large compliance departments in order to develop AI systems, and those compliance departments won't be able to tell the difference between dangerous and non-dangerous AI.
This logic is exactly backwards. To put a fine point on it, Vaniver thinks that because compliance departments won’t be able to distinguish between dangerous and non-dangerous AI, they will err toward permitting dangerous AI.
This is not remotely how regulators or compliance departments think. They will err on the side of caution and are more likely to put the kibosh on all research than to allow dangerous research. At near 0% survival chances, this is easily a worthwhile tradeoff.
Regulation is robust to entropy
When you’re trying to regulate the pharma industry, entropy is your enemy. Word the bill slightly wrong, hire the wrong people, get a judge who assigns damages too aggressively, and you’ve accidentally made it illegal to sell insulin or research heart disease. When you’re writing building codes, make them a tad too stringent and you’ve accidentally made it illegal for poor people to live. Most ways in which regulation can go wrong stymie growth and productive activity. That’s why for most activities in most worlds, the unregulated world outperforms the regulated world.
But our goal is to throw a spanner in the works—the more spanners, the better. Since we want to hinder progress toward AGI, the more vague wording, bizarre giveaways to special interests, and arbitrary powers granted to bureaucrats and courts, the better. To an AI Safety researcher, the worry about AGI affecting “jobs” or inducing ethnic bias is a joke; it’s patently trivial compared to the literal extinction of humanity forever.
But in the world of regulation—it’s a gift! The more trivial concerns the better! If the hacks at the Artificial Intelligence Regulatory Agency want DeepMind to fill out a lot of forms about how their latest project will impact jobs, well, that’s time away from blowing up the world. Suppose the courts say OpenAI can be held liable for damages owing to discrimination caused by AI. That’s a day’s worth of committee meetings per week in which they won’t be creating a superintelligence.
This also makes the regulation easier to pass. Normally, drafters interested in regulation want to preserve as much productive potential in the industry as possible. When drafting environmental regulation, you have to strike a balance between protecting the water supply and letting people frack for natural gas. In political terms, you have to strike a balance between environmental and petroleum-industry lobbyists. But since we want to be as obstructive as possible, we can offer everything to everyone.
Politicians are afraid of AI’s impact on jobs? Done, put it in the bill. Democrats are worried that AI will discriminate against minorities? Excellent, added. Republicans are afraid AGI might promote non-binary gender identities? Quite right—can’t have that! (Ideally, we would make it illegal to discriminate against and to promote non-binary gender identities).
This also addresses another objection of Vaniver’s:
…if someone says "I work on AI safety, I make sure that people can't use language or image generation models to make child porn" or "I work on AI safety, I make sure that algorithms don't discriminate against underprivileged minorities", I both believe 1) they are trying to make these systems not do a thing broader society disapproves of, which is making 'AI' more 'safe', and 2) this is not attacking the core problem, and will not generate inroads to attacking the core problem.
These are concerns that an engineer actually working on AI Safety should have. But if our only goal is to throw barriers in the way of AI research, we want categories as vast and ill-defined as possible. Ideally, everything from “AGI’s impact on the flavor of chocolate” to “AGI’s effect on the wealth gap between East and West Timor” will fall under the purview of AI Safety compliance departments.
Objection 2: How is it nobody has thought about this?
I’m going to get a lot of people linking me to previous posts [? · GW] on LW. Some are quite good. But they are beside the point. Previous posts are about crafting intelligent regulation that will allow AI research to continue without endangering the world. That ship has sailed. Now we want the stupidest possible regulation so long as it gums up the works.
Also, previous posts ask, “if we were to regulate, how might it work?” That is not this post. This post asks: should I actually set in motion an effort to regulate AI?
Objection 3: This is politically impossible
One relevant objection [LW · GW] that Larks raises is that the politics are infeasible:
We don't want the 'us-vs-them' situation that has occurred with climate change, to happen here. AI researchers who are dismissive of safety law, regarding it as an imposition and encumbrance to be endured or evaded, will probably be harder to convince of the need to voluntarily be extra-safe - especially as the regulations may actually be totally ineffective.
The only case I can think of where scientists are relatively happy about punitive safety regulations, nuclear power, is one where many of those initially concerned were scientists themselves. Given this, I actually think policy outreach to the general population is probably negative in expectation.
I agree that this is possible. But given how far under water our put options are, it seems at least worth trying. What if it turns out to be not that hard to convince governments to kneecap their research capabilities? Does that even seem farfetched?
FWIW, I think Larks is mistaken. An us-vs-them situation is perfectly compatible with our goal, since our goal is not to protect good AI research but to slow down all AI research. I also think Larks overestimates both the opposition of AI researchers and their input in drafting legislation. (Does American regulation seem like it requires the consent of the best and brightest industry experts?)
Objection 4: What about China?
The reason you can’t just convince everyone at DeepMind to stop working on AGI is that barriers to entry are too low. If DeepMinders all went off into the wilderness to become hunter-gatherers, someone else would just pick up the slack. Eliezer writes:
Even if DeepMind listened, and Anthropic knew, and they both backed off from destroying the world, that would just mean Facebook AI Research destroyed the world a year(?) later.
Some will argue that this holds at the country level. If the US regulates AI research to death, it will just mean that China blows up the world a few years later.
I have two responses to this.
One, a few years is a fantastic return on investment if we are really facing extinction.
Two, do not underestimate the institutional imperative: “…The behavior of peer companies, whether they are expanding, acquiring, setting executive compensation or whatever, will be mindlessly imitated.”
Governments work this way too. Governments like to regulate. When they see other governments pass sweeping regulatory measures, the institutional imperative pushes them to feel jealous and copy it. Empirically, I believe this is already the case: the Chinese regulatory framework is heavily influenced by copying America (and Japan).
Who should do this?
I don’t know that the people reading this forum are the best positioned to become lobbyists. For one, we still need people trying to solve the alignment problem. If you are working on that, the Ricardian dynamics almost definitely favor you continuing to do so.
However, I am a wordcel who barely knows how to use a computer. After discovering LessWrong, I instantly regretted not devoting my life to the Alignment problem, and regretted the fact that it was too late for me to learn the math necessary to make a difference. But—if the regulatory option seems sensible, I am willing to redirect my career to focus on it and to try to draft others with the right skills to do so as well.
If, however, you assure me that there are already brilliant lobbyists working day and night to do this, and it turns out that it’s harder than I thought, I will desist. Moreover, if there is something I’ve overlooked and this is a bad idea, I will desist.
But if it’s worth a try, I’ll try.
Is it?
21 comments
Comments sorted by top scores.
comment by Evan R. Murphy · 2022-04-12T07:28:53.717Z · LW(p) · GW(p)
This is a clever idea, using what's often considered a bug of regulation as a feature instead. I need to think on this, not sure whether I think it's a good approach or not yet...
Simply put, AI Alignment has failed.
I do think this is an overstatement. There's no misaligned AGI yet that I'm aware of, so how has alignment failed? I agree with the thrust of what you were saying though, that it feels needlessly risky to bet everything on the technical alignment lever when the governance & strategy lever is available too.
comment by Dagon · 2022-04-12T19:30:17.546Z · LW(p) · GW(p)
Interesting idea. I think a problem not brought up is that of https://en.wikipedia.org/wiki/Regulatory_capture . One of the reasons the FDA is so effective (at impeding progress) is that its stated mission is different from its actual supporters and constituents. If you see it through the lens of "ensuring monopoly and profit for established members of the drug industry", it's amazingly good at what it does.
The concept applied to AI oversight gets a lot scarier. It won't take long for AI creators (assisted by their fledgling tool AIs) to take it over so that it's actually helping them (and probably slowing down their competitors, which might or might not be valuable to us, the victims), either by giving them cover for not-obviously-dangerous-so-now-officially-permitted activities, or by enforced sharing of data, or by other things that sound wise for a regulatory agency but are contra to OUR goals of slowing things down.
Replies from: tamgent
↑ comment by tamgent · 2022-05-07T17:59:41.232Z · LW(p) · GW(p)
So regulatory capture is a thing that can happen. I don't think I got a complete picture of your image of how oversight for dominant companies is scary. You mentioned two possible mechanisms: rubber stamping things, and enforcing sharing of data. It's not clear to me that either of these is obviously contra the goal of slowing things down. Like, maybe sharing of data (I'm imagining you mean to smaller competitors, as in the case of competition regulation) - but data isn't really useful alone, you need compute and technical capability to use it. More likely would be forced sharing of the models themselves, but this isn't the granting of an ongoing capability, although it could still be misused. Mandating sharing of data is less likely under regulatory capture though. And then the rubber stamping: well, maybe sometimes something would be stamped that shouldn't have been, but surely some stamping process is better than none? It at least slows things down. I don't think receiving a stamp wrongly makes an AI system more likely to go haywire - if it was going to, it would anyway. AI labs don't just think, hm, this model doesn't have any stamp, let me check its safety. Maybe you think companies will do less self-regulation if external regulation happens? I don't think this is true.
comment by trevor (TrevorWiesinger) · 2022-04-12T00:32:08.801Z · LW(p) · GW(p)
https://en.wikipedia.org/wiki/TERCOM#Comparison_with_other_guidance_systems
Stronger AI is needed to make nuclear missiles that are better at locating their targets when communication is jammed, as well as outmaneuvering other missiles ("juke").
It's not regulation, it's arms control. It's buried in secrets, including many that are even more disturbing than whatever I'm willing to talk about on a public internet forum.
Policy experience like yours is in short supply, please move to DC and get in contact with EA groups there. You will learn plenty from working with them in-person, and rapidly begin contributing significantly.
comment by Aiyen · 2022-04-11T23:44:07.262Z · LW(p) · GW(p)
I advise against this in the strongest possible terms. While you are entirely correct that regulation is good at suppressing progress, and that that might buy time before transformational AI is reached, it is also likely to interfere with any efforts to actually develop alignment, at least when such efforts reach the stage that they ought to be integrated into actual coding projects. This could easily reduce the chance of FAI ever being developed, and while Eliezer's despair might justify desperation plays that would be otherwise unwise, it seems extremely likely that this would make things worse.
It also would create a very substantial suffering risk. If an FDA for AI actually managed to preside over controllable superintelligence, we run the risk of the same poor incentives and plain evil directing humanity's future. This ought to be prevented at any cost. For the love of all that is bright and beautiful, please do not do this.
Replies from: daniel-kokotajlo, tamgent
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-04-12T11:21:31.248Z · LW(p) · GW(p)
Can you explain more about how FDA for AI could lead to s-risk? "Risk of suffering on an astronomical scale." I'm skeptical. The FDA may be evil, but it's not that evil... ;)
Replies from: tamgent, Matthew Lowenstein
↑ comment by Matthew Lowenstein · 2022-04-12T15:42:13.965Z · LW(p) · GW(p)
I take Aiyen's concern very, very seriously. I think the most immediate risk is that the AI Regulatory Bureau (AIRB) would regulate real AI safety, so MIRI wouldn't be able to get anything done. Even if you wrote the law saying "this doesn't apply to AI Alignment research," the courts could interpret that sufficiently narrowly such that the moment you turn on an actual computer you are now a regulated entity per AIRB Ruling 3A.
In this world, we thought we were making it harder for DeepMind to conduct AI research. But they have plenty of money to throw at compliance, so it barely slows them down. What we actually did was make it illegal for MIRI to operate.
I realize the irony in this. There is an alignment problem for regulation, which, while not as difficult as AI, is also quite hard.
Replies from: tamgent
↑ comment by tamgent · 2022-05-04T18:09:21.390Z · LW(p) · GW(p)
I'd find it really hard to imagine MIRI getting regulated. It's more common that regulation steps in where an end user or consumer could be harmed, and for that you need to deploy products to those users/consumers. As far as I'm aware, this is quite far from the kind of safety research MIRI does.
Sorry, I must be really dumb, but I didn't understand what you mean by the alignment problem for regulation. Aligning regulators to regulate the important/potentially harmful bits? I don't think this is completely random: even if regulators focus more on trivial issues, they're more likely to support safety teams (although sure, the models they'll be working on making safe won't be as capable; that's the point).
↑ comment by tamgent · 2022-05-04T18:15:35.403Z · LW(p) · GW(p)
I'm a bit confused about why you think it's so clearly a bad idea, your points weren't elaborated at all, so I'd absolutely love some elaboration by you or some of the people that voted up your comment, because clearly I'm missing something.
- on the reduction of chance of FAI being developed, sure, some of this of course would happen, but slowing down development of solutions to a problem (alignment problem) whilst slowing down growth of the problem itself even more is surely net good for stopping the problem? Especially if you're really worried about the problem and worried it'd happen faster than you could think of good solutions for!
- waiting for elaboration on the suffering point but let's assume you've got good reasons there
↑ comment by Aiyen · 2022-05-05T16:48:01.841Z · LW(p) · GW(p)
Hey, that’s a great question. When I get a bit more time I’ll write a clarification. Sorry for the delay.
Replies from: tamgent
↑ comment by tamgent · 2022-05-05T16:48:51.553Z · LW(p) · GW(p)
No worries, thank you, I look forward to it
Replies from: Aiyen
↑ comment by Aiyen · 2022-05-05T20:17:54.593Z · LW(p) · GW(p)
Alright, if we want to estimate the likely effects of allowing government regulation of AI, it's worth considering the effects of government regulation of everything else. The FDA's efforts to slow the adoption of new medicines kill far more people than they save (at least according to SlateStarCodex, which has a lot of excellent material on the topic). It is not uncommon for them to take actions that do not even pretend to be about patient safety, such as banning a drug because initial trials make it appear so helpful that it would be "unethical" to have a control group that was denied it in further studies, but apparently not so helpful that it's worth allowing the general public access. I highly recommend Zvi Mowshowitz's blog posts on the subject; he's collected more sources and examples on the topic than this margin can contain.
There is a very common reaction I have noticed to these sorts of events, where most people brush them off as "just how the world works". A patient dying due to having been deliberately refused medicine is treated as a tragedy, but no one is actually at fault. Meanwhile, a patient who is slightly inconvenienced by an officially approved treatment is treated as strong evidence that we need more medical regulation. Certainly this reaction is not universal, but it's common enough to create an inferential gap between general perceptions of regulation and a claim like "the FDA are mass murderers". However, whether or not you want to call it murder, the real-world effect of medical regulation is primarily to make more people sick and dead.
This raises two concerns about having an "FDA for AI", as the original post recommends. First, that the same sorts of errors would occur, even in the absence of malice. And secondly, that malice would in fact be present, and create deliberate problems for the general population. How likely is this?
Enough errors would almost certainly occur in AI regulation to make it net negative. Even leaving aside the alarming example of the FDA, we can consider countless other examples of what regulation looks like in the real world. For instance, ALARA laws demand nuclear energy have radiation emissions As Low As Reasonably Achievable, where Reasonably Achievable means that if nuclear power becomes cheaper than its competitors, clearly it can afford to spend more money on shielding. There is no cost-benefit analysis here, just the attempt to look good by "standing up to nuclear risks". Never mind the fact that insisting that nuclear be made ever more expensive cripples a likely source of both economic growth and carbon-free energy, and must at some point become unreasonable, as the laws are never lifted despite how low radiation emissions may become. Or the actively counterproductive responses to Covid (again, Zvi explains this topic in as much detail as you care to read, so I will not reproduce it here, but you will likely greatly enjoy his posts on the subject). Or laws that demand everything from not using foreign dredging ships to expand our ports, to getting a college degree before braiding hair. And I defy anyone exposed to American public schools to claim that these are in any way a sane use of power.
All of these examples, moreover, are dealing with subjects relatively well understood by regulators. While the precise details of medical safety might be beyond a bureaucrat, they generally have a concept of "safe and effective" vs "unsafe or ineffective". Likewise with nuclear power and college degrees. But AI risks are complex and counterintuitive: try explaining to a congressman that your system is totally safe and productive unless it develops mesa optimizers, or unless someone builds an agentic version. Insofar as the counterproductive results of regulation are the result of misunderstandings and poor incentives rather than deliberate evil, a field where there is bound to be vastly more misunderstanding should be at least as prone to regulation backfiring.
As for deliberate evil, it's worth considering the track record of regimes both historically and in the present day. Even leaving aside the horrors of feudalism, Nazism and twentieth century Communism, Putin is currently pursuing a war of aggression complete with war-crimes, Xi is inflicting unspeakable tortures on captive Uighurs, Kim Jong-Un is maintaining a state which keeps its subjects in grinding poverty and demands constant near-worship, and the list continues. It should be quite obvious, I hope, why the idea of such a regime gaining controllable AI would produce an astronomical suffering risk.
The obvious counterargument here is to place liberal democratic regimes in a special category, such that one wouldn't expect them to engage in the same sorts of abuses. After all, the proposal is to create an FDA for AI, not a North Korean Politburo, and the horrors I just cited are all from non liberal democratic regimes. However, there are multiple problems with that.
The first problem is that while acts like denying people medicine and letting them die quietly are less flashy than locking Uighurs up in camps, they are no less evil. If the actions of the FDA were taken by a political opponent, we would have no hesitation in declaring them atrocities. It seems unwise to conclude that people who take blatant actions to kill innocents for political convenience would be safe custodians of AI, especially if that AI advanced to the point where whatever accountability to the public democratic regulators still have is lost.
The second problem is that to the extent that liberal democracies behave better than their autocratic counterparts, this is often because of better incentives, and those incentives would not necessarily apply to AI governance. Western nations tend to have both liberal democracy and the rest of the Enlightenment values: individualism, free markets, tolerance and the like. These other values allow them to be rich and stable, removing many incentives for vicious acts of desperation, as well as creating an electorate which may punish certain types of abuses. However, the electorate does not understand AI, removing the possibility of meaningful checks on poor decisions (and given the other regulatory failures, it's worth questioning the extent to which electoral checks on poor decisions actually occur; does that model match the world we see?). Moreover, the possibility of a rival nation obtaining unchallengeable power might be expected to create the sort of desperation we normally associate with rogue states.
Finally, the third problem with that argument is that it borders on reference class tennis. Citing liberal democracy as a special category doesn't make sense without a proposed mechanism for such regimes actually making better decisions regarding AI! I am reminded of Eliezer's story [? · GW] about a speaker at the Singularity Summit calling for "democratic development of artificial intelligence" without having any concept of how to apply democracy to said development. The speaker appeared to simply have positive affect around citing democracy, no more, no less. Unless the voters are specifically concerned about AI, and knowledgeable enough for that concern to do more good than harm, and officials are selected as a result of this who are vastly more moral than almost all other officials, one wouldn't expect liberal democracy to get better results with AI regulation.
There is an inherent risk in discussing politics here, both in the possibility of getting mind-killed and the chance that differences in our background political assumptions may distract from the main point. If we have different priors on the likelihood of various regime types acting wisely and well, we should try to keep that from interfering with the discussion on the specific question of AI regulation and the more general question of how to obtain positive results from AI. (Not that I'm averse to discussing political questions if you want to, but that should for the most part not be in this thread.) But hopefully this should clarify to a degree why I anticipate both severe X risks and S risks from most attempts at AI regulation without being too inflammatory.
Replies from: tamgent, Matthew Lowenstein
↑ comment by tamgent · 2022-05-07T17:43:27.617Z · LW(p) · GW(p)
Thank you for your elaboration, I appreciate it a lot, and upvoted for the effort. Here are your clearest points paraphrased as I understand them (sometimes just using your words), and my replies:
1. The FDA is net negative for health, therefore creating an FDA-for-AI would be likely net negative for the AI challenges.
I don't think you can come to this conclusion, even if I agree with the premise. The counterfactuals are very different. With drugs, the counterfactual of no FDA might be: some people get more treatments, and some die but many don't, and they were sick anyway so need to do something, and maybe fewer die than do with the FDA around, so maybe the existence of the FDA compared to the counterfactual is net bad. I won't dispute this, I don't know enough about it. However, the counterfactual in AI is different. If unregulated, AI progress steams on ahead, competition over the high rewards is high, and if we don't have a good safety plan (which we don't) then maybe we all die at some point, who knows when. However, if an FDA-for-AI creates bad regulation (as long as it's not bad enough to cause an AI regulation winter) then it starts slowing down that progress. Maybe it's bad for, idk, the diseases that could have been solved during the 10 years of slowdown relative to when AI would otherwise have solved cancer, and that kind of thing, but it's nowhere near as bad as the counterfactual! These scenarios are different and not comparable, because the counterfactual of no FDA is not as bad as the counterfactual of no AI regulator.
2. Enough errors would almost certainly occur in AI regulation to make it net negative.
You gave a bunch of examples of bad regulation from non-AI domains (I am not going to bother to think about whether I agree that they are bad regulation, as it's not cruxy) - but you didn't explain how exactly errors lead to making AI regulation net negative. Again, as with the previous claim, I think the counterfactuals likely make this not hold.
3. ...a field where there is bound to be vastly more misunderstanding should be at least as prone to regulation backfiring
That is an interesting claim. I am not sure what makes you think it's obviously true, as it depends on what your goal is. My understanding of the OP is that the goal of the type of regulation they advocate is simply to slow down AI development, nothing more, nothing less. If the goal is to do good regulation of AI, that's totally different. Is there a specific way in which you imagine it backfiring for the goal of simply slowing down AI progress?
4. ...an [oppressive] regime gaining controllable AI would produce an astronomical suffering risk.
I am unsure what point you were making in the paragraph about evil. Was it about another regime getting there first that might not do safety? For a response, see Objection 4 in the OP, which I share and to which I added an additional reason for that not being a real worry in this world.
5. ...unwise to think that people who take blatant actions to kill innocents for political convenience would be safe custodians of AI..
I don't think it's fair to say regulators would be custodians. They have a special kind of lever called "slow things down", and that lever does not mean that they can, for example, seize and start operating the AI. It is not in their power to do that, legally, nor do they have the capability to do anything with it. We are talking here about slowing things down before AGI, not post-AGI.
6. the electorate does not understand AI
Answer is same as my answer to 3. and also similar to OP Objection 1.
And finally to reply to this: "hopefully this should clarify to a degree why I anticipate both severe X risks and S risks from most attempts at AI regulation"
Basically, no, it doesn't really clarify it. You started off with a premise I agreed with or at least do not know enough to refute, that the FDA may be net negative, and then drew a conclusion that I disagree with (see 1. above), and then all your other points were assuming that conclusion, so I couldn't really follow. I tried to pick out bits that seemed like possible key points and reply, but yeah I think you're pretty confused.
What do you think of my reply to 1. - the counterfactuals being different. I think that's the best way to progress the conversation.
↑ comment by Matthew Lowenstein · 2022-05-07T20:02:10.599Z · LW(p) · GW(p)
Hi Aiyen, thanks for the clarification.
(Warning: this response is long and much of it is covered by what tamgent and others have said.)
The way I understand your fears, they fall into four main categories. In the order you raise them and, I think, in order of importance these concerns are as follows:
1) Regulations tend to cause harm to people, therefore we should not regulate AI.
I completely agree that a Federal AI Regulatory Commission will impose costs in the form of human suffering. This is inevitable, since Policy Debates Should Not Appear One Sided [LW · GW]. Maybe in the world without the FAIRC, some AI startup cures Alzheimer’s or even aging a good decade before AGI. In the world with FAIRC, we risk condemning all those people to dementia and decrepitude. This is quite similar to the FDA’s unintended consequences.
Response:
You suggest that the OP was playing reference class tennis, but to me looking at the problem in terms of "regulators" and "harm" is the wrong reference class. They are categories that do not help us predict the answer to the one question we care about most: what is the impact on timelines to AGI?
If we zoom in closer to the object level, it becomes clear that the mechanism by which regulators harm the public is by impeding production. Using Paul Christiano’s rules of reference class tennis [LW · GW], “regulation impedes production” is a more probable narrative (i.e. supported by greater evidence, albeit not simpler) than simply “regulation always causes harm.” At the object level, we see this directly when the FDA fines anyone with the temerity to produce cheaper EpiPens, the Nuclear Regulatory Commission doesn't let anyone build nuclear reactors, etc. Or it can happen indirectly as a drag on innovation. To count the true cost of the FDA, we need to know how many wondrous medical breakthroughs we've already made on Earth-prime.
But if you truly believe that AGI represents an existential threat, and that at present innovation speeds AGI happens before Alignment, then AI progress (even when it solves Alzheimer’s) is on net a negative. The lives saved by curing Alzheimer’s have to be balanced against human extinction--and the balance leaves us way, way in the red. This means that all the regulatory failure modes you cite in your reply become net beneficial. We want to impede production.
By way of analogy, it would be as if Pfizer were nearly guaranteed to be working its way toward making a pill that would instantly wipe out humanity; or if nuclear power actually were as dangerous as its detractors believe! Under such scenarios, the FDA is your best friend. Unfortunately, that is where we stand with AI.
To return to the key question: once it is clear that, at a mechanical level, the things that regulatory agencies do are to impede production, it also becomes clear that regulation is likely to lengthen AGI timelines.
2) The voting public is insufficiently knowledgeable about AI.
I'm not sure I understand the objection here. The government regulates tons of things that the electorate doesn't understand. In fact, ideally that is what regulatory agencies do. They say, "hey, we are a democracy, but you, the demos, don't understand how education works, so we need a department of education." This is often self-serving patronage, but the general point stands: the way regulatory agencies come into being in practice is not because the electorate achieves subject-area expertise. I can see a populist appeal for a Manhattan project to speed up AI in order to "beat China" (or whatever enemy du jour), but this is not the sort of thing that regulators in permanent bureaucracies do. (Just look at Operation Warp Speed; quite apart from the irony in the name, the FDA and the CDC had to be dragged kicking and screaming to do it.)
3) Governments might use AI to do evil things
In your response you write:
As for deliberate evil, it's worth considering the track record of regimes both historically and in the present day. Even leaving aside the horrors of feudalism, Nazism and twentieth century Communism, Putin is currently pursuing a war of aggression complete with war-crimes, Xi is inflicting unspeakable tortures on captive Uighurs, Kim Jong-Un is maintaining a state which keeps its subjects in grinding poverty and demands constant near-worship, and the list continues. It should be quite obvious, I hope, why the idea of such a regime gaining controllable AI would produce an astronomical suffering risk.
I agree, of course, that these are all terrible evils wrought by governments. But I’m not sure what it has to do with regulation of AI. The historical incidents you cite would be relevant if the Holocaust were perpetrated by the German Bureau of Chemical Safety or if the Uighurs were imprisoned by the Chinese Ethnic Affairs Commission. Permanent regulatory bureaucracies are not and never have been responsible for (or even capable of) mission-driven atrocities. They do commit atrocities, but only by preventing access to useful goods (i.e. impeding production).
Finally, one sentence in this section sticks out and makes me think we are talking past each other. You write:
the idea of such a regime gaining controllable AI would produce an astronomical suffering risk
By my lights, this would be a WONDERFUL problem to have. An AI that was controllable by anyone (including Kim Jong-Un, Pol Pot, or Hitler) would, in my estimation, be preferable to a completely unaligned paper clip maximizer. Maybe we disagree here?
4) Liberal democracies are not magic, and we can't expect them to make the right decisions just because of our own political values.
I don't think my OP mentioned liberal democracy, but if I gave that impression then you are quite right I did so in error. You may be referring to my point about China. I did not mean to imply a normative superiority of American or any other democracy, and I regret the lack of clarity. My intent was to make a positive observation that governments do, in fact, mimic each other's regulatory growth. Robin Hanson makes a similar point: that governments copy each other largely because of institutional and informal status associations. This observation is neutral with regard to political system. If we announce a FAIRC, I predict that China will follow, and with due haste.
comment by Evan R. Murphy · 2022-04-12T07:13:24.820Z · LW(p) · GW(p)
...if the regulatory option seems sensible, I am willing to redirect my career to focus on it and to try to draft others with the right skills to do so as well.
If, however, you assure me that there are already brilliant lobbyists working day and night to do this, and it turns out that it’s harder than I thought, I will desist. Moreover, if there is something I’ve overlooked and this is a bad idea, I will desist.
But if it’s worth a try, I’ll try.
I'm not sure about the regulatory approach you're putting forth here, but if you redirected your career to AI governance & strategy or committed substantial time to working on it as you say here, that could be a big contribution. My understanding is that people are desperately needed.
I just heard a talk today with Luke Muehlhauser. He's on the AI governance & policy side of Open Phil. During the talk he said it's a common perception that there are dozens of people working full-time on the important questions in this area, but there really aren't.
Rethink Priorities is hiring for two AI governance & strategy research roles now. You might try FLI too. I bet others in the space are hiring as well or have things they're happy to have help with if contacted.
Replies from: Matthew Lowenstein
↑ comment by Matthew Lowenstein · 2022-04-12T13:11:34.699Z · LW(p) · GW(p)
These are both supremely helpful replies. Thank you.
comment by tamgent · 2022-05-04T18:24:18.822Z · LW(p) · GW(p)
Another response to the China objection is that, just as regulators copy each other internationally, so do academics/researchers. So if you slow down development of research in some parts of the world, you also might slow down development of that research in other parts of the world too. Especially when there's an asymmetry in the openness of publication of the research.
comment by Nicholas / Heather Kross (NicholasKross) · 2022-05-26T21:46:53.631Z · LW(p) · GW(p)
Mostly agree, nitpick on the institutional counterpoint: Europe introduced GDPR, and it doesn't seem like that inspired the US (or, uh, China) to do anything like that. And didn't China already announce a big AI investment?
Replies from: gbear605
comment by azsantosk · 2022-04-13T02:05:21.622Z · LW(p) · GW(p)
I think it is an interesting idea, and it may be worthwhile even if Dagon [LW · GW] is right and it results in regulatory capture.
The reason is, regulatory capture is likely to benefit a few select companies to promote an oligopoly. That sounds bad, and it usually is, but in this case it also reduces the AI race dynamic. If there are only a few serious competitors for AGI, it is easier for them to coordinate. It is also easier for us to influence them towards best safety practices.