Why libertarians are advocating for regulation on AI

post by RobertM (T3t) · 2023-06-14T20:59:58.225Z · 13 comments

Contents

  Who is advocating for regulations?
    Non-libertarians
    Libertarians
    Eliezer Yudkowsky
  The basic argument

Motivation: some people on the internet seem confused about why libertarians (and others generally suspicious of government intervention) are advocating for regulation to help prevent x-risk from AI.  The arguments here aren't novel and generally seem pretty obvious to me, but I wanted a quick reference.


Who is advocating for regulations?

This is obviously the first question: is the premise correct?  Are libertarians uncharacteristically advocating for regulations?

They probably aren't the loudest or most numerous among the people advocating for regulations out of a motivation to reduce AI x-risk, but I'm one such person and I know multiple others.

I tend to sort those advocating for regulations into three buckets.

Non-libertarians

Probably the biggest group, though I have wide error bars here.  Most of the people I know working on the policy & comms side are either liberal, apolitical, or not usefully described by a label like that.  I think people in this group tend to have somewhat more optimistic views on how likely things are to go well, but this isn't a super strong correlation.

Libertarians

Although some of them focus primarily on policy & comms, those I know more often dual-class, with a technical primary & comms secondary.  They tend to have more pessimistic views on our likelihood of making it through.

Eliezer Yudkowsky

I could have put him in the "libertarian" group, but he has a distinct and specific policy "ask".  He's also often misquoted or misunderstood.


The basic argument

There are many arguments for why various regulations in this domain might seem compelling from a non-libertarian point of view, and people generally object to those on various grounds (e.g. accusations of regulatory capture, corruption, tribalism, etc.), so I'll skip trying to make their case for them.

Why might libertarians advocate for (certain kinds of) regulation on AI, given their general distrust of government?

Straightforwardly, the government is not a malicious genie optimizing for the inverse of your utility function, should you happen to ask it for a favor.  Government interventions tend to come in familiar shapes, with a relatively well-understood distribution of likely first- and second-order effects.

If you're mostly a deontological libertarian, you probably oppose government interventions because they violate people's sovereignty; never mind the other consequences.

If you're mostly a consequentialist libertarian, you probably oppose government interventions because you observe that they tend to have undesirable second-order effects, often causing more harm than any good from the first-order effects[1].

But this is a contingent fact about the intersection between the effects of regulations and the values of libertarians.  Many regulations have the effect of slowing down technological progress and economic growth by making it more expensive to operate a business, do R&D, etc.  Libertarians usually aren't fans, since, you know, technological progress and economic growth are good things.

Unless you expect a specific kind of technological progress to kill everyone, possibly in the next decade or few.  A libertarian who believes that the default course of AI development will end with us creating an unaligned ASI that kills us and eats the accessible lightcone is not going to object to government regulations on AI because "regulations bad".

Now, a libertarian who believes this, and is thinking sensibly about the subject[2], will have specific models about which regulations seem like they might help reduce the chance of that outcome, and which might hurt.  This libertarian is not going to make basic mistakes like thinking that the intent of the regulation[3] will be strongly correlated with its actual effects.

They will simply observe that, while there are certainly ways in which government regulation could make the situation worse, such as by speeding things up, very often the effect of regulations is to slow things down, instead.  The libertarian is not going to be confused about the likelihood that government-mandated evals will successfully catch and stop an unaligned ASI from being deployed, should one be developed.  They will not make the mistake of thinking that the government will miraculously solve ethics and philosophy, and provide us with neat guardrails to ensure that progress goes in the right direction.

To a first approximation, all they care about is buying time to solve the actual (technical) problem.

Eliezer's ask is quite specific, but targeted at the same basic endpoint: institute a global moratorium on AI training runs over a certain size, in order to (run a crash program on augmenting human intelligence, so that you can) solve the technical alignment problem before someone accidentally builds an unaligned ASI.

If you want to argue that these people are making a mistake according to their own values and starting with their premises, arguing that governments tend to mess up whatever they touch is a non-sequitur.  Yes, they do - in pretty specific ways!

Arguments that might actually address the cruxes of someone in this reference class might include:

  • The distribution of outcomes from government interventions is so likely to give you less time, or otherwise make it more difficult to solve the technical alignment problem, that there are fewer surviving worlds where the government intervenes as a result of you asking them to, compared to the counterfactual.

Arguments that are not likely to be persuasive, since they rely on premises that most people in this reference class think are very unlikely to be true:

  • Centralization of power (as is likely to result from many possible government interventions) is bad.

 

Thanks to Drake Thomas for detailed feedback.  Thanks also to Raemon, Sam, and Adria for their thoughts & suggestions.

  1. ^

    Which are often also negative!

  2. ^

    As always, this is a small minority of those participating in conversations on the subject.  Yes, I am asking you to ignore all the terrible arguments in favor of evaluating the good arguments.

  3. ^

    To the extent that it's meaningful to ascribe intent to regulations, anyways - maybe useful to instead think of the intent of those who were responsible for the regulation's existence.

  4. ^

    Example provided by Drake Thomas: someone could have 10-year timelines with a 90% chance of paperclips and a 10% chance of tech-CEO-led utopia, while government intervention leads to 20-year timelines, an 80% chance of paperclips, and a 20% chance of alignment to someone, but with a 75% probability of authoritarian dystopia conditional on someone being able to align an AI.  Under this model, they wouldn't want regulations, because regulations move the good worlds from 10% to 5%, even though the technical problem is more likely to get solved.
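    Spelling out the arithmetic in this hypothetical (a worked restatement of the numbers above; the event labels are just names for the branches described): without intervention the only good outcome is the 10% utopia branch, while with intervention the good outcomes are the worlds that end up both aligned and non-dystopian.

```latex
\begin{align*}
P(\text{good} \mid \text{no intervention}) &= 0.10 \\
P(\text{good} \mid \text{intervention})
  &= P(\text{aligned}) \cdot \bigl(1 - P(\text{dystopia} \mid \text{aligned})\bigr) \\
  &= 0.20 \cdot (1 - 0.75) = 0.05
\end{align*}
```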

13 comments


comment by benjamincosman · 2023-06-15T05:25:43.963Z · LW(p) · GW(p)

Who is advocating for regulations?

...

Non-libertarians...tend to have somewhat more optimistic views on how likely things are to go well

...

Libertarians...tend to have more pessimistic views on our likelihood of making it through.

This claimed correlation between libertarianism and pessimism seemed surprising to me until I noticed that actually since we are conditioning on advocating-for-regulations, Berkson's Bias would make this correlation appear even in a world where libertarianism and pessimism were completely uncorrelated in the general population.

comment by localdeity · 2023-06-15T06:17:43.100Z · LW(p) · GW(p)

Berkson's Bias seems to be where you're getting a subset of people that are some combination of trait X and trait Y; that is, to be included in the subset, X + Y > threshold.  Here, "> threshold" seems to mean "willing to advocate for regulations".  It seems reasonably clear that "pessimism (about the default course of AI)" would make someone more willing to advocate for regulations, so we'll call that X.  Then Y is ... "being non-libertarian", I guess, since probably the more libertarian someone is, the more they hate regulations.  Is that what you had in mind?

I would probably put it as "Since libertarians generally hate regulations, a libertarian willing to resort to regulations for AI must be very pessimistic about AI."
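A minimal simulation sketch of this selection effect (the variable names, distributions, and threshold here are illustrative assumptions, not anything specified in the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Two traits, independent in the general population (per the hypothetical):
# x = pessimism about the default course of AI, y = non-libertarianism.
x = rng.normal(size=n)
y = rng.normal(size=n)

print("corr(pessimism, non-libertarianism), everyone:  %+.3f" % np.corrcoef(x, y)[0, 1])

# Keep only the people willing to advocate for regulations: x + y > threshold.
advocates = (x + y) > 1.0
print("corr(pessimism, non-libertarianism), advocates: %+.3f"
      % np.corrcoef(x[advocates], y[advocates])[0, 1])

# Among advocates the correlation comes out negative: the libertarian (low-y)
# advocates are disproportionately the very pessimistic (high-x) ones, even
# though the two traits are uncorrelated in the population as a whole.
```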

comment by RobertM (T3t) · 2023-06-15T05:45:40.717Z · LW(p) · GW(p)

Yeah, that seems like a plausible contributor to that effect.

Edit: though I think this is true even if you ignore "who's calling for regulations" and just look at the relative optimism of various actors in the space, grouped by their politics.

comment by Robert Miles (robert-miles) · 2023-06-15T23:30:37.838Z · LW(p) · GW(p)

A slightly surreal experience to read a post saying something I was just tweeting about, written by a username that could plausibly be mine.

comment by RobertM (T3t) · 2023-06-16T00:24:39.261Z · LW(p) · GW(p)

Your argument with Alexandros was what inspired this post, actually.  I was thinking about whether or not to send this to you directly... guess that wasn't necessary.

comment by johnswentworth · 2023-06-15T00:44:28.535Z · LW(p) · GW(p)

Y'know, I didn't realize until reading this that I hadn't seen a short post spelling it out before. The argument was just sort of assumed background in a lot of conversations. Good job noticing and spelling it out.

comment by Logan Zoellner (logan-zoellner) · 2023-06-15T13:34:05.613Z · LW(p) · GW(p)

Centralization of power (as is likely to result from many possible government interventions) is bad

 

Suppose that you expected AI research to rapidly reach the point of being able to build Einstein/Von Neumann level intelligence and thereafter rapidly stagnate.  In this world, would you be able to see why centralization is bad?

It seems like you're not doing a very good Ideological Turing Test if you can't answer that question in detail.

comment by RobertM (T3t) · 2023-06-15T22:40:43.569Z · LW(p) · GW(p)

The question is not whether I can pass their ITT: that particular claim doesn't obviously engage with any cruxes that I, or others like me, have related to x-risk.  That's the only thing that section is describing.

comment by Logan Zoellner (logan-zoellner) · 2023-06-15T23:33:06.821Z · LW(p) · GW(p)

I think maybe you misunderstand the word "crux".  A crux is a point where you and another person disagree.  If you're saying you can't understand why Libertarians think centralization is bad, that IS a crux and trying to understand it would be a potentially useful exercise.

comment by RobertM (T3t) · 2023-06-16T00:28:53.865Z · LW(p) · GW(p)

If you're saying you can't understand why Libertarians think centralization is bad, that IS a crux and trying to understand it would be a potentially useful exercise.

I am not saying that.  Many libertarians think that centralization of power often has bad effects.  But trying to argue with libertarians who are advocating for government regulations because they're worried about AI x-risk by pointing out that government regulation will increase centralization of power w.r.t. AI is a non-sequitur, unless you do a lot more work to demonstrate how the increased centralization of power runs contrary to the libertarian's goals in this case.

comment by Matthew Barnett (matthew-barnett) · 2023-06-16T01:28:42.128Z · LW(p) · GW(p)

Arguments that might actually address the cruxes of someone in this reference class might include: [...]

The distribution of outcomes from government interventions is so likely to give you less time, or otherwise make it more difficult to solve the technical alignment problem, that there are fewer surviving worlds where the government intervenes as a result of you asking them to, compared to the counterfactual.

The thing I care more about is quality-adjusted effort, rather than time to solve alignment. For example, I'd generally prefer 30 years to solve alignment with 10 million researchers to 3000 years with 10 researchers, all else being equal. Quality-adjusted effort comes from a few factors:

  • How good current AIs are, with the idea being that we're able to make more progress when testing alignment ideas on AIs that are closer to dangerous-level AGI.
  • The number of talented people working on the problem, with more generally being better.

I expect early delays to lead to negligible additional alignment progress during the delay, relative to future efforts. For example, halting semiconductor production in 2003 for a year to delay AI would have given us almost no additional meaningful alignment progress. I think the same is likely true for 2013 and even 2018. The main impact would just be to delay everything by a year. 

In the future I expect to become more optimistic about the merits of delaying AI, but right now I'm not so sure. I think some types of delays might be productive, such as delaying deployment by requiring safety evaluations. But I'm concerned about other types of delays that don't really give us any meaningful additional quality-adjusted effort. 

In particular, the open letter asking for an AI pause appeared to advocate what I consider the worst type of delay: a delay on starting the training of giant models. This type of delay seems least valuable to me for two main reasons. 

The first reason is that it wouldn't significantly slow down algorithmic progress, meaning that after the pause ended, people could likely just go back to training giant models almost like nothing happened. In fact, if people anticipate the pause ending, then they're likely to invest heavily and then start their training runs on the date the pause ends, which could lead to a significant compute overhang, and thus sudden progress. The second reason is that, compared to a delay of AI deployment, delaying the start of a training run reduces the quality-adjusted effort that AI safety researchers have, as a result of preventing them from testing alignment ideas on more capable models.

If you think that there are non-negligible costs to delaying AI from government action for any reason, then I think it makes sense to be careful about how and when you delay AI, since early and poorly targeted delays may provide negligible benefits. However, I agree that this consideration becomes increasingly less important over time.

comment by the gears to ascension (lahwran) · 2023-06-15T21:56:39.075Z · LW(p) · GW(p)

For what it's worth, I think most people I know expect most professed values to be violated most of the time, and so they think that libertarians advocating for this is perfectly ordinary; the surprising thing would be if professed libertarians weren't constantly showing up to advocate for regulating things. Show, don't tell, in politics and ideology. That's not to say professing values is useless, just that there's not an inconsistency to be explained here. If I linked people in my circles to this post, they'd respond with an eyeroll at the suggestion that if only they were more libertarian they'd be honest - because the name is most associated with people using it to lie.

comment by Lara (lara-1) · 2023-06-16T14:45:25.046Z · LW(p) · GW(p)

Maybe a consideration worth raising:

It seems plausible that people sometimes advocate for things in political realms even when they don't actually support them.

Strategically, it would make sense for an AI lab that opposes regulation that endangers its business model to publicly support, or even ask for, regulation in general, in order to seem more credible when opposing a specific regulatory proposal.  It can then say "We're not against regulation in general, in fact we've been calling for it for a long time, but this specific proposal is bad".