Safety-capabilities tradeoff dials are inevitable in AGI

post by Steven Byrnes (steve2152) · 2021-10-07T19:03:22.334Z

Contents

  1. Background (if it’s not obvious): why do safety-capabilities tradeoff dials stink?
  2. Why do I say that these dials are inevitable?
  3. A better way to think about it: “alignment tax”

A safety-capabilities tradeoff is when you have something like a dial on your AGI, and one end of the dial says “more safe but less capable”, and the other end of the dial says “less safe but more capable”.
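To make this concrete, here is a minimal toy sketch (with invented functional forms and numbers, purely for illustration) of a dial as a single parameter that buys capability at the price of accident risk:

```python
# Toy picture, all numbers invented: the dial is a single knob in [0, 1],
# where 0 = "more safe but less capable" and 1 = "less safe but more capable".

def capability(dial: float) -> float:
    """Capability grows as the dial moves toward the risky end."""
    return 0.2 + 0.8 * dial

def accident_risk(dial: float) -> float:
    """So does the chance of a catastrophic accident."""
    return 0.01 + 0.99 * dial ** 2

for d in (0.0, 0.5, 1.0):
    print(f"dial={d:.1f}  capability={capability(d):.2f}  risk={accident_risk(d):.2f}")
```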

Make no mistake: safety-capabilities tradeoff dials stink. But I want to argue that they are inevitable, and we’d better get used to them. I will argue that the discussion should be framed as “Just how problematic is this dial? How do we minimize its negative impact?”, not “This particular approach has a dial, so it’s automatically doomed. Let’s throw it out and talk about something else instead.”

(Recent examples of the latter attitude, at least arguably: here, here.)

1. Background (if it’s not obvious): why do safety-capabilities tradeoff dials stink?

The biggest problem is that the world has ambitious people and competitive people and optimistic people and people desperate to solve problems (like climate change, or the fact that their child is dying of cancer, or the fact that their company is about to go bust and they’ll have to lay off thousands of employees who are depending on them, etc. etc.). Also, we don’t have research liability insurance that internalizes the societal cost of catastrophic accidents. And even if we did, I don’t think that would work for catastrophes so bad that no humans would be left to sue the insurer, or so weird and new that nobody knows how to price them, or if the insurance is so expensive that AGI research shifts to a country where insurance is not required, or shifts underground, etc.

So I expect by default that a subset of people would set their dial to “less safe but more capable”, and then eventually we get catastrophic accidents with out-of-control AGIs self-replicating around the internet and all that.

2. Why do I say that these dials are inevitable?

Here are a few examples:

- Oversight: the more often a human has to inspect and approve the AGI’s plans before they’re executed, the safer the system, but also the slower and less capable.
- Boxing: the more you restrict the AGI’s access to the internet and to real-world actuators, the safer it is, but the less useful work it can do.
- Conservatism: the more the AGI is required to stick to familiar, easy-to-evaluate actions, the safer it is, but the less capable.

3. A better way to think about it: “alignment tax”

I like the "alignment tax" framing (the terminology comes from Eliezer Yudkowsky via Paul Christiano). (Maybe it would have been slightly better to call it “safety tax” in the context of this post, but whatever.)

So if we’re talking about a proposal that entails a safety-capabilities tradeoff dial, the first questions I would ask are:

- Does getting acceptable safety require turning the dial all the way to “maximally safe”?
- And at that setting, is the AGI so hobbled that it can’t do anything useful at all, i.e. is the alignment tax infinite?

If yes to both, that’s VERY BAD. If any safety proposal is like that, it goes straight into the garbage!

Then the next question to ask is: if the alignment tax is less than infinity, that’s a good start, but just how high or low is the tax? There’s no right answer anymore: it’s just “The lower the better”.
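As a toy formalization of “how high is the tax” (nothing standard, just for concreteness), one could measure the tax as the relative capability forgone by running at the safe setting, so that it diverges exactly when the safe setting is useless:

```python
# Illustrative (invented) way to quantify the alignment tax: the extra
# capability an actor forgoes by running at the safe end of the dial.

def alignment_tax(cap_at_safe: float, cap_at_max: float) -> float:
    """Relative capability forgone; diverges as the safe setting becomes
    useless (cap_at_safe -> 0), i.e. the 'infinite tax' case above."""
    return cap_at_max / cap_at_safe - 1.0

print(alignment_tax(cap_at_safe=95.0, cap_at_max=100.0))  # ~0.05: a ~5% tax
print(alignment_tax(cap_at_safe=1.0, cap_at_max=100.0))   # 99.0: the safe setting is nearly useless
```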

After all, we would like a world of perfect compliance, where all relevant actors are always paying the alignment tax in order to set their dial to “safe”. I don’t think this is utterly impossible. There are three things that can work together to make it happen:

- Technical AGI safety research can drive the alignment tax down, and the lower the tax, the more actors will willingly pay it.
- Most actors have at least some intrinsic willingness to comply: they don’t want a catastrophe, or an AGI that is out of their own control, any more than the rest of us do.
- AGI strategy / governance researchers can figure out how to get everyone to pay the tax anyway: ______ (a blank I hope they will fill in).

(Thanks Alex Turner & Miranda Dixon-Luinenburg for comments on a draft.)

4 comments


comment by Koen.Holtman · 2021-10-08T11:46:46.628Z

I agree with your central observation that safety-capabilities tradeoff dials are inevitable in AGI. It is useless to search only for safety mechanisms that every purely selfish owner of an AGI will voluntarily use at a safe setting, even when such selfish owners are engaged in an arms race.

However, I disagree with another point you make:

Then the next question to ask is: if the alignment tax is less than infinity, that’s a good start, but just how high or low is the tax? There’s no right answer anymore: it’s just “The lower the better”.

The right answer is definitely not 'the lower the better'. I'd say that you are framing the cost of the alignment tax as 'the tax is higher if it gets more in the way of selfish people's desire to fully pursue their selfish ends'. In this, you are following the usual 'alignment mechanisms need to be commercially viable' framing on this forum. But with this limited framing, lower alignment taxes will not necessarily imply better outcomes for all of society.

What you need to do is examine the blank in your third bullet point above more carefully, the blank line you expect the AGI strategy / governance folks to hopefully fill in. What these AGI strategy / governance folks are looking for is clear: they want to prevent destructive arms races, and races to the bottom on safety, between somewhat selfish actors, where these actors may be humans or organisations run by humans. This prevention can only happen if there is a countervailing force to all this selfishness.

(The alternative to mounting a countervailing force would be to put every selfish human in a re-education camp and hope that it works. Not the kind of solution the strategy/governance folks are looking for, but I am sure there is an SF novel somewhere about a re-education camp run by an AGI. Another alternative would be to abandon the moral justification for the market system and markets altogether, i.e. abandon the idea of having any system of resource allocation in which self-interested actions can be leveraged to promote the common good. Again, not really what most governance folks are looking for.)

Strategy/governance folks are not necessarily looking for low-cost dials as you define them; they are looking for AGI safety dials that they can force all selfish actors to use, as part of the law or of a social contract (where a social contract can be between people, but also between governments or government-shaped entities). To enforce the use of a certain dial at a certain setting, one must be able to detect whether an actor is breaking the law or social contract by not using the dial, so that sanctions can be applied. The easier and more robust this detection process is, the more useful the dial is.

The cost to the selfish actors of being forced to use the dial is only a secondary consideration. The less self-aware among these selfish actors can be relied on to always complain about being bound to any law or social contract whatsoever.

comment by Steven Byrnes (steve2152) · 2021-10-08T13:03:35.010Z

The part I'll agree with is: if we look at a dial, we can ask the question:

If there's an AGI with a safety-capabilities tradeoff dial, to what extent is the dial's setting externally legible / auditable to third parties?

More legible / auditable is better, because it could help enforcement.

I agree with this, and I have just added it to the article. But I disagree with your suggestion that this is counter to what I wrote. In my mind, it's an orthogonal dimension along which dials can vary. I think it's good if the dial is auditable, and I think it's also good if the dial corresponds to a very low alignment tax rate.

I interpret your comment as saying that the alignment tax rate doesn't matter because there will be enforcement, but I disagree with that. I would invoke an analogy to actual taxes. It is already required and enforced that individuals and companies pay (normal) taxes. But everyone knows that a 0.1% tax on Thing X will have a higher compliance rate than an 80% tax on Thing X, other things equal.

After all, everyone is making decisions about whether to pay the tax, versus not pay the tax. Not paying the tax has costs. It's a cost to hire lawyers that can do complicated accounting tricks. It's a cost to run the risk of getting fined or imprisoned. It's a cost to pack up your stuff and move to an anarchic war zone, or to a barge in the middle of the ocean, etc. It's a cost to get pilloried in the media for tax evasion. People will ask themselves: are these costs worth the benefits? If the tax is 0.1%, maybe it's not worth it, maybe it's just way better to avoid all that trouble by paying the tax. If the tax is 80%, maybe it is worth it to engage in tax evasion.
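Here is that cost-benefit comparison as a toy model (all numbers invented): an actor pays the tax exactly when compliance is cheaper than the all-in cost of evasion.

```python
# Toy compliance decision, invented numbers: pay the tax iff paying it
# costs less than the combined costs of evading it.

def complies(tax_rate: float, base_value: float, evasion_cost: float) -> bool:
    """True if paying the tax is cheaper than dodging it."""
    return tax_rate * base_value < evasion_cost

# Lump the lawyers, the risk of fines, the barge in the ocean, the bad
# press, etc. into one fixed evasion cost.
EVASION_COST = 20.0

for rate in (0.001, 0.80):
    print(f"tax rate {rate:.1%}: complies = {complies(rate, 100.0, EVASION_COST)}")
# tax rate 0.1%: complies = True   (cost 0.1 vs 20, just pay it)
# tax rate 80.0%: complies = False (cost 80 vs 20, evasion starts to look worth it)
```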

So anyway, I agree that "there will be good enforcement" is plausibly part of the answer. But good enforcement plus low tax will sum up to higher compliance than good enforcement by itself. Unless you think "perfect watertight enforcement" is easy, so that "willingness to comply" becomes completely irrelevant. That strikes me as overly optimistic. Perfect watertight enforcement of anything is practically nonexistent in this world. Perfect watertight enforcement of experimental AGI research would strike me as especially hard. After all, AGI research is feasible to do in a hidden basement / anarchic war zone / barge in the middle of the ocean / secret military base / etc. And there are already several billion GPUs untraceably dispersed all across the surface of Earth.

comment by Koen.Holtman · 2021-10-08T14:34:41.627Z

Based on what you say above, I do not think we fundamentally disagree. There are orthogonal dimensions to safety mechanism design which are all important.

I singled out your line 'the lower the better' in part because I felt that your taxation framing was too one-dimensional.

There is another matter: in US/UK political discourse, it is common that if someone wants to prevent the government from doing something useful, this something will be framed as a tax, or as interfering with economic efficiency. If someone does want the government to actually do a thing, and in fact spend lavishly on doing it, the same thing will often be framed as enforcement. This observation says something about the quality of the political discourse, but as a continental European, I do not want to examine the quality of the discourse here, only its rhetorical implications.

When you frame your safety dials as taxation, you are rhetorically shooting yourself in the foot somewhat, if you want to proceed by arguing that these dials should not be thrown out of the discussion.

When the same dials are re-framed as enforcement, the cost of using them suddenly no longer sounds so problematic.

But enforcement that limits freedom of action is indeed a burden on those at the receiving end, and if enforcement is too heavy, they might seek to escape it altogether. I agree that perfectly inescapable, watertight enforcement is practically nonexistent in this world; in fact, I consider its non-existence to be more a desirable feature of society than a bug.

But to use your terminology, the level of enforcement applied to something is just one of these tradeoff dials that stink. That does not mean we should throw out the dial.

comment by adamShimi · 2021-10-22T08:25:01.987Z

Just posted an analysis of the epistemic strategies used in this post, which helps make the reasoning more explicit IMO.