Six AI Risk/Strategy Ideas

post by Wei Dai (Wei_Dai) · 2019-08-27T00:40:38.672Z · LW · GW · 17 comments

Contents

  The "search engine" model of AGI development
  Coordination as an AGI service
  Multiple simultaneous DSAs under CAIS
  Logical vs physical risk aversion
  Combining oracles with human imitations
  "Generate evidence of difficulty" as a research purpose
17 comments

AI risk ideas are piling up in my head (and in my notebook) faster than I can write them down as full posts, so I'm going to condense multiple posts into one again [LW · GW]. I may expand some or all of these into full posts in the future. References to prior art are also welcome as I haven't done an extensive search myself yet.

The "search engine" model of AGI development

The current OpenAI/DeepMind model of AGI development (i.e., fund research using only investor / parent company money, without making significant profits) isn't likely to be sustainable, assuming a soft takeoff, but the "search engine" model very well could be. In the "search engine" model, a company (and eventually the AGI itself) funds AGI research and development by selling AI services, while keeping its technology secret. At some point it achieves a DSA (decisive strategic advantage) either by accumulating a big enough lead in AGI technology and other resources to win an open war against the rest of the world, or by being able to simultaneously subvert a large fraction of all cognition done on Earth (i.e., all the AI services that it is offering), causing that cognition to suddenly optimize for its own interests. (This was inspired by / a reply to Daniel Kokotajlo's Soft takeoff can still lead to decisive strategic advantage [LW · GW].)

Coordination as an AGI service

As a refinement of the above, to build a more impregnable monopoly via network effects, the AGI company could offer "coordination as a service", where it promises that any company that hires its AGI as CEO will efficiently coordinate in some fair way with all other companies that also hire its AGI as CEO. See also my AGI will drastically increase economies of scale [LW · GW].

Multiple simultaneous DSAs under CAIS

Suppose CAIS turns out to be a better model than AGI. Many AI services may be natural monopolies, each with a large market share in its niche. If many high-level AI services all use one particular low-level AI service, that lower-level service (or rather the humans or higher-level AI services that have write access to it) could achieve a decisive strategic advantage by subverting the service in a way that causes a large fraction of all cognition on Earth (i.e., all the higher-level services that depend on it) to start optimizing for its own interests. Multiple different lower-level services could simultaneously have this option. (This was inspired by a comment from ryan_b [LW(p) · GW(p)].)

Logical vs physical risk aversion

Some types of risks may be more concerning than others because they are "logical risks" or highly correlated between Everett branches. Suppose Omega appears and says he is appearing in all Everett branches where some version of you exists and offering you the same choice: If you choose option A, he will destroy the universe if the trillionth digit of pi equals the trillionth digit of e, and if you choose option B, he will destroy the universe if a quantum RNG returns 0 when generating a random digit. It seems to me that option B is better because it ensures that there's no risk of all Everett branches being wiped out. See The Moral Status of Independent Identical Copies for my intuitions behind this. (How much more risk should we accept under option B before we're indifferent between the two options?)
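
As a minimal sketch of how to make that last question concrete, assume utility depends only on the fraction of Everett branches that survive, normalized so that u(0) = 0 and u(1) = 1, and treat the pi/e digit match as having subjective probability 1/10. Then option B's per-branch risk q can rise above 1/10 before indifference, by an amount that depends on how concave u is:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Sketch under the assumptions above: option A destroys every branch with
% subjective probability 1/10; option B destroys a fraction q of branches
% for sure (via independent quantum randomness in each branch).
\begin{align*}
  \mathbb{E}[U_A] &= \tfrac{9}{10}\,u(1) + \tfrac{1}{10}\,u(0) = \tfrac{9}{10},\\
  \mathbb{E}[U_B] &= u(1 - q),\\
  \text{indifference: } u(1 - q) &= \tfrac{9}{10}
    \;\Longrightarrow\; q = 1 - u^{-1}\!\left(\tfrac{9}{10}\right).
\end{align*}
% A linear u gives q = 1/10 (no preference between A and B); a concave u
% gives q > 1/10, e.g. u(x) = sqrt(x) gives q = 1 - 0.81 = 0.19.
\end{document}
```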

More realistic examples of logical risks:

  1. AI safety requires solving metaphilosophy.
  2. AI safety requires very difficult global coordination.
  3. Dangerous synthetic biology is easy.

Examples of physical risks:

  1. Global nuclear war
  2. Natural pandemic
  3. Asteroid strike
  4. AI safety doesn't require very difficult global coordination but we fail to achieve sufficient coordination anyway for idiosyncratic reasons.

Combining oracles with human imitations

It seems very plausible that oracles/predictors [LW · GW] and human imitations [LW · GW] (which can be thought of as a specific kind of predictor) are safer (or more easily made safe) than utility maximizers or other kinds of artificial agents. Each has disadvantages, though. Oracles need a human in the loop to perform actions, which is slow and costly, leading to a competitive disadvantage versus AGI agents. Human imitations can be faster and cheaper than humans but not smarter, which is also a competitive disadvantage versus AGI agents. Combining the two ideas can result in a more competitive (and still relatively easy to make safe) agent. (See this comment [LW(p) · GW(p)] for an example.) This is not a particularly novel idea, since arguably quantilizers and IDA already combine oracles/predictors and human imitations to achieve superintelligent agency, but it still seems worth writing down explicitly.
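
As a rough illustration, here is a minimal sketch of one way the two pieces might be wired together; oracle_predict and imitate_human_evaluation are hypothetical stand-ins for trained models, and the only point is that the imitated human, rather than a real one, closes the loop at machine speed:

```python
"""A rough sketch of combining an oracle/predictor with a human-imitation
model so that no real human is needed in the action loop. The two model
functions below are hypothetical stubs so the example runs."""

from typing import List


def oracle_predict(state: str, action: str) -> str:
    """Hypothetical oracle/predictor: forecasts what would happen if `action`
    were taken in `state`. Stub implementation for illustration only."""
    return f"predicted outcome of {action!r} in {state!r}"


def imitate_human_evaluation(state: str, action: str, forecast: str) -> float:
    """Hypothetical human-imitation model: returns the approval score a
    trusted human overseer would give this action, based on the oracle's
    forecast. Stub implementation for illustration only."""
    return float(len(forecast))  # placeholder scoring


def act(state: str, candidate_actions: List[str]) -> str:
    """Select the candidate action the imitated human most approves of, using
    the oracle's forecasts instead of waiting for a real human's judgment."""
    scored = [
        (imitate_human_evaluation(state, a, oracle_predict(state, a)), a)
        for a in candidate_actions
    ]
    return max(scored)[1]


if __name__ == "__main__":
    print(act("initial state", ["do nothing", "gather more information"]))
```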

"Generate evidence of difficulty" as a research purpose

How to handle the problem of AI risk is one of the most important and consequential strategic decisions facing humanity, if not the most important. If we err in the direction of too much caution, then in the short run resources are diverted into AI safety projects that could instead go to other x-risk efforts, and in the long run billions of people could unnecessarily die while we hold off on building "dangerous" AGI and wait for "safe" algorithms to come along. If we err in the opposite direction, well, presumably everyone here already knows the downside there.

A crucial input into this decision is the difficulty of AI safety, and the obvious place for decision makers to obtain evidence about that difficulty is from technical AI safety researchers (and AI researchers in general), but it seems that not many people have given much thought to how to optimize the production and communication of such evidence (leading to communication gaps like this one [LW(p) · GW(p)]). (As another example, many people do not seem to consider that doing research on a seemingly intractably difficult problem can be valuable because it can at least generate evidence of the difficulty of that particular line of research.)

The evidence can be in the form of:

  1. Official or semi-official consensus of the field
  2. Technical arguments about the difficulty of AI safety
  3. "AI Safety Experts" who can state or explain the difficulty of AI safety to a wider audience
  4. Amount of visible progress in AI safety per unit of resources expended
  5. How optimistic or pessimistic safety researchers seem when they talk to each other or to outside audiences

Bias about the difficulty of AI safety is costly/dangerous, so we should think about how to minimize this bias while producing evidence of difficulty. Some possible sources of bias:

  1. Personal bias (due to genetics, background, etc.)
  2. Selection effects (people who think AI safety is not worth working on, because it's either too easy or too hard, tend to go into other fields)
  3. Incentives (e.g., your job or social status depends on AI safety not being too easy or too hard)

17 comments

Comments sorted by top scores.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-27T03:19:01.984Z · LW(p) · GW(p)

I particularly like your "Logical vs. physical risk aversion" distinction, and agree that we should prioritize reducing logical risk. I think acausal trade makes this particularly concrete. If we make a misaligned superintelligence that "plays nice" in the acausal bargaining community, I'd think that's better than making an aligned superintelligence that doesn't, because overall it matters far more that the community is nice than that it have a high population of people with our values.

I also really like your point about how providing evidence that AI safety is difficult may be one of the most important reasons to do AI safety research. I guess I'd like to see some empirically grounded analysis of how likely it is that the relevant policymakers and so forth will be swayed by such things. So far it seems like they've been swayed by direct arguments that the problem is hard, and not so much by our failures to make progress. If anything, the failure of AI safety researchers to make progress seems to encourage their critics.

comment by habryka (habryka4) · 2021-01-11T02:42:51.039Z · LW(p) · GW(p)

I have now linked at least 10 times to the "'Generate evidence of difficulty' as a research purpose" section of this post. It was a thing that I kind of wanted to point to before this post came out but felt confused about, and this post finally gave me a pointer to it.

I think that section was substantially more novel and valuable to me than the rest of this post, but it is also evidence that others might not have had some of the other ideas on their map, and so might have found it similarly valuable because of a different section.

comment by cousin_it · 2019-08-27T07:30:20.114Z · LW(p) · GW(p)

Multiple simultaneous DSAs under CAIS

Taking over the world is a big enough prize, compared to the wealth of a typical agent, that even a small chance of achieving it should already be enough to act. And waiting is dangerous if there's a chance of other agents outrunning you. So multiple agents having DSA but not acting for uncertainty reasons seems unlikely.

Logical vs physical risk aversion

Imagine you care about the welfare of two koalas living in separate rooms. Given a choice between both koalas dying with probability 1/2 or a randomly chosen koala dying with probability 1, why is the latter preferable?

You could say our situation is different because we're the koala. Fine. Imagine you're choosing between a 1/2 physical risk and a 1/2 logical risk to all humanity, but both of them will happen in 100 years when you're already dead, so the welfare of your copies isn't in question. Why is the physical risk preferable? How is that different from the koala situation?

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-08-27T08:05:57.899Z · LW(p) · GW(p)

Taking over the world is a big enough prize, compared to the wealth of a typical agent, that even a small chance of achieving it should already be enough to act.

In CAIS, AI services aren't agents themselves, especially the lower level ones. If they're controlled by humans, their owners/operators could well be risk averse enough (equivalently, not assign high enough utility to taking over the world) to not take advantage of a DSA given their uncertainty.

Imagine you're choosing between a 1/2 physical risk and a 1/2 logical risk to all humanity, but both of them will happen in 100 years when you're already dead, so the welfare of your copies isn't in question. Why is the physical risk preferable?

I don't think it's possible for the welfare of my copies to not be in question. See this comment [LW(p) · GW(p)].

Another line of argument: suppose [LW · GW] we'll end up getting most of our utility from escaping simulations and taking over much bigger/richer universes. In those bigger universes we might eventually meet up with copies of us from other Everett branches and have to divide up the universe with them. So physical risk isn't as concerning in that scenario, because the surviving branches will end up with larger shares of the base universes.

A similar line of thought is that in an acausal trade scenario, each surviving branch of a physical risk could get a better deal because whatever thing of value they have to offer has become more scarce in the multiverse economy.

Replies from: cousin_it
comment by cousin_it · 2019-08-27T08:59:54.315Z · LW(p) · GW(p)

Many such intuitions seem to rely on "doors" between worlds. That makes sense - if we have two rooms of animals connected by a door, then killing all animals in one room will just lead to it getting repopulated from the other room, which is better than killing all animals in both rooms with probability 1/2. So in that case there's indeed a difference between the two kinds of risk.

The question is, how likely is a door between two Everett branches, vs. a door connecting a possible world with an impossible world? With current tech, both are impossible. With sci-fi tech, both could be possible, and based on the same principle (simulating whatever is on the other side of the door). But maybe "quantum doors" are more likely than "logical doors" for some reason?

Replies from: evhub, Wei_Dai
comment by evhub · 2019-08-28T19:07:24.255Z · LW(p) · GW(p)

Another argument that definitely doesn't rely on any sort of "doors" for why physical risk might be preferable to logical risk is just if you have diminishing returns on the total number of happy humans. As long as your returns to happy humans are sublinear (logarithmic is a standard approximation, though anything sublinear works), then you should prefer a guaranteed shot at half of the Everett branches having lots of happy humans to a 1/2 chance of all the Everett branches having happy humans. To see this, suppose u measures your returns to the total number of happy humans across all Everett branches. Let H be the total number of happy humans in a good Everett branch and N the total number of Everett branches. Then, in the physical risk situation, you get u(NH/2), whereas, in the logical risk situation, you get (1/2)u(NH), which are only equal if u is linear. Personally, I think my returns are sublinear, since I pretty strongly want there to at least be some humans, more strongly than I want there to be more humans, though I want that as well. Furthermore, if you believe there's a chance that the universe is infinite, then you should probably be using some sort of measure over happy humans rather than just counting the number, and my best guess for what such a measure might look like [LW · GW] seems to be at least somewhat locally sublinear.
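
As a quick numerical check of the sublinear case, take u(x) = sqrt(x) as an illustrative choice:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative check with the sublinear choice u(x) = sqrt(x):
\[
  u\!\left(\tfrac{NH}{2}\right) = \sqrt{\tfrac{NH}{2}} \approx 0.707\,\sqrt{NH}
  \;>\;
  \tfrac{1}{2}\,u(NH) = 0.5\,\sqrt{NH},
\]
% so the guaranteed half of the branches (physical risk) beats the 50/50
% gamble on all of them (logical risk), as claimed above.
\end{document}
```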

comment by Wei Dai (Wei_Dai) · 2019-08-29T07:38:31.952Z · LW(p) · GW(p)

So you're saying that (for example) there could be a very large universe that is running simulations of both possible worlds and impossible worlds, and therefore even if we go extinct in all possible worlds, versions of us that live in the impossible worlds could escape into the base universe, so the effect of a logical risk would be similar to that of a physical risk of equal magnitude (if we get most of our utility from controlling/influencing such base universes). Am I understanding you correctly?

If so, I have two objections to this. 1) Some impossible worlds seem impossible to simulate. For example, suppose that in the actual world AI safety requires solving metaphilosophy. How would you simulate an impossible world in which AI safety doesn't require solving metaphilosophy? 2) Even for the impossible worlds that maybe can be simulated (e.g., where the trillionth digit of pi is different from what it actually is), it seems that only a subset of the reasons [LW · GW] for running simulations of possible worlds would apply to impossible worlds, so I'm a lot less sure that "logical doors" exist than I am that "quantum doors" exist.

Replies from: cousin_it
comment by cousin_it · 2019-08-29T14:07:15.416Z · LW(p) · GW(p)

It seems to me that AI will need to think about impossible worlds anyway - for counterfactuals, logical uncertainty, and logical updatelessness/trade. That includes worlds that are hard to simulate, e.g. "what if I try researching theory X and it turns out to be useless for goal Y?" So "logical doors" aren't that unlikely.

comment by Buck · 2019-08-29T05:02:46.939Z · LW(p) · GW(p)

Minor point: I think asteroid strikes are probably very highly correlated between Everett branches (though maybe the timing of spotting an asteroid on a collision course is variable).

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2019-08-29T05:29:39.219Z · LW(p) · GW(p)

I think if we could look at all the Everett branches that contain some version of you, we'd see "bundles" where the asteroid locations are the same within each bundle but different between bundles, because different bundles evolved from different starting conditions (and then converged in terms of having produced someone who is subjectively indistinguishable from you). So a big asteroid strike would wipe out humanity in an entire bundle but that would only constitute a small fraction of all the Everett branches that contain a version of you.

Hopefully that makes sense?

Replies from: Buck
comment by Buck · 2019-12-02T00:28:38.110Z · LW(p) · GW(p)

Ah yes this seems totally correct

comment by habryka (habryka4) · 2019-08-27T02:31:11.238Z · LW(p) · GW(p)

(Edit note: Fixed a broken link that went to https://www.lesswrong.com/posts/sM2sANArtSJE6duZZ/where-are-people-thinking-and-talking-about-global/answer/rgnRPkCkdqQmSNrg2. [? · GW] My guess is that this is some kind of Greaterwrong-specific syntax that doesn't work on general lesswrong)

Replies from: clone of saturn
comment by clone of saturn · 2019-08-28T01:14:25.955Z · LW(p) · GW(p)

Oops, these links should be translated properly now.

comment by Ben Pace (Benito) · 2020-12-15T05:24:18.302Z · LW(p) · GW(p)

The first three examples here have been pretty helpful to me in considering how DSAs and takeoffs will go and why they may be dangerous.

comment by habryka (habryka4) · 2020-12-15T05:22:31.002Z · LW(p) · GW(p)

I've referred specifically to the "'Generate evidence of difficulty' as a research purpose" section many times since this post came out, and while I have disagreements with it, I do really like it as a handle for a consideration that I hadn't previously seen written up, and it does strike me as quite important.

comment by Rohin Shah (rohinmshah) · 2019-12-29T02:38:14.521Z · LW(p) · GW(p)

Planned summary for the Alignment Newsletter:

This post briefly presents three ways that power can become centralized in a world with <@Comprehensive AI Services@>(@Reframing Superintelligence: Comprehensive AI Services as General Intelligence@), argues that under risk aversion "logical" risks can be more concerning than physical risks because they are more correlated, proposes combining human imitations and oracles to remove the human from the loop and become competitive, and suggests doing research to generate evidence of difficulty of a particular strand of research.

comment by Gurkenglas · 2019-08-27T03:08:10.600Z · LW(p) · GW(p)

"Generate evidence of difficulty" as a research purpose

Yes requires the possibility of no. There is a possible world in which safety is easy unless everyone believes this. This unfortunate dynamic makes it awkward to point truth seekers anywhere near that direction.