In a multipolar scenario, how do people expect systems to be trained to interact with systems developed by other labs?

post by JesseClifton · 2020-12-01T20:04:18.197Z · LW · GW

This is a question post.

I haven't seen much discussion of this, but it seems like an important factor in how well AI systems deployed by actors with different goals manage to avoid conflict (cf. my discussion of equilibrium and prior selection problems here [AF · GW]).

For instance, would systems be trained 1) against copies of systems developed by other labs (which would presumably require coordination between the labs), or 2) only against copies of themselves or other agents developed by the same lab?

Answers

answer by Daniel Kokotajlo · 2020-12-07T09:51:41.963Z · LW(p) · GW(p)

My guess is that at some point we'll transition away from this "first we train, then we deploy" paradigm to one where systems are continually learning on the job. My guess is that insofar as powerful AIs play a role in a multipolar scenario, they'll be in this second paradigm. So in a sense they'll be learning from each other, though perhaps early in their training (i.e. prior to deployment) they were trained against copies of themselves or something. Unfortunately I doubt your case #1 will happen unless we advocate strongly for it; I think by the time these agents are this powerful, their code will be closely guarded. These are all just guesses, though; other scenarios are certainly plausible as well.

comment by JesseClifton · 2020-12-07T18:29:25.088Z · LW(p) · GW(p)

Makes sense. Though you could have deliberate coordinated training even after deployment. For instance, I'm particularly interested in the question of "how will agents learn to interact in high stakes circumstances which they will rarely encounter?" One could imagine the overseers of AI systems coordinating to fine-tune their systems in simulations of such encounters even after deployment. Not sure how plausible that is though.
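
To make that concrete, here is a minimal sketch of what such coordinated post-deployment fine-tuning might look like. Everything in it is invented for illustration: the shared simulator, the scenario names, the payoff numbers, and the REINFORCE-style update are assumptions, not anyone's actual setup.

```python
# Toy sketch: two labs periodically fine-tune their deployed policies on a shared
# simulator of rare high-stakes encounters. All scenarios, payoffs, and the update
# rule are made up; the point is only to illustrate the shape of the protocol.
import numpy as np

SCENARIOS = ["near_collision", "resource_standoff"]
ACTIONS = ["escalate", "de_escalate"]
# Payoff to a lab, indexed by (own action, other lab's action); mutual escalation is
# assumed to be very costly, mutual de-escalation mildly good.
PAYOFF = np.array([[-10.0, -2.0],
                   [ -2.0,  1.0]])

rng = np.random.default_rng(0)
prefs_a = np.zeros((len(SCENARIOS), len(ACTIONS)))  # lab A's fine-tunable preferences
prefs_b = np.zeros((len(SCENARIOS), len(ACTIONS)))  # lab B's fine-tunable preferences

def act(prefs, scenario_idx):
    p = np.exp(prefs[scenario_idx] - prefs[scenario_idx].max())
    p /= p.sum()
    return rng.choice(len(ACTIONS), p=p), p

for _ in range(2000):                       # periodic joint fine-tuning rounds
    s = rng.integers(len(SCENARIOS))        # rare encounter drawn from the shared simulator
    a, pa = act(prefs_a, s)
    b, pb = act(prefs_b, s)
    # each lab updates only its own policy, using the jointly simulated outcome
    prefs_a[s] += 0.05 * PAYOFF[a, b] * (np.eye(len(ACTIONS))[a] - pa)
    prefs_b[s] += 0.05 * PAYOFF[b, a] * (np.eye(len(ACTIONS))[b] - pb)

# With these made-up payoffs, de-escalation dominates, so both policies should
# drift toward "de_escalate" in every scenario.
print([ACTIONS[int(np.argmax(prefs_a[i]))] for i in range(len(SCENARIOS))])
print([ACTIONS[int(np.argmax(prefs_b[i]))] for i in range(len(SCENARIOS))])
```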

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-12-08T06:04:36.131Z · LW(p) · GW(p)

I totally agree it could be done; I'm just saying it probably won't happen without special effort on our part. Rivals are suspicious of each other, and would probably be suspicious of a proposal like this coming from their rival, if they are even concerned about the problem it is trying to fix at all.

answer by Ofer (ofer) · 2020-12-10T11:41:19.652Z · LW(p) · GW(p)

Some off-the-cuff thoughts:

It seems plausible that some transformative agents, such as social media feed-creation algorithms and algo-trading algorithms, will be trained exclusively on real-world data (without using simulated environments) [EDIT: by "data" I mean to include the observation/reward signal from the real-world environment in an online RL setup]. In such cases, the researchers don't choose how to implement the "other agents"; the other agents are just part of the real-world environment that the researchers don't control.
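
To illustrate the kind of setup I have in mind, here is a toy sketch of an online learning loop against a live data/reward stream, where other actors' systems show up only through that stream. `RealWorldFeed` and all the numbers are invented so the sketch runs; in the scenario described it would be a live market or feed-ranking environment the developer does not control.

```python
# Toy sketch of online RL on real-world data: the agent learns from a stream of
# observations and rewards; other actors' systems are an unmodeled part of that stream.
import numpy as np

class RealWorldFeed:
    """Stand-in for a live real-world data/reward stream (hypothetical)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
    def observe(self):
        return self.rng.normal(size=4)            # features of the current situation
    def reward(self, obs, action):
        other_agents_effect = self.rng.normal()   # behaviour the developer doesn't model
        return float(obs[action] + 0.1 * other_agents_effect)

feed = RealWorldFeed()
rng = np.random.default_rng(1)
weights = np.zeros((4, 4))                        # linear scores: 4 actions x 4 features

for t in range(1, 10_001):
    obs = feed.observe()
    if rng.random() < 0.1:                        # occasional exploration
        action = int(rng.integers(4))
    else:
        action = int(np.argmax(weights @ obs))
    r = feed.reward(obs, action)
    weights[action] += (0.01 / np.sqrt(t)) * r * obs   # simple online update
```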

Focusing on agents that are trained in simulated environments that involve multiple agents: for a lab to use copies of other labs' agents, the labs will probably need to cooperate (or some other process involving additional actors may need to exist). In any case, using copies of the agent that is being trained (i.e. self-play) seems very plausible to me. (Both AlphaZero and OpenAI Five were trained via self-play, and self-play is generally considered a very prominent technique for RL in simulated environments that involve multiple agents.)
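
As a toy illustration of the self-play option (nothing here is taken from AlphaZero or OpenAI Five; it is just a REINFORCE-style update against a frozen copy of the learner in rock-paper-scissors):

```python
# Toy self-play sketch: a single softmax policy over rock/paper/scissors actions is
# updated with a REINFORCE-style rule against a frozen copy of itself each step.
import numpy as np

PAYOFF = np.array([[ 0, -1,  1],    # row = my action, col = copy's action
                   [ 1,  0, -1],
                   [-1,  1,  0]])

rng = np.random.default_rng(0)
theta = rng.normal(size=3)           # logits of the policy being trained

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(5000):
    opponent_theta = theta.copy()    # "copy of the agent that is being trained"
    p_self, p_opp = softmax(theta), softmax(opponent_theta)
    a = rng.choice(3, p=p_self)
    b = rng.choice(3, p=p_opp)
    reward = PAYOFF[a, b]
    theta += 0.05 * reward * (np.eye(3)[a] - p_self)   # REINFORCE update

# In this zero-sum game, naive self-play tends to orbit the uniform (Nash) mixed
# strategy rather than converge cleanly to it.
print(softmax(theta))
```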

comment by Anthony DiGiovanni (antimonyanthony) · 2020-12-11T04:40:49.777Z · LW(p) · GW(p)

It seems plausible that some transformative agents, such as social media feed-creation algorithms and algo-trading algorithms, will be trained exclusively on real-world data (without using simulated environments). In such cases, the researchers don't choose how to implement the "other agents"; the other agents are just part of the real-world environment that the researchers don't control.

I have quite a different intuition on this, and I'm curious if you have a particular justification for expecting non-simulated training for multi-agent problems. Some reasons I expect otherwise:

  • At the very least, in the early days, you simply won't have much (accurate, up-to-date) data on the behavior of AI systems made by other labs because they're new. So you'll probably need to use some simulations and/or direct coordination with those other labs.
  • Even as the systems become more mature, if there's rapid improvement over time as seems likely, again the relevant data that would let you accurately predict the behavior of the other systems will be sparse.
  • Even if data are abundant, presumably for some strategic purposes developers won't want to design their AIs to behave so predictably that learning from such data will be sufficient. You'll probably need to use some combination of game-theoretically informed models and augmentation of the data to account for distributional shifts, if you want to put your AI to use in any task that involves interacting with AIs you didn't create (a rough sketch follows this list).
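
As a rough sketch of what that combination might look like, here is one way to sample training opponents from a mixture of an empirical model, perturbed versions of it, and a fallback. The opponent models, mixing weights, and the uniform "fallback" are all invented; a real system would plug in an actual equilibrium or worst-case model.

```python
# Toy sketch: instead of training only against an opponent model fitted to logged data,
# sample training opponents from a mixture of (a) the empirical model, (b) perturbed
# versions of it (augmentation for distributional shift), and (c) a fallback strategy
# standing in for a game-theoretically informed worst case.
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS = 3                                    # e.g. {aggressive, neutral, concessive}
empirical = np.array([0.2, 0.5, 0.3])            # action frequencies from logged data

def perturbed(base, scale=0.15):
    """Jitter the empirical model to cover plausible drift in the other AI's behaviour."""
    p = np.clip(base + rng.normal(0.0, scale, size=base.shape), 1e-3, None)
    return p / p.sum()

def sample_training_opponent():
    u = rng.random()
    if u < 0.4:
        return empirical                         # behave as observed
    if u < 0.8:
        return perturbed(empirical)              # observed-but-shifted
    return np.full(N_ACTIONS, 1.0 / N_ACTIONS)   # placeholder for an equilibrium/worst-case model

# A training loop would then draw a fresh opponent strategy for each episode:
for _ in range(3):
    print(sample_training_opponent())
```
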
Replies from: ofer
comment by Ofer (ofer) · 2020-12-11T06:31:11.230Z · LW(p) · GW(p)

I have quite a different intuition on this, and I'm curious if you have a particular justification for expecting non-simulated training for multi-agent problems.

In certain domains, there are very strong economic incentives to train agents that will act in a real-world multi-agent environment, where the ability to simulate the environment is limited (e.g. trading in stock markets and choosing content for social media users).
