AI teams will probably be more superintelligent than individual AIs

post by Robert_AIZI · 2023-07-04T14:06:49.881Z

This is a link post for https://aizi.substack.com/p/ais-teams-will-probably-be-more-superintelligent

Contents

  Summary
  Teams of Humans are more Powerful and Intelligent than Individuals
  This Probably Applies to AIs Too
  A Future of AI Teams

Summary

Teams of humans (countries, corporations, governments, etc.) are more powerful and intelligent than individual humans. Our prior should be the same for AIs. AI organizations may then face coordination problems like management overhead, the principal-agent problem, defection/mutiny, and general Moloch-y-ness. This doesn't especially reduce the risk of AI takeover.

Teams of Humans are more Powerful and Intelligent than Individuals

Human society is at such a scale that many goals can only be achieved by a team of many people working in concert. Such goals include winning a war, building a successful company, making a Hollywood movie, and realizing scientific achievements (e.g. building nuclear bombs, going to the moon, eradicating smallpox).

This is not a coincidence, because collaboration gives you a massive power boost. Even if the boost is subadditive (e.g. 100 people together are only 10x more powerful than 1 person), it is still a boost, and therefore the most powerful agents in a society will be teams rather than individuals.
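To make this concrete, here is a toy model of subadditive team power. The scaling exponent is an arbitrary illustrative choice, not a measured value; the point is only that any positive returns to scale make teams dominate individuals:

```python
# Toy model: team power scales subadditively with team size,
# e.g. power(n) = n ** alpha for some 0 < alpha < 1.
# (alpha = 0.5 is an arbitrary illustrative choice, not a measured value.)

def team_power(n: int, alpha: float = 0.5) -> float:
    """Power of a team of n equally capable agents, subadditive in n."""
    return n ** alpha

individual = team_power(1)      # 1.0
team_of_100 = team_power(100)   # 10.0 -- only 10x, matching the example above

# The boost is far less than 100x, but any alpha > 0 means a team
# strictly dominates an individual, so the most powerful agents are teams.
assert team_of_100 > individual
print(f"A team of 100 is {team_of_100 / individual:.0f}x as powerful as one agent")
```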

This Probably Applies to AIs Too

I expect the same dynamic will apply to AIs as well: no matter how superintelligent an AI is, a team of such AIs working together would be even more superintelligent[1]. If this is true, we should expect that even after the creation of ASI, the most powerful agents in the world will be coordinated teams, not individual ASIs.

Let me acknowledge one objection and provide two rebuttals:

Objection: AIs can increase their power by A) coordinating with other AIs or B) making themselves smarter, and both of these cost the same resource (compute). If B) is always more cost-efficient, then A) will not happen.

Rebuttal A: Currently, LLM training runs are far more expensive than operation, by many orders of magnitude[2]. Even if an AI devoted most of its resources to training a successor system, spending 1% of its compute running copies of itself seems like a free roll to increase its power (including by finding optimizations for the training runs).

Rebuttal B: A single AI will simply be able to make far fewer decisions per second than a team of AIs. If speed is ever highly valued (and it usually is), a team can be more powerful by making many good decisions rather than a few great ones.
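As a toy illustration of this speed-vs-quality tradeoff (all numbers here are made up for illustration, not measurements of any real system):

```python
# Toy throughput comparison: one "great" decision-maker vs. a team making
# "merely good" decisions in parallel. All numbers are illustrative.

def value_per_second(n_agents: int, decisions_per_sec: float,
                     value_per_decision: float) -> float:
    """Total decision value produced per second by n parallel agents."""
    return n_agents * decisions_per_sec * value_per_decision

solo = value_per_second(n_agents=1,   decisions_per_sec=1.0, value_per_decision=1.0)
team = value_per_second(n_agents=100, decisions_per_sec=1.0, value_per_decision=0.5)

# Even if each team member's decisions are only half as good,
# the team produces 50x the decision value per second.
print(f"solo: {solo:.1f} value/s, team: {team:.1f} value/s")
```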

A Future of AI Teams

If teaming up is always advantageous, we can still be headed for AI takeover and human disempowerment. However, the resulting world may not be fully under the control of a single AI or a singleton (in Bostrom’s sense of “a single decision-making agency”). Instead, you’d have AIs forming structures similar to corporations or governments, potentially facing the usual follies: management overhead, the principal-agent problem, defection/mutiny, and general Moloch-y-ness.

  1. ^

    I’m including multiple copies of a single AI as a possibility in “a team of such AIs”.

  2. ^

I measured ChatGPT-3.5 as producing 60 tokens/second, and ChatGPT-4 as producing 12 tokens/second. At current pricing on the maximum context windows, that comes out to ~$300 and ~$4500 per year, respectively, whereas Sam Altman said it cost $100M to train GPT-4. If GPT-4 suddenly became sentient, it could run 200 copies of itself for a year for <1% of what it would cost to train a GPT-5.
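A quick back-of-envelope check of this footnote's arithmetic (using the footnote's own estimates, not authoritative pricing data):

```python
# Back-of-envelope check of the footnote's numbers (all figures are the
# post's own estimates, not authoritative pricing data).

gpt4_inference_cost_per_year = 4_500      # ~$4500/yr to run one GPT-4 at 12 tok/s
n_copies = 200
gpt4_training_cost = 100_000_000          # Sam Altman's ~$100M figure for GPT-4

fleet_cost = n_copies * gpt4_inference_cost_per_year    # $900,000
fraction_of_training = fleet_cost / gpt4_training_cost  # 0.009

print(f"Running {n_copies} copies for a year: ${fleet_cost:,} "
      f"({fraction_of_training:.1%} of a $100M training run)")
# -> 0.9%, i.e. "<1% of what it would cost to train a GPT-5"
#    (treating GPT-5's training cost as at least GPT-4's).
```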
