"Tech company singularities", and steering them to reduce x-risk

post by Andrew_Critch · 2022-05-13T17:24:00.417Z · LW · GW · 12 comments

Contents

  A tech company singularity as a point of coordination and leverage
    How to steer tech company singularities?
      How to help steer scientists away from AAGI:
      How to convince the public that AAGI is bad:
      How to convince regulators that AAGI is bad:
      How to convince investors that AAGI is bad:
  Summary

The purpose of this post (also available on the EA Forum [EA · GW]) is to share an alternative notion of “singularity” that I’ve found useful in timelining/forecasting.

Specifically, a fully general tech company is a technology company with the ability to become a world-leader in essentially any industry sector, given the choice to do so — in the form of agreement among its Board and CEO — with around one year of effort following the choice.

Notice here that I’m focusing on a company’s ability to do anything another company can do, rather than an AI system's ability to do anything a human can do.  Here, I’m also focusing on what the company can do if it chooses rather than what it actually ends up choosing to do.  If a company has these capabilities and chooses not to use them — for example, to avoid heavy regulatory scrutiny or risks to public health and safety — it still qualifies as a fully general tech company.

This notion can be contrasted with AGI (artificial general intelligence) and AAGI (autonomous AGI), which describe the capabilities of AI systems rather than the capabilities of companies.

Now, consider the following two types of phase changes in tech progress:

  1. A tech company singularity is a transition of a technology company into a fully general tech company.  This could be enabled by safe AGI (almost certainly not AAGI, which is unsafe), or it could be prevented by unsafe AGI destroying the company or the world.
  2. An AI singularity is a transition from having merely narrow AI technology to having AGI technology.

I think the tech company singularity concept, or some variant of it, is important for societal planning, and I’ve written predictions about it before.

A tech company singularity as a point of coordination and leverage

The reason I like this concept is that it gives an important point of coordination and leverage that is not AGI, but which interacts in important ways with AGI.  Observe that a tech company singularity could arrive

  1. before AGI, and could play a role in
    1. preventing AAGI, e.g., through supporting and enabling regulation;
    2. enabling AGI but not AAGI, such as if tech companies remain focussed on providing useful/controllable products (e.g., PaLM, DALL-E);
    3. enabling AAGI, such as if tech companies allow experiments training agents to fight and outthink each other to survive.
  2. after AGI, such as if the tech company develops safe AGI, but not AAGI (which is hard to control, doesn't enable the tech company to do stuff, and might just destroy it).

Points (1.1) and (1.2) are, I think, humanity’s best chance for survival.  Moreover, I think there is some chance that the first tech company singularity could come before the first AI singularity, if tech companies remain sufficiently oriented on building systems that are intended to be useful/usable, rather than systems intended to be flashy/scary.

How to steer tech company singularities?

The above suggests an intervention point for reducing existential risk: convincing a mix of scientists, the public, regulators, and investors to shame tech companies for building useless/flashy systems (e.g., autonomous agents trained in evolution-like environments to exhibit survival-oriented intelligence), so they remain focussed on building usable/useful systems (e.g., DALL-E, PaLM) preceding and during a tech company singularity.  In other words, we should try to steer tech company singularities toward developing comprehensive AI services [LW · GW] (CAIS) rather than AAGI.

How to help steer scientists away from AAGI: 

How to convince the public that AAGI is bad: 

How to convince regulators that AAGI is bad:

How to convince investors that AAGI is bad:

Speaking personally, I have found it fairly easy to make these points since around 2016.  Now, with the rapid advances in AI we’ll be seeing from 2022 onward, it should be easier.  And, as Adam Scherlis (sort of) points out in an EA Forum comment [EA(p) · GW(p)], we shouldn't assume that no one new will ever care about AI x-risk, especially as AI x-risk becomes more evidently real.  So, it makes sense to re-try making points like these from time to time as discourse evolves.

Summary

In this post, I introduced the notion of a "tech company singularity", discussed how the idea might be usable as an important coordination and leverage point for reducing x-risk, and gave some ideas for convincing others to help steer tech company singularities away from AAGI.

All of this isn't to say we'll be safe from AI risk; far from it.  See, e.g., What Multipolar Failure Looks Like [LW · GW].  Efforts to maintain cooperation on safety across labs and jurisdictions remain paramount, IMHO.

In any case, try on the "tech company singularity" concept and see if it does anything for you :)

12 comments

Comments sorted by top scores.

comment by ESRogs · 2022-05-13T23:35:24.015Z · LW(p) · GW(p)

fully general tech company is a technology company with the ability to become a world-leader in essentially any industry sector...

Notice here that I’m focusing on a company’s ability to do anything another company can do

To clarify, is this meant to refer to a fixed definition of sectors and what other companies can do as they existed prior to the TCS?

Or is it meant to include FGTCs being able to copy the output of other FGTCs?

I'd assume you mean something like the former, but I think it's worth being explicit about the fact that what sectors exist and what other companies can do will be moving targets.

Replies from: Andrew_Critch
comment by Andrew_Critch · 2022-05-14T03:53:12.278Z · LW(p) · GW(p)

Yep, you got it!  The definition is meant to be non-recursive and grounded in 2022-level industrial capabilities.  This definition is a bit unsatisfying insofar as 2022 is a bit arbitrary, except that I don't think the definition would change much if we replaced 2022 by 2010.

I decided not to get into these details to avoid bogging down the post with definitions, but if a lot of people upvote you on this I will change the OP.

Thanks for raising this!

comment by trevor (TrevorWiesinger) · 2022-05-13T19:17:34.768Z · LW(p) · GW(p)

The main problem is that tech companies are much, much better at steering you than you are at steering them. So in the AI policy space, people mostly work on trying to explain AI risk to decisionmakers in an honest and persuasive way [LW · GW], not by relabelling tech companies (which can be interpreted or misinterpreted as pointing fingers).

Another very serious problem is that tech companies are not the friendly, peaceful, or technocratic behemoths that they appear to be to many of their employees and engineers. Autonomous weapons are now a central foundation for nuclear deterrence, and AI production is clearly recognized as critical to national security.

I highly recommend working with people who are already integrated into the international policy space, since they already know the lay of the land and the pitfalls; anyone capable of reinventing the wheel is capable of optimizing the wheel significantly further.

Replies from: Andrew_Critch
comment by Andrew_Critch · 2022-05-13T20:09:53.766Z · LW(p) · GW(p)

(I originally posted this reply to the wrong thread)

tech companies are much, much better at steering you than you are at steering them. So in the AI policy space, people mostly work on trying to explain AI risk to decisionmakers in an honest and persuasive way [LW · GW], not by relabelling tech companies (which can be interpreted or misinterpreted as pointing fingers).

I agree with this generally.

comment by Aryeh Englander (alenglander) · 2022-05-13T18:19:25.554Z · LW(p) · GW(p)

Quick thought: What counts as a "company" and what counts as "one year of effort"? If Alphabet's board of directors decided for some reason to divert 99% of the company's resources toward buying up coal companies, thereby becoming a world leader in the coal industry, does that count? What if Alphabet doesn't buy the companies outright but instead headhunts all of their employees and buys all the necessary hardware and infrastructure?

Similarly, you specified that it needs to be a "tech company", but what exactly differentiates a tech company from a regular company? (For this at least I'm guessing there's likely a standard definition, I just don't know what it is.)

It seems to me that the details here can make a huge difference for predictions at least.

Replies from: Andrew_Critch, Academian
comment by Andrew_Critch · 2022-05-13T21:01:44.093Z · LW(p) · GW(p)

I agree this is an important question.  From the post:

given the choice to do so — in the form of agreement among its Board and CEO — with around one year of effort following the choice. 

I.e., in the definition, the "company" is considered to have "chosen" once the Board and CEO have agreed to do it.  If the CEO and Board agree and make the choice but the company fails to do the thing — e.g., because the employees refuse to go along with the Board+CEO decision — then the company has failed to execute on its choice, despite "effort" (presumably, the CEO and Board telling their people and machines to do stuff that didn't end up getting done).

As for what is or is not a tech company, I don't think it matters to the definition or the post or predictions, because I think only things that would presently colloquially be considered "tech companies" have a reasonable chance at meeting the remainder of the conditions in the definition.

comment by Academian · 2022-05-13T19:22:24.400Z · LW(p) · GW(p)
comment by Jan_Kulveit · 2022-06-17T21:41:01.212Z · LW(p) · GW(p)

I broadly agree with this.

One point where I don't think it quite carves reality at its joints is "become a world-leader in essentially any industry sector" as a criterion.

In practice, if I think of GTC-A trying to become a world leader in, for example, mining, and in contrast GTC-B selling AI-based R&D services, coordination services, and other types of "cognitive services" to existing miners, I would expect GTC-B + the existing miners to be more competitive, especially on a timescale of one year. One reason for this is inertia & interactions with slow systems: I would suppose part of the comparative advantage of miners is in e.g. being good at negotiating with the government of Ruritania about mining permits, which is not a typical software problem. As a result, I'd expect more of "GTCs becoming a larger fraction of the economy" - by sucking up profits from industrial sectors - rather than GTCs becoming world-leaders in mining.

comment by steven0461 · 2022-05-13T19:30:21.483Z · LW(p) · GW(p)
  2. after a tech company singularity,

I think this was meant to read "2. after AGI,"

Replies from: Andrew_Critch
comment by Andrew_Critch · 2022-05-13T21:05:53.202Z · LW(p) · GW(p)

Yes, thanks!  Fixed.

Replies from: michaelkeenan
comment by michaelkeenan · 2022-05-14T15:23:54.684Z · LW(p) · GW(p)

Looks like it's fixed on the EA Forum version but not the LW version.

Replies from: Andrew_Critch
comment by Andrew_Critch · 2022-05-14T16:26:41.316Z · LW(p) · GW(p)

Now fixed here as well.