What’s going on with ‘crunch time’?

post by rosehadshar · 2023-01-20T09:42:53.215Z · LW · GW · 6 comments


Also on EAForum here [EA · GW].

Imagine the set of decisions which impact TAI outcomes.

Some of the decisions are more important than others: they have a larger impact on TAI outcomes. We care more about a decision the more important it is.

Some of the decisions are also easier to influence than others, and we care more about a decision the easier it is to influence.[1] 

The influenceable decisions are distributed over time, but we don’t know the distribution:

It’s possible that in the future there might be a particularly concentrated period of important decisions, for example like this: 

People refer to this concentrated period of important decisions as ‘crunch time’.[2] The distribution might or might not end up actually containing a concentrated period of important decisions - or in other words, crunch time may or may not happen.
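As a rough illustration of the framing (a toy sketch only - the distributions, the window size and the helper name leverage_by_window are all arbitrary choices, not anything the framing commits to), you could model each decision as having a time, an importance and an influenceability, and ask how much of the total importance-times-influenceability ‘leverage’ falls into the single busiest time window:

    import random

    # Toy model: each decision gets a time, an importance, and an influenceability.
    # All of the distributions below are arbitrary placeholders.
    random.seed(0)
    decisions = [
        {
            "time": random.uniform(0, 10),           # when the decision happens (arbitrary units)
            "importance": random.expovariate(1.0),   # heavy-ish tail: a few decisions matter a lot
            "influenceability": random.random(),     # 0 = unmovable, 1 = fully influenceable
        }
        for _ in range(200)
    ]

    def leverage_by_window(decisions, window=1.0, horizon=10.0):
        """Sum importance * influenceability of the decisions falling in each time window."""
        n_windows = int(horizon / window)
        totals = [0.0] * n_windows
        for d in decisions:
            idx = min(int(d["time"] / window), n_windows - 1)
            totals[idx] += d["importance"] * d["influenceability"]
        return totals

    totals = leverage_by_window(decisions)
    peak_share = max(totals) / sum(totals)

    # One crude reading of 'crunch time': a single window holds a large share of
    # the total leverage. Whether that happens depends entirely on the (unknown)
    # underlying distribution of decisions.
    print([round(t, 1) for t in totals])
    print(f"Share of leverage in the busiest window: {peak_share:.0%}")

On some distributions the busiest window dominates and the sketch ‘has’ a crunch time; on others the leverage is spread roughly evenly and it doesn’t.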

Where the distribution does contain a concentrated period of important decisions (or is sufficiently likely to), crunch time might be a useful concept to have:

But we don’t know what the underlying distribution of decisions is. Here are just some of the ways the distribution might look: 

The fact that the underlying distribution is uncertain means that there are several ways in which crunch time might be an unhelpful concept:

What does all of this imply?

I think it’s helpful to imagine the distribution of decisions which impact TAI outcomes, and think about crunch time in that wider context. This helps to keep in frame that a) crunch time may not happen in any sense, and b) there are lots of different forms that crunch time could take.

Some other thoughts:

Thanks to Michael Aird, Adam Bales, Daniel Kokotajlo, Jan Kulveit, Chana Messinger and Nicole Ross for variously helping me to think about this.

  1. ^

     Another thing which matters about the decisions is how predictable they are. I haven’t gone into that in this post, so predictability only features here at all to the extent that it’s hard to influence unpredictable things.

  2. ^

     I like this [LW · GW] post, which defines crunch time as “The period where it's relatively more important to optimize for direct effects rather than P2B [? · GW].”

6 comments


comment by Lone Pine (conor-sullivan) · 2023-01-22T05:17:51.448Z · LW(p) · GW(p)

In this post you avoided giving any concrete examples, but I wanted to brainstorm what some of the major decisions are.

  • The decision to run or deploy a particular model, on a particular day, in a particular way. (e.g. OpenAI released ChatGPT to the public on Nov 30, 2022.) This is a decision made possibly by a single engineer, by a team of engineers, or by management.
  • The decision to pursue, or not pursue, a particular technology, idea or method. This is a decision made by engineers, researchers, management and grant-makers.
  • The decision to reveal, or not reveal, certain information to the public. The information could be source code, model weights, a whitepaper, or even the existence of a particular capability. This is a decision made by engineers, researchers and management.
  • The choice of a particular reward function or loss function, if that is part of the model. This is a decision made by engineers and researchers.
  • The choice of particular hardware, such as CPUs vs GPUs vs TPUs vs future neuromorphic hardware. Depending on your perspective, this is a decision made by researchers, by chip companies like NVidia and TSMC, by market forces (gaming & crypto), or by mother nature (some technologies are just more practical than others).
  • The tastes of the public & the market. For example, the public has responded strongly to AI art and chatbots in the last year, but in years past the public was not impressed enough by either technology to use them on a daily basis or consider them impactful. This is a kind of collective decision we all make, and it impacts how management makes choices. For another example, if the public strongly wanted ChatGPT to be completely uncensored and offensive, OpenAI would have made different choices when building their RLHF system.
  • The setting of laws and regulations related to AI. This is a decision made by politicians, clerks, lobbyists and activists.
  • The setting of business policy related to what the AI is "allowed" to do. (eg ChatGPT will refuse to engage on certain topics, although this is highly hackable.) This is a decision made by management, under pressure from the public, politicians, activists, investors etc.
  • The setting of business policy related to who is allowed to access the AI (eg the general public) and what they are allowed to do with it.
  • The choice of reaction in the event that an AI is behaving badly - ie do they intervene, modify the model, shut it down, etc. This is a decision made by engineers and management, but their choices will likely be highly influenced by the alignment community, especially if there are prepared plans.
  • The decision to prepare a plan. Someone, perhaps from this community, might make plans in the event of certain circumstances, such as an AI that is clearly behaving badly. These plans might be helpful in an emergency. This is a decision made by the alignment community, management and researchers.
  • The decision by the alignment community to communicate in particular ways with particular people, eg having a long private conversation with Sam Altman, or publicly appearing on a podcast to discuss alignment. This decision will influence people's thinking, especially that of the most important decision makers.
  • The decision to research particular alignment concepts, eg Agent Foundations, Shard Theory, the stop button problem, etc. This is a decision made by the alignment community and researchers.

Feel free to add to this list.

comment by Program Den (program-den) · 2023-01-21T05:43:39.902Z · LW(p) · GW(p)

I'm guessing TAI doesn't stand for "International Atomic Time", and maybe has something to do with "AI", as it seems artificial intelligence has really captured folk's imagination. =]

It seems like there are more pressing things to be scared of than AI getting super smart (which almost by default seems to imply "and Evil"), but we (humans) don't really seem to care that much about these pressing issues, as I guess they're kinda boring at this point, and we need exciting.

If we had an unlimited amount of energy and focus, maybe it wouldn't matter, but as you kind of ponder here— how do we get people to stay on target?  The less time there is, the more people we need working to change things to address the issue (see Leaded Gas[1], or CFCs and the Ozone Layer, etc.), but there are a lot of problems a lot of people think are important and we're generally fragmented.

I guess I don't really have any answers, other than the obvious (leaded gas is gone, the ozone is recovering), but I can't help wishing we were more logical than emotional about what we worked towards.

Also, FWIW, I don't know that we know that we can't change the past, or if the universe is deterministic, or all kinds of weird ideas like "are we in a simulation right now/are we the AI"/etc.— which are hardcore axioms to still have "undecided" so to speak!  I better stop here before my imagination really runs wild…

  1. ^

    but like, not leaded pipes so much, as they're still 'round even tho we could have cleaned them up and every year say we will or whatnot, but I digress

Replies from: jimrandomh, ete
comment by jimrandomh · 2023-01-21T08:53:48.055Z · LW(p) · GW(p)

TAI = Transformative AI [? · GW]

I think you're missing too many prerequisites to follow this post, and that you're looking for something more introductory.

Replies from: program-den
comment by Program Den (program-den) · 2023-01-21T22:31:11.820Z · LW(p) · GW(p)

LOL!  Yeah I thought TAI meant 

TAI: Threat Artificial Intelligence

The acronym was the only thing I had trouble following, the rest is pretty old hat.

Unless folks think "crunch time" is something new having only to do with "the singularity" so to speak?

If you're serious about finding out if "crunch time" exists[1] or not, as it were, perhaps looking at existing examples might shed some light on it?

  1. ^

    even if only in regards to AGI

comment by plex (ete) · 2023-01-21T21:53:08.456Z · LW(p) · GW(p)

Agree with Jim, and suggest starting with some Rob Miles videos. The Computerphile ones, and those on his main channel, are a good intro.

Replies from: program-den
comment by Program Den (program-den) · 2023-01-21T22:53:46.990Z · LW(p) · GW(p)

I'm familiar with AGI, and the concepts herein (why the OP likes the proposed definition of CT better than PONR), it was just a curious post, what with having "decisions in the past cannot be changed" and "does X concept exist" and all.

I think maybe we shouldn't muddy the waters more than we already have with "AI" (like AGI is probably a better term for what was meant here— or was it?  Are we talking about losing millions of call center jobs to "AI" (not AGI) and how that will impact [LW · GW] the economy/whatnot?  I'm not sure if that's transformatively up there with the agricultural and industrial revolutions (as automation seems industrial-ish?).  But I digress.), by saying "maybe crunch time isn't a thing?  Or it's relative?".

I mean, yeah, time is relative, and doesn't "actually" exist, but if indeed we live in causal universe [LW · GW] (up for debate [LW · GW]) then indeed, "crunch time" exists, even if by nature it's fuzzy— as lots of things contribute to making Stuff Happen. (The butterfly effect, chaos theory, game theory &c.)

“The avalanche has already started. It is too late for the pebbles to vote."
- Ambassador Kosh