Views on when AGI comes and on strategy to reduce existential risk

post by TsviBT · 2023-07-08T09:00:19.735Z · LW · GW · 31 comments

Contents

  My views on when AGI comes
    AGI
    Conceptual capabilities progress
    Timelines
  Responses to some arguments for AGI soon
    The "inputs" argument
    The "big evolution" argument
    "I see how to do it"
    The "no blockers" intuition
    "We just need X" intuitions
    The bitter lesson and the success of scaling
    Large language models
  Other comments on AGI soon
  My views on strategy
    Things that might actually work

Summary: AGI isn't super likely to come super soon. People should be working on stuff that saves humanity in worlds where AGI comes in 20 or 50 years, in addition to stuff that saves humanity in worlds where AGI comes in the next 10 years.

Thanks to Alexander Gietelink Oldenziel, Abram Demski, Daniel Kokotajlo, Cleo Nardo, Alex Zhu, and Sam Eisenstat for related conversations.

My views on when AGI comes

AGI

By "AGI" I mean the thing that has very large effects on the world (e.g., it kills everyone) via the same sort of route that humanity has large effects on the world. The route is where you figure out how to figure stuff out, and you figure a lot of stuff out using your figure-outers, and then the stuff you figured out says how to make powerful artifacts that move many atoms into very specific arrangements.

This isn't the only thing to worry about. There could be transformative AI that isn't AGI in this sense. E.g. a fairly-narrow AI that just searches configurations of atoms and finds ways to do atomically precise manufacturing would also be an existential threat and a possibility for an existential win.

Conceptual capabilities progress

The "conceptual AGI" view:

The first way humanity makes AGI is by combining some set of significant ideas about intelligence. Significant ideas are things like (the ideas of) gradient descent, recombination, probability distributions, universal computation, search, world-optimization. Significant ideas are to a significant extent bottlenecked on great natural philosophers doing great natural philosophy about intelligence, with sequential bottlenecks between many insights.

The conceptual AGI view doesn't itself claim that humanity doesn't already have enough ideas to make AGI. I claim that we don't——though not super strongly.

Timelines

Giving probabilities here doesn't feel great. For one thing, it seems to contribute to information cascades and to shallow coalition-forming. For another, it hides the useful models. For yet another thing: A probability bundles together a bunch of stuff I have models about, with a bunch of stuff I don't have models about. For example, how many people will be doing original AGI-relevant research in 15 years? I have no idea, and it seems like largely a social question. The answer to that question does affect when AGI comes, though, so a probability about when AGI comes would have to depend on that answer.

But ok. Here are some butt-numbers: something like 3%-10% probability of AGI in the next 10-15ish years.

If I were trying to make a model with parts, I might try starting with a mixture of Erlang distributions of different shapes, and then stretching that according to some distribution about the number of people doing original AI research over time.
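
To gesture at the shape of such a model, here's a minimal sketch in Python (not a worked-out model; every number is a placeholder, and a single uncertain speedup factor stands in for the distribution over future research effort over time):

```python
# Minimal sketch, not a worked-out model: a mixture of Erlang distributions
# over how long the remaining insights take, crudely "stretched" by an
# uncertain research-effort multiplier. All numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# How many sequential insights remain (Erlang shape), with mixture weights.
shapes = np.array([2, 4, 8])         # placeholder counts of remaining insights
weights = np.array([0.3, 0.5, 0.2])  # placeholder mixture weights
scale = 8.0                          # placeholder mean years per insight at current effort

k = rng.choice(shapes, size=n, p=weights)
raw_years = rng.gamma(shape=k, scale=scale)  # Erlang = gamma with integer shape

# Stretch by an uncertain effort multiplier (more researchers -> less calendar time).
speedup = rng.lognormal(mean=np.log(1.5), sigma=0.5, size=n)
calendar_years = raw_years / speedup

for horizon in (10, 15, 25, 50):
    print(f"P(AGI within {horizon} years) ~ {(calendar_years < horizon).mean():.2f}")
```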

Again, this is all butt-numbers. I have almost no idea about how much more understanding is needed to make AGI, except that it doesn't seem like we're there yet.

Responses to some arguments for AGI soon

The "inputs" argument

At about 1:15 in this interview, Carl Shulman argues (quoting from the transcript):

We've been scaling [compute expended on ML] up four times as fast as was the case for most of the history of AI. We're running through the orders of magnitude of possible resource inputs you could need for AI much much more quickly than we were for most of the history of AI. That's why this is a period with a very elevated chance of AI per year because we're moving through so much of the space of inputs per year [...].

This isn't the complete argument Shulman gives, but it's interesting on its own. Taken by itself, it's valid, but only if we're actually scaling up all the needed inputs.

On the conceptual AGI view, this isn't the case, because we aren't very greatly increasing the number of great natural philosophers doing great natural philosophy about intelligence. That's a necessary input, and it's only being somewhat scaled up. For one thing, many new AI researchers are correlated with each other, and many are focused on scaling up, applying, and varying existing ideas. For another thing, sequential progress can barely be sped up with more bodies.

The "big evolution" argument

Carl goes on to argue that eventually, when we have enough compute, we'll be able to run a really big evolutionary process that finds AGIs (if we haven't already made AGI). This idea also appears in Ajeya Cotra's report [LW · GW] on the compute needed to create AGI.

I broadly agree with this. But I have two reasons that this argument doesn't make AGI seem very likely very soon.

The first reason is that running a big evolution actually seems kind of hard; it seems to take significant conceptual progress and massive engineering effort to make the big evolution work. What I'd expect to see when this is tried, is basically nothing; life doesn't get started, nothing interesting happens, the entities don't get far (beyond whatever primitives were built in). You can get around this by invoking more compute, e.g. by simulating physics more accurately at a more detailed level, or by doing hyperparameter search to find worlds that lead to cool stuff. But then you're invoking more compute. (I'd also expect a lot of the hacks that supposedly make our version of evolution much more efficient than real evolution, to actually result in our version being circumscribed, i.e. it peters out because the shortcut that saved compute also cut off some important dimensions of search.)

The second reason is that evolution seems to take a lot of serial time. There's probably lots of clever things one can do to shortcut this, but these would be significant conceptual progress.

"I see how to do it"

My (limited / filtered) experience with these ideas leads me to think that [ideas knowably sufficient to make an AGI in practice] aren't widespread or obvious. (Obviously it is somehow feasible to make an AGI, because evolution did it.)

The "no blockers" intuition

An intuition that I often encounter is something like this:

Previously, there were blockers to current systems being developed into AGI. But now those blockers have been solved, so AGI could happen any time now.

This sounds to my ears like: "I saw how to make AGI, but my design required X. Then someone made X, so now I have a design for an AGI that will work.". But I don't think that's what they think. I think they don't think they have to have a design for an AGI in order to make an AGI.

I kind of agree with some version of this——there's a lot of stuff you don't have to understand, in order to make something that can do some task. We observe this in modern ML. But current systems, though they impressively saturate some lower-dimensional submanifold of capability-space, don't permeate a full-dimensional submanifold. Intelligence is a positive thing: it has to be put there, not merely left unblocked. Most computer code doesn't put itself on an unbounded trajectory of gaining capabilities. To make it work you have to do engineering and science, at some level. Bridges don't hold weight just because there's nothing blocking them from holding weight.

Daniel Kokotajlo points out that for things that grow, it's kind of true that they'll succeed as long as there aren't blockers——and for example animal husbandry kind of just works, without the breeders understanding much of anything about the internals of why their selection pressures are met with adequate options to select. This is true, but it doesn't seem very relevant to AGI because we're not selecting from an existing pool of highly optimized "genomic" (that is, mental) content. If instead of tinkering with de novo gradient-searched circuits, we were tinkering with remixing and mutating whole-brain emulations, then I would think AGI comes substantially sooner.

Another regime where "things just work" is many mental contexts where a task is familiar enough in some way that you can expect to succeed at the task by default. For example, if you're designing a wadget, and you've previously designed similar wadgets to similar specifications, then it makes sense to treat a design idea as though it's going to work out——as though it can be fully fleshed out into a satisfactory, functioning design——unless you see something clearly wrong with it, a clear blocker like a demand for a metal with unphysical properties. Again, like the case of animal husbandry, the "things just work" comes from the (perhaps out of sight) preexisting store of optimized content that's competent to succeed at the task given a bit of selection and arrangement. In the case of AGI, no one's ever built anything like that, so the store of knowledge that would automatically flesh out blockerless AGI ideas is just not there.

Yet another such regime is markets, where the crowd of many agents can be expected to figure out how to do something as long as it's feasible. So, a version of this intuition goes:

There are a lot of people trying to make AGI. So either there's some strong blocker that makes it so that no one can make AGI, or else someone will make AGI.

This is kind of true, but it just goes back to the question of how much conceptual progress will people make towards AGI. It's not an argument that we already have the understanding needed to make AGI. If it's used as an argument that we already have the understanding, then it's an accounting mistake: it says "We already have the understanding. The reason we don't need more understanding, is that if there were more understanding needed, someone else will figure it out, and then we'll have it. Therefore no one needs to figure anything else out.".

Finally: I also see a fair number of specific "blockers", as well as some indications that existing things don't have properties that would scare me.

"We just need X" intuitions

Another intuition that I often encounter is something like this:

We just need X to get AGI. Once we have X, in combination with Y it will go all the way.

Some examples of Xs: memory, self-play, continual learning, curricula, AIs doing AI research, learning to learn, neural nets modifying their own weights, sparsity, learning with long time horizons.

For example: "Today's algorithms can learn anything given enough data. So far, data is limited, and we're using up what's available. But self-play generates infinite data, so our systems will be able to learn unboundedly. So we'll get AGI soon.".

This intuition is similar to the "no blockers" intuition, and my main response is the same: the reason bridges stand isn't that you don't see a blocker to them standing. See above.

A "we just need X" intuition can become a "no blockers" intuition if someone puts out an AI research paper that works out some version of X. That leads to another response: just because an idea is, at a high level, some kind of X, doesn't mean the idea is anything like the fully-fledged, generally applicable version of X that one imagines when describing X.

For example, suppose that X is "self-play". One important thing about self-play is that it's an infinite source of data, provided in a sort of curriculum of increasing difficulty and complexity. Since we have the idea of self-play, and we have some examples of self-play that are successful (e.g. AlphaZero), aren't we most of the way to having the full power of self-play? And isn't the full power of self-play quite powerful, since it's how evolution made AGI? I would say "doubtful". The self-play that evolution uses (and the self-play that human children use) is much richer, containing more structural ideas, than the idea of having an agent play a game against a copy of itself.

Most instances of a category are not the most powerful, most general instances of that category. So just because we have, or will soon have, some useful instances of a category, doesn't strongly imply that we can or will soon be able to harness most of the power of stuff in that category. I'm reminded of the politician's syllogism: "We must do something. This is something. Therefore, we must do this.".

The bitter lesson and the success of scaling

Sutton's bitter lesson, paraphrased:

AI researchers used to focus on coming up with complicated ideas for AI algorithms. They weren't very successful. Then we learned that what's successful is to leverage computation via general methods, as in deep learning and massive tree search.

Some add on:

And therefore what matters in AI is computing power, not clever algorithms.

This conclusion doesn't follow. Sutton's bitter lesson is that figuring out how to leverage computation using general methods that scale with more computation beats trying to perform a task by encoding human-learned specific knowledge about the task domain. You still have to come up with the general methods. It's a different sort of problem——trying to aim computing power at a task, rather than trying to work with limited computing power or trying to "do the task yourself"——but it's still a problem. To modify a famous quote: "In some ways we feel we are as bottlenecked on algorithmic ideas as ever, but we believe we are bottlenecked on a higher level and about more important things."

Large language models

Some say:

LLMs are already near-human and in many ways super-human general intelligences. There's very little left that they can't do, and they'll keep getting better. So AGI is near.

This is a hairy topic, and my conversations about it have often seemed not very productive. I'll just try to sketch my view:

Other comments on AGI soon

My views on strategy

Things that might actually work

Besides the standard stuff (AGI alignment research, moratoria on capabilities research, explaining why AGI is an existential risk), here are two key interventions:

31 comments

Comments sorted by top scores.

comment by Max H (Maxc) · 2023-07-08T15:42:29.185Z · LW(p) · GW(p)

(Obviously it is somehow feasible to make an AGI, because evolution did it.)

This parenthetical is one of the reasons why I think AGI is likely to come soon.

The example of human evolution provides a strict upper bound on the difficulty of creating (true, lethally dangerous) AGI, and of packing it into a 10 W, 1000 cm³ box.

That doesn't mean that recreating the method used by evolution (iterative mutation over millions of years at planet scale) is the only way to discover and learn general-purpose reasoning algorithms. Evolution had a lot of time and resources to run, but it is an extremely dumb optimization process that is subject to a bunch of constraints and quirks of biology, which human designers are already free of.

To me, LLMs and other recent AI capabilities breakthroughs are evidence that methods other than planet-scale iterative mutation can get you something, even if it's still pretty far from AGI. And I think it is likely that capabilities research will continue to lead to scaling and algorithms progress that will get you more and more something. But progress of this kind can't go on forever - eventually it will hit on human-level (or better) reasoning ability. 

The inference I make from observing both the history of human evolution and the spate of recent AI capabilities progress is that human-level intelligence can't be that special or difficult to create in an absolute sense, and that while evolutionary methods (or something isomorphic to them) at planet scale are sufficient to get to general intelligence, they're probably not necessary.

Or, put another way:
 

Finally: I also see a fair number of specific "blockers", as well as some indications that existing things don't have properties that would scare me.


I mostly agree with the point about existing systems, but I think there are only so many independent high-difficulty blockers which can "fit" inside the AGI-invention problem, since evolution somehow managed to solve them all through inefficient brute force. LLMs are evidence that at least some of the (perhaps easier) blockers can be solved via methods that are tractable to run on current-day hardware on far shorter timescales than evolution.

 

Replies from: TsviBT
comment by TsviBT · 2023-07-09T00:24:38.374Z · LW(p) · GW(p)

(Glib answers in place of no answers)

eventually it will hit on human-level (or better) reasoning ability.

Or it's limited to a submanifold of generators.

inefficient brute force

I don't think this is a good description of evolution.

Replies from: Maxc
comment by Max H (Maxc) · 2023-07-09T03:30:20.183Z · LW(p) · GW(p)

Or it's limited to a submanifold of generators.

"It" here refers to progress from human ingenuity, so I'm hesitant to put any limits whatsoever on what it will produce and how fast, let alone a limit below what evolution has already achieved.

I don't think this is a good description of evolution.

Hmm, yeah. The thing I am trying to get at is that evolution is very dumb and limited in some ways (in the sense of An Alien God [LW · GW], Evolutions Are Stupid (But Work Anyway) [LW · GW]), compared to human designers, but managed to design a general intelligence anyway, given enough time / energy / resources.
 

By "inefficient", I mean that human researchers with GPUs can (probably) design and create general intelligence OOM faster than evolution. Humans are likely to accomplish such a feat in decades or centuries at the most, so I think it is justified to call any process which takes millennia or longer inefficient, even if the human-based decades-long design process hasn't actually succeeded yet.

Replies from: TsviBT, TsviBT
comment by TsviBT · 2023-07-10T06:57:18.975Z · LW(p) · GW(p)

evolution is very dumb and limited in some ways

Ok. This makes sense. And I think just about everyone agrees that evolution is very inefficient, in the sense that with some work (but vastly less time than evolution used) humans will be able to figure out how to make a thing that, using far fewer resources than evolution used, makes an AGI.

I was objecting to "brute force", not "inefficient". It's brute force in some sense, like it's "just physics" in the sense that you can just set up some particles and then run physics forward and get an AGI. But it also uses a lot of design ideas (stuff in the genome, and some ecological structure). It does a lot of search on a lot of dimensions of design. If you don't efficient-ify your big evolution, you're invoking a lot of compute; if you do efficient-ify, you might be cutting off those dimensions of search.

comment by TsviBT · 2023-07-10T06:06:20.683Z · LW(p) · GW(p)

"It" here refers to progress from human ingenuity, so I'm hesitant to put any limits whatsoever on what it will produce and how fast

There's a contingent fact which is how many people are doing how much great original natural philosophy about intelligence and machine learning. If I thought the influx of people were directed at that, rather than at other stuff, I'd think AGI was coming sooner.

Humans are likely to accomplish such a feat in decades or centuries at the most,

As I said in the post, I agree with this, but I think it requires a bunch of work that hasn't been done yet, some of it difficult / requires insights.

Replies from: Maxc, mateusz-baginski
comment by Max H (Maxc) · 2023-07-12T17:20:40.278Z · LW(p) · GW(p)

I actually think another lesson from both evolution and LLMs is that it might not require much or any novel philosophy or insight to create useful cognitive systems, including AGI. I expect high-quality explicit philosophy to be one way of making progress, but not the only one.

Evolution itself did not do any philosophy in the course of creating general intelligence, and humans themselves often manage to grow intellectually and get smarter without doing natural philosophy, explicit metacognition, or deep introspection.

So even if LLMs and other current DL paradigm methods plateau, I think it's plausible, even likely, that capabilities research like Voyager will continue making progress for a lot longer. Maybe Voyager-like approaches will scale all the way to AGI, but even if they also plateau, I expect that there are ways of getting unblocked other than doing explicit philosophy of intelligence research or massive evolutionary simulations.

In terms of responses to arguments in the post: it's not that there are no blockers, or that there's just one thing we need, or that big evolutionary simulations will work or be feasible any time soon. It's just that explicit philosophy isn't the only way of filling in the missing pieces, however large and many they may be.

Replies from: Darcy
comment by Dalcy (Darcy) · 2023-07-13T12:48:23.555Z · LW(p) · GW(p)

Related - "There are always many ways through the garden of forking paths, and something needs only one path to happen."

comment by Mateusz Bagiński (mateusz-baginski) · 2023-07-10T07:23:57.721Z · LW(p) · GW(p)

Don't you think that once scaling hits the wall (assuming it does) the influx of people will be redirected towards natural philosophy of Intelligence and ML?

Replies from: TsviBT
comment by TsviBT · 2023-07-10T07:47:23.070Z · LW(p) · GW(p)

Yep! To some extent. That's what I meant by "It also seems like people are distracted now.", above. I have a denser probability on AGI in 2037 than on AGI in 2027, for that reason.

Natural philosophy is hard, and somewhat has serial dependencies, and IMO it's unclear how close we are. (That uncertainty includes "plausibly we're very very close, just another insight about how to tie things together will open the floodgates".) Also there's other stuff for people to do. They can just quiesce into bullshit jobs; they can work on harvesting stuff; they can leave the field; they can work on incremental progress.

comment by Adele Lopez (adele-lopez-1) · 2023-07-09T14:30:28.025Z · LW(p) · GW(p)

Is there a specific thing you think LLMs won't be able to do soon, such that you would make a substantial update toward shorter timelines if there was an LLM able to do it within 3 years from now?

Replies from: TsviBT, Benito
comment by TsviBT · 2023-07-10T06:42:17.793Z · LW(p) · GW(p)

Well, making it pass people's "specific" bar seems frustrating, as I mentioned in the post, but: understand stuff deeply--such that it can find new analogies / instances of the thing, reshape its idea of the thing when given propositions about the thing taken as constraints, draw out relevant implications of new evidence for the ideas.

Like, someone's going to show me an example of an LLM applying modus ponens, or making an analogy. And I'm not going to care, unless there's more context; what I'm interested in is [that phenomenon which I understand at most pre-theoretically, certainly not explicitly, which I call "understanding", and which has as one of its sense-experience emanations the behavior of making certain "relevant" applications of modus ponens, and as another sense-experience emanation the behavior of making analogies in previously unseen domains that bring over rich stuff from the metaphier].

Replies from: adele-lopez-1, Roman Leventov
comment by Adele Lopez (adele-lopez-1) · 2023-07-10T06:59:31.051Z · LW(p) · GW(p)

Alright, to check if I understand, would these be the sorts of things that your model is surprised by?

  1. An LLM solves a mathematical problem by introducing a novel definition which humans can interpret as a compelling and useful concept.
  2. An LLM which can be introduced to a wide variety of new concepts not in its training data, and after a few examples and/or clarifying questions is able to correctly use the concept to reason about something.
  3. An image diffusion model which is shown to have a detailed understanding of anatomy and 3D space, such that you can use it to transform a photo of a person into an image of the same person in a novel pose (not in its training data) and angle, with correct proportions and realistic joint angles for the person in the input photo.
Replies from: TsviBT
comment by TsviBT · 2023-07-10T07:21:31.031Z · LW(p) · GW(p)

Unfortunately, more context is needed.

An LLM solves a mathematical problem by introducing a novel definition which humans can interpret as a compelling and useful concept.

I mean, I could just write a python script that prints out a big list of definitions of the form

"A topological space where every subset with property P also has property Q"

and having P and Q be anything from a big list of properties of subsets of topological spaces. I'd guess some of these will be novel and useful. I'd guess LLMs + some scripting could already take advantage of some of this. I wouldn't be very impressed by that (though I think I would be pretty impressed by the LLM being able to actually tell the difference between valid proofs in reasonable generality). There are some versions of this I'd be impressed by, though. Like if an LLM had been the first to come up with one of the standard notions of curvature, or something, that would be pretty crazy.
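
For concreteness, the kind of cheap generator I have in mind is something like this minimal sketch (the property list is a made-up placeholder, not a serious attempt at math):

```python
# Minimal sketch of the cheap definition-generator described above.
# The property list is a made-up placeholder.
import itertools

properties = [
    "compact", "connected", "closed", "dense", "discrete",
    "countable", "path-connected", "nowhere dense",
]

definitions = [
    f"A topological space where every subset that is {p} is also {q}"
    for p, q in itertools.permutations(properties, 2)
]

print(f"{len(definitions)} candidate definitions, e.g.:")
for d in definitions[:3]:
    print("-", d)
```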

An LLM which can be introduced to a wide variety of new concepts not in its training data, and after a few examples and/or clarifying questions is able to correctly use the concept to reason about something.

I haven't tried this, but I'd guess if you give an LLM two lists of things where list 1 is [things that are smaller than a microwave and also red] and list 2 is [things that are either bigger than a microwave, or not red], or something like that, it would (maybe with some prompt engineering to get it to reason things out?) pick up that "concept" and then use it, e.g. sorting a new item, or deducing from "X is in list 1" to "X is red". That's impressive (assuming it's true), but not that impressive.

On the other hand, if it hasn't been trained on a bunch of statements about angular momentum, and then it can--given some examples and time to think--correctly answer questions about angular momentum, that would be surprising and impressive. Maybe this could be experimentally tested, though I guess at great cost, by training a LLM on a dataset that's been scrubbed of all mention of stuff related to angular momentum (disallowing math about angular momentum, but allowing math and discussion about momentum and about rotation), and then trying to prompt it so that it can correctly answer questions about angular momentum. Like, the point here is that angular momentum is a "new thing under the sun" in a way that "red and smaller than microwave" is not a new thing under the sun.
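
If someone ran that experiment, the scrubbing step might start with something as crude as a keyword filter. A minimal sketch (a real scrub would need far more care, since formulas and paraphrases can invoke angular momentum without ever using the phrase):

```python
# Minimal sketch of a crude first-pass scrub, as a keyword/pattern filter.
# A real scrub would also need to catch formulas like "L = r x p" written in
# other notations, derivations, etc.; that harder filtering is left out here.
import re

BLOCKED_PATTERNS = [
    r"angular\s+momentum",
    r"\bL\s*=\s*I\s*\\?omega",        # "L = I omega" style formulas
    r"\bL\s*=\s*r\s*(x|×)\s*p",       # "L = r x p" cross-product form
]
blocked_re = re.compile("|".join(BLOCKED_PATTERNS), flags=re.IGNORECASE)

def keep_document(text: str) -> bool:
    """Keep documents with no angular-momentum-specific content.
    Plain momentum and rotation are allowed through."""
    return blocked_re.search(text) is None

docs = [
    "Momentum p = m v is conserved in collisions.",
    "A gyroscope resists changes to its angular momentum.",
    "The wheel rotates at 30 rpm about a fixed axis.",
]
print([d for d in docs if keep_document(d)])
```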

Replies from: Roman Leventov
comment by Roman Leventov · 2023-07-10T08:50:28.958Z · LW(p) · GW(p)

On the other hand, if it hasn't been trained on a bunch of statements about angular momentum, and then it can--given some examples and time to think--correctly answer questions about angular momentum, that would be surprising and impressive. Maybe this could be experimentally tested, though I guess at great cost, by training a LLM on a dataset that's been scrubbed of all mention of stuff related to angular momentum (disallowing math about angular momentum, but allowing math and discussion about momentum and about rotation), and then trying to prompt it so that it can correctly answer questions about angular momentum. Like, the point here is that angular momentum is a "new thing under the sun" in a way that "red and smaller than microwave" is not a new thing under the sun.

Until recently, I thought that the fact that LLMs are not strong and efficient online (or quasi-online, i.e., few-example) conceptual learners was a "big obstacle" for AGI or ASI. I no longer think so. Yes, humans evidently still have an edge here: humans can somehow relatively quickly and efficiently "surgeon" their world models to accommodate new concepts and use them in a far-ranging way. (Though I suspect we over-glorify this ability in humans; more realistically it takes weeks or even months, not hours, for humans to fully integrate new conceptual frameworks into their thinking. Still, they should be able to do so without many external examples, which will be lacking if the concept is actually very new.)

I no longer think this handicaps LLMs much. New powerful concepts that permeate practical and strategic reasoning in the real world are rarely invented, and they spread through society slowly. Just being a skillful user of existing concepts that are amply described in books and elsewhere in the training corpus of LLMs should be enough to gain the capacity for recursive self-improvement, and for quite far superhuman intelligence/strategy/agency more generally.

Then, imagine that superhuman LLM-based agents "won" and killed all humans. Even if they themselves didn't (or couldn't!) invent ML paradigms for efficient online concept learning, they could still sort of hack around it: experimenting with new concepts, running a lot of simulations with them, checking these simulations against reality (filtering out incoherent/bad concepts), then re-training themselves on the results of these simulations, and then giving text labels to the features found in their own DNNs to mark the corresponding concepts.

Replies from: TsviBT
comment by TsviBT · 2023-07-10T22:54:46.582Z · LW(p) · GW(p)

Just being a skillful user of existing concepts

I don't think they're skilled users of existing concepts. I'm not saying it's an "obstacle", I'm saying that this behavior pattern would be a significant indicator to me that the system has properties that make it scary.

comment by Roman Leventov · 2023-07-10T08:36:13.191Z · LW(p) · GW(p)

Analogies: "Emergent Analogical Reasoning in Large Language Models [LW · GW]"

Replies from: TsviBT
comment by TsviBT · 2023-07-10T22:55:48.253Z · LW(p) · GW(p)

Not what I mean by analogies.

comment by Ben Pace (Benito) · 2023-10-18T00:00:11.081Z · LW(p) · GW(p)

I think the argument here basically implies that language models will not produce any novel, useful concepts in any existing industries or research fields that get substantial adoption (e.g. >10% of ppl use it, or a widely cited paper) in those industries, in the next 3 years, and if it did this, then the end would be nigh (or much nigher).

To be clear, you might get new concepts from language models about language if you nail some Chris Olah style transparency work, but the language model itself will not output ones that aren't about language in the text.

Replies from: TsviBT
comment by TsviBT · 2023-10-24T01:31:24.120Z · LW(p) · GW(p)

I roughly agree. As I mentioned to Adele, I think you could get sort of lame edge cases where the LLM kinda helped find a new concept. The thing that would make me think the end is substantially nigher is if you get a model that's making new concepts of comparable quality at a comparable rate to a human scientist in a domain in need of concepts.

if you nail some Chris Olah style transparency work

Yeah that seems right. I'm not sure what you mean by "about language". Sorta plausibly you could learn a little something new about some non-language domain that the LLM has seen a bunch of data about, if you got interpretability going pretty well. In other words, I would guess that LLMs already do lots of interesting compression in a different way than humans do it, and maybe you could extract some of that. My quasi-prediction would be that those concepts

  1. are created using way more data than humans use for many of their important concepts; and
  2. are weirdly flat, and aren't suitable out of the box for a big swath of the things that human concepts are suitable for.
comment by Vladimir_Nesov · 2023-07-08T13:14:03.650Z · LW(p) · GW(p)

When there is a simple enlightening experiment that can be constructed out of available parts (including theories that inform construction), it can be found with expert intuition, without clear understanding. When there are no new parts for a while, and many experiments have been tried, this is evidence that further blind search becomes less likely to produce results, that more complicated experiments are necessary that can only be designed with stronger understanding.

Recently, there are many new parts for AI tinkering, some themselves obtained from blind experimentation (scaling gives new capabilities that couldn't be predicted to result from particular scaling experiments). Not enough time and effort has passed to rule out further significant advancement by simple tinkering with these new parts, and scaling itself hasn't run out of steam yet, it by itself might deliver even more new parts for further tinkering.

So while it's true that there is no reason to expect specific advancements, there is still reason to expect advancements of unspecified character for at least a few years, more of them than usually. This wave of progress might run out of steam before AGI, or it might not, there is no clear theory to say which is true. Current capabilities seem sufficiently impressive that even modest unpredictable advancement might prove sufficient, which is an observation that distinguishes the current wave of AI progress from previous ones.

Replies from: TsviBT
comment by TsviBT · 2023-07-09T00:25:45.849Z · LW(p) · GW(p)

I think the current wave is special, but that's a very far cry from being clearly on the ramp up to AGI.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-07-09T07:00:48.785Z · LW(p) · GW(p)

The point is, it's still a matter of intuitively converting impressiveness of current capabilities and new parts available for tinkering that hasn't been done yet into probability of this wave petering out before AGI. The arguments for AGI "being overdetermined" can be amended to become arguments for particular (kinds of) sequences of experiments looking promising, shifting the estimate once taken into account. Since failure of such experiments is not independent, the estimate can start going down as soon as scaling stops producing novel capabilities, or reaches the limits of economic feasibility, or there is a year or two without significant breakthroughs.

Right now, it's looking grim, but a claim I agree with is that planning for the possibility of AGI taking 20+ years is still relevant; nobody actually knows it's inevitable. I think the following few years will change this estimate significantly either way.

Replies from: TsviBT
comment by TsviBT · 2023-07-10T06:19:17.496Z · LW(p) · GW(p)

I'm not really sure whether or not we disagree. I did put "3%-10% probability of AGI in the next 10-15ish years".

I think the following few years will change this estimate significantly either way.

Well, I hope that this is a one-time thing. I hope that if in a few years we're still around, people go "Damn! We maybe should have been putting a bit more juice into decades-long plans! And we should do so now, though a couple more years belatedly!", rather than going "This time for sure!" and continuing to not invest in the decades-long plans. My impression is that a lot of people used to work on decades-long plans and then shifted recently to 3-10 year plans, so it's not like everyone's being obviously incoherent. But I also have an impression that the investment in decades-plans is mistakenly low; when I propose decades-plans, pretty nearly everyone isn't interested, with their cited reason being that AGI comes within a decade.

comment by Max H (Maxc) · 2023-07-08T17:51:39.424Z · LW(p) · GW(p)

I suspect there's a type of deep, thorough, precise understanding that one person (the intervener) can have of another person (the intervened), which makes it so that the intervener can confront the intervened with something like "If you and people you know succeed at what you're trying to do, everyone will die.", and the intervened can hear this.

 

+1 to this being possible, but really really hard, even when the goal is to intervene on just one specific person.

A further complication is that enough people have to hear this message, such that there is not a large enough group of "holdouts" left with the means and inclination to press on anyway. How large and well-resourced such a holdout group would have to be to pose an existential threat to humanity, even when most others have correctly understood the danger, gets back to the question of timelines, the absolute difficulty of inventing AGI, the difficulty of inventing AGI relative to aligning it, and the willingness / ability of non-holdouts to impose effective restrictions backed by credible enforcement.

comment by Roman Leventov · 2023-07-10T08:33:30.599Z · LW(p) · GW(p)

There are many more interventions that might work on decades-long timelines that you didn't mention:

  • Collective intelligence/sense-making/decision-making/governance/democracy innovation (and its introduction in organisations, communities, and societies on larger scales), such as https://cip.org
  • Innovation in social network technology that fosters better epistemics and social cohesion rather than polarisation
  • Innovation in economic mechanisms to combat the deficiencies and blind spots of free markets and the modern money-on-money return financial system, such as various crypto projects, or https://digitalgaia.earth
  • Fixing other structural problems of the internet and money infrastructure that exacerbate risks: too much interconnectedness, too much centralisation of information storage, money is traceless, as I explained in this comment [LW(p) · GW(p)]. Possible innovations: https://www.inrupt.com/, https://trustoverip.org/ , other trust-based (cryptocurrency) systems.
  • Other infrastructure projects that might address certain risks, notably https://worldcoin.org, albeit this is a double-edged sword (could be used for surveillance?)
  • OTOH, fostering better interconnectedness between humans, and between humans and computers, primarily via brain-computer interfaces such as Neuralink. (Also, I think that in the mid- to long-term, a human-AI merge is the only viable "good" outcome, for humanity at least.) However, this is a double-edged sword (could be used by AI to manipulate humans or quickly take over humans?)
comment by Richard_Ngo (ricraz) · 2023-07-10T05:26:57.238Z · LW(p) · GW(p)

FWIW I think that confrontation-worthy empathy and use of the phrase "everyone will die" to describe AI risk are approximately mutually exclusive with each other, because communication using the latter phrase results from a failure to understand communication norms [LW · GW].

(Separately I also think that "if we build AGI, everyone will die" is epistemically unjustifiable given current knowledge. But the point above still stands even if you disagree with that bit.)

Replies from: TsviBT
comment by TsviBT · 2023-07-10T06:49:51.962Z · LW(p) · GW(p)

What I mean by confrontation-worthy empathy is about that sort of phrase being usable. I mean, I'm not saying it's the best phrase, or a good phrase to start with, or whatever. I don't think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating.

This maybe isn't so related to what you're saying here, but I'd follow the policy of first making it common knowledge that you're reporting your inside views (which implies that you're not assuming that the other person would share those views); and then you state your inside views. In some scenarios you describe, I get the sense that Person 2 isn't actually wanting Person 1 to say more modest models, they're wanting common knowledge that they won't already share those views / won't already have the evidence that should make them share those views.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-07-10T12:37:32.009Z · LW(p) · GW(p)

"I don't think inserting Knightian uncertainty is that helpful; the object-level stuff is usually the most important thing to be communicating."

The main point of my post is that accounting for disagreements about Knightian uncertainly is the best way to actually communicate object level things, since otherwise people get sidetracked by epistemological disagreements.

"I'd follow the policy of first making it common knowledge that you're reporting your inside views"

This is a good step, but one part of the epistemological disagreements I mention above is that most people consider inside views to be a much less coherent category, and much less separable from other views, than most rationalists do. So I expect that more such steps are typically necessary.

"they're wanting common knowledge that they won't already share those views"

I think this is plausibly true for laypeople/non-ML-researchers, but for ML researchers it's much more jarring when someone is making very confident claims about their field of expertise, that they themselves strongly disagree with.

comment by Chipmonk · 2024-01-16T01:53:27.076Z · LW(p) · GW(p)

Just checking: are your timelines still this long?

Replies from: TsviBT
comment by TsviBT · 2024-01-18T23:00:36.625Z · LW(p) · GW(p)

Yes.

comment by Valerio · 2023-07-15T13:22:48.128Z · LW(p) · GW(p)

I like your arguments on AGI timelines, but the last section of your post feels like you are reflecting on something I would call "civilization improvement" rather than on a 20+ year plan for AGI alignment.

I am a bit confused by the way you are conflating "civilization improvement" with a strategy for alignment (when you discuss enhanced humans solving alignment, or discuss empathy in communicating a message "If you and people you know succeed at what you're trying to do, everyone will die").  Yes, given longer timelines, civilization improvement can play a big role in reducing existential risk including AGI x-risk, but I would prefer to sell the broad merits of interventions on their own, rather than squeeze them into a strategy for alignment from today's limited viewpoint. When making a multi-decade plan for civilization improvement, I think it is also important to consider the possibility of AGI-driven "civilization improvement", i.e. interventions will not only influence AGI development, but they may also be critically influenced by it. 

Finally, when considering strategy for alignment under longer timelines, people can have useful non-standard insights; see for example this discussion on AGI paradigms [LW · GW] and this post on agent foundations research [LW · GW].