# Any rebuttals of Christiano and AI Impacts on takeoff speeds?

post by SoerenMind · 2019-04-21T20:39:51.076Z · score: 47 (17 votes) · LW · GW · 2 comments

This is a question post.

## Contents

  Answers
30 Rob Bensinger
23 Lukas_Gloor
15 habryka
14 Søren Elverlin
11 Raemon
7 Donald Hobson
3 SoerenMind
None


14 months ago, Paul Christiano and AI Impacts both published forceful and well-received take-downs of many arguments for fast (discontinuous) takeoff. I haven’t seen any rebuttals that are written by established researchers, longer than comments, or otherwise convincing. The longer there is no response, the less weight I put on the outside view that proponents of fast takeoff may be right.

Where are the rebuttals? Did I miss them? Is the debate decided? Did nobody have time or motivation to write something? Is the topic too hard to explain?

Why rebuttals would be useful:

- Give the community a sense of the extent of expert disagreement to form outside views.

- Prioritization in AI policy, and to a lesser extent safety, depends on the likelihood of discontinuous progress. We may have more leverage in such cases, but this could be overwhelmed if the probability is low.

- Motivate more people to work on MIRI's research, which seems more important to solve early if there is fast takeoff.

answer by Rob Bensinger · 2019-05-12T01:28:50.738Z · score: 30 (9 votes) · LW · GW

MIRI folks are the most prominent proponents of fast takeoff, and we unfortunately haven't had time to write up a thorough response. Oli already quoted the quick comments [LW · GW] I posted from Nate and Eliezer last year, and I'll chime in with some of the factors that I think are leading to disagreements about takeoff:

• Some MIRI people (Nate is one) suspect we might already be in hardware overhang mode, or closer to that point than some other researchers in the field believe.
• MIRI folks tend to have different views from Paul about AGI, some of which imply that AGI is more likely to be novel and dependent on new insights. (Unfair caricature: Imagine two people in the early 20th century who don't have a technical understanding of nuclear physics yet, trying to argue about how powerful a nuclear-chain-reaction-based bomb might be. If one side were to model that kind of bomb as "sort of like TNT 3.0" while the other is modeling it as "sort of like a small Sun", they're likely to disagree about whether nuclear weapons are going to be a small v. large improvement over TNT. Note I'm just using nuclear weapons as an analogy, not giving an outside-view argument "sometimes technologies are discontinuous, ergo AGI will be discontinuous".)

This list isn't at all intended to be sufficiently-detailed or exhaustive.

I'm hoping we have time to write up more thoughts on this before too long, because this is an important issue (even given that we're trying to minimize the researcher time we put into things other than object-level deconfusion research). I don't want MIRI to be a blocker on other researchers making progress on these issues, though — it would be bad if people put a pause on hashing out takeoff issues for themselves (or put a pause on alignment research that's related to takeoff views) until Eliezer had time to put out a blog post. I primarily wanted to make sure people know that the lack of a substantive response doesn't mean that Nate+Eliezer+Benya+etc. agree with Paul on takeoff issues now, or that we don't think this disagreement matters. Our tardiness is because of opportunity costs and because our views have a lot of pieces to articulate.

answer by Lukas_Gloor · 2019-04-25T21:26:31.088Z · score: 23 (9 votes) · LW · GW

I’m reluctant to reply because it sounds like you’re looking for rebuttals by explicit proponents of hard takeoff who have thought a great deal about takeoff speeds, and neither of those applies to me. But I could sketch some intuitions for why the pieces by AI Impacts and by Christiano haven't felt wholly convincing to me. (I’ve never run these intuitions past anyone and don’t know if they’re similar to cruxes held by proponents of hard takeoff who are more confident in it than I am – therefore I hope people don't update much further against hard takeoff if they find the sketch below unconvincing.) I find it easiest to explain something by gesturing towards some loosely related “themes” rather than going through a structured argument, so here are some of those themes; maybe people will see underlying connections between them:

## Culture overhang

Shulman and Sandberg have argued that one way to get hard takeoff is via hardware overhang: a new algorithmic insight can be used immediately to its full potential, because much more hardware is available than one would need to surpass state-of-the-art performance with the new algorithms. I think there’s a similar dynamic at work with culture: If you placed an AGI into the stone age, it would be inefficient at taking over the world even with appropriately crafted output channels, because stone age tools (which include stone age humans the AGI could manipulate) are neither very useful nor reliable. It would be easier for an AGI to achieve influence in 1995, when the environment contained a greater variety of increasingly far-reaching tools. But with the internet being new, particular strategies to attain power (or even just rapidly acquire knowledge) were not yet available. Today, it is arguably easier than ever for an AGI to quickly and more-or-less single-handedly transform the world.

## Snapshot intelligence versus intelligence as learning potential

There’s a sense in which cavemen are similarly intelligent as modern-day humans. If we time-traveled back into the stone age, found the couples with the best predictors for having gifted children, gave these couples access to 21st century nutrition and childbearing assistance, and then took their newborns back into today’s world where they’d grow up in a loving foster family with access to high-quality personalized education, there’s a good chance some of those babies would grow up to be relatively ordinary people of close to average intelligence. Those former(?) cavemen and cavewomen would presumably be capable of dealing with many if not most aspects of contemporary life and modern technology.

However, there’s also a sense in which cavemen are very unintelligent compared to modern-day humans. Culture, education, possibly even things like the Flynn effect, etc. – these really do change the way people think and act in the world. Cavemen are incredibly uneducated and untrained concerning knowledge and skills that are useful in modern, tool-rich environments.

We can think of this difference as the difference between the snapshot of someone’s intelligence at the peak of their development and their (initial) learning potential. Cavemen and modern-day humans might be relatively close to each other in terms of the latter, but when considering their abilities at the peak of their personal development, modern humans are much better at achieving goals in tool-rich environments. I sometimes get the impression that proponents of soft takeoff underappreciate this difference when addressing comparisons between, for instance, early humans and chimpanzees (this is just a vague general impression which doesn’t apply to the arguments presented by AI Impacts or by Paul Christiano).

## How to make use of culture: The importance of distinguishing good ideas from bad ones

Both for productive engineers and creative geniuses, it holds that they could only have developed their full potential because they picked up useful pieces of insight from other people. But some people cannot tell the difference between high-quality information and low-quality information, or might make wrong use even of high-quality information, reasoning themselves into biased conclusions. An AI system capable of absorbing the entire internet but terrible at telling good ideas from bad ideas won't make too much of a splash (at least not in terms of being able to take over the world). But what about an AI system just slightly above some cleverness threshold for adopting an increasingly efficient information diet? Couldn’t it absorb the internet in a highly systematic way rather than just soaking in everything indiscriminately, learning many essential meta-skills on its way, improving how it goes about the task of further learning?

## Small differences in learning potential have compounded benefits over time

If the child in the chair next to me in fifth grade was slightly more intellectually curious, somewhat more productive, and marginally better disposed to adopt a truth-seeking approach and self-image than I was, this could initially mean they score 100% and I score 95% on fifth-grade tests – no big difference. But as time goes on, their productivity gets them to read more books, their intellectual curiosity and good judgment get them to read more unusually useful books, and their cleverness gets them to integrate all this knowledge in better and increasingly more creative ways. I’ll reach a point where I’m just sort of skimming things because I’m not motivated enough to understand complicated ideas deeply, whereas they find it rewarding to comprehend everything that gives them a better sense of where to go next on their intellectual journey. By the time we graduate university, my intellectual skills are mostly useless, while they have technical expertise in several topics, can match or even exceed my thinking even in areas I specialized in, and get hired by some leading AI company. The point being: an initially small difference in dispositions becomes almost incomprehensibly vast over time.
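The compounding dynamic in this story is easy to check with a toy calculation (the 95%/100% starting scores and the per-year growth rates are invented purely for illustration):

```python
# Toy model: skill compounds multiplicatively, so a small edge in the
# growth rate dominates a small edge in the starting level over time.
def skill_after(years, start, growth_per_year):
    skill = start
    for _ in range(years):
        skill *= growth_per_year
    return skill

me = skill_after(15, start=95, growth_per_year=1.05)
peer = skill_after(15, start=100, growth_per_year=1.10)

# The initial gap was about 5%; after 15 years the peer is more than
# twice as far along.
print(round(peer / me, 2))
```

The exact numbers are meaningless; the point is that the ratio grows without bound as the horizon lengthens, which is the "almost incomprehensibly vast over time" claim in miniature.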

## Knowing how to learn strategically: A candidate for secret sauce??

(I realize that in this title/paragraph, the word "knowing" is meant both in the sense of "knowing how to do x" and "being capable of executing x very well." It might be useful to try to disentangle this some more.) The standard AI foom narrative sounds a bit unrealistic when discussed in terms of some AI system inspecting itself and remodeling its inner architecture in a very deliberate way driven by architectural self-understanding. But what about the framing of being good at learning how to learn? There’s at least a plausible-sounding story we can tell where such an ability might qualify as the “secret sauce” that gives rise to a discontinuity in the returns of increased AI capabilities. In humans – and admittedly this might be too anthropomorphic – I'd think about it in this way: If my 12-year-old self had been brain-uploaded into a suitable virtual reality, copied many times, and given the task of devouring the entire internet in 1,000 years of subjective time (with no aging) to acquire enough knowledge and skill to produce novel intellectual contributions useful to the world, the result probably wouldn’t be much of a success. If we imagined the same with my 19-year-old self, there’s a high chance the result wouldn’t be useful either – but also some chance it would be extremely useful. Assuming, for the sake of the comparison, that a copy clan of 19-year-olds can produce highly beneficial research outputs this way, and a copy clan of 12-year-olds can’t, what does the landscape look like in between? I don’t find it evident that the in-between is gradual. I think it’s at least plausible that there’s a jump once the copies reach a level of intellectual maturity to make plans which are flexible enough at the meta-level and divide labor sensibly enough to stay open to reassessing their approach as time goes on and they learn new things.
Maybe all of that is gradual, and there are degrees of dividing labor sensibly or of staying open to reassessing one’s approach – but that doesn’t seem evident to me. Maybe this works more as an on/off thing.

## How could natural selection produce on/off abilities?

It makes sense to be somewhat suspicious about any hypotheses according to which the evolution of general intelligence made a radical jump in Homo sapiens, creating thinking that is "discontinuous" from what came before. If knowing how to learn is an on/off ability that plays a vital role in the ways I described above, how could it evolve?
We're certainly also talking about culture here, not just genes. And via the Baldwin effect, natural selection can move individuals closer towards picking up surprisingly complex strategies via learning from their environment. At this point, at the latest, my thinking becomes highly speculative. But here's one hypothesis: In its generalization, this effect is about learning how to learn. And maybe there is something like a "broad basin of attraction" (inspired by Christiano's broad basin of attraction for corrigibility) for robustly good reasoning / knowing how to learn. Picking up some of the right ideas early on, combined with being good at picking up things in general, produces in people an increasingly better sense of how to order and structure other ideas, and over time the best human learners start to increasingly resemble each other, having homed in on the best general strategies.

---

## Hard takeoff without a discontinuity

(This argument is separate from all the other arguments above.) Here’s something I never really understood about the framing of the hard vs. soft takeoff discussion. Let’s imagine a graph with inputs such as algorithmic insights and compute/hardware on the x-axis, and general intelligence (it doesn’t matter for my purposes whether we use learning potential or snapshot intelligence) on the y-axis. Typically, the framing is that proponents of hard takeoff believe that this graph contains a discontinuity where the growth mode changes, and suddenly the returns (for inputs such as compute) are vastly higher than the outside view would have predicted, meaning that the graph makes a jump upwards in the y-axis. But what about hard takeoff without such a discontinuity? If our graph starts to be steep enough at the point where AI systems reach human-level research capabilities and beyond, then that could in itself allow for some hard (or "quasi-hard") takeoff. After all, we are not going to be sampling points (in the sense of deploying cutting-edge AI systems) from that curve every day – that simply wouldn't work logistically even granted all the pressures to be cutting-edge competitive. If we assume that we only sample points from the curve every two months, for instance, is it possible that for whatever increase in compute and algorithmic insights we’d get in those two months, the differential on the y-axis (some measure of general intelligence) could be vast enough to allow for attaining a decisive strategic advantage (DSA) from being first? I don’t have strong intuitions about what the offense-defense balance will shift to once we are close to AGI, but it at least seems plausible that it turns more towards offense, in which case arguably a lower differential is needed for attaining a DSA. 
In addition, based on the classical arguments put forward by researchers such as Bostrom and Yudkowsky, it also seems at least plausible to me that we are potentially dealing with a curve that is very steep around the human level. So, if one AGI project is two months ahead of another project, and we for the sake of argument assume that there are no inherent discontinuities in the graph in question, it’s still not evident to me that this couldn’t lead to something that very much looks like hard takeoff, just without an underlying discontinuity in the graph.
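The sampling point above can be made concrete with a toy model (the two-week doubling time and the two-month deployment gap are invented purely for illustration):

```python
# A perfectly continuous capability curve: no discontinuity anywhere.
def capability(months, doubling_time_months):
    """Capability at a given time, doubling every `doubling_time_months`."""
    return 2 ** (months / doubling_time_months)

# Suppose capability doubles every two weeks (0.5 months) near human level.
# A project whose rival deploys two months later enjoys a 16x capability
# lead at deployment time, even though the underlying curve is smooth.
lead = capability(2.0, 0.5) / capability(0.0, 0.5)
print(lead)  # 2 ** (2.0 / 0.5) = 16.0
```

Whether a 16x lead suffices for a DSA depends on the offense-defense balance, as discussed above; the sketch only shows that discrete deployment of a smooth but steep curve can look like a jump.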

answer by habryka · 2019-05-03T04:07:57.709Z · score: 15 (4 votes) · LW · GW

Robby made this post with short perspectives from Nate and Eliezer: https://www.lesswrong.com/posts/X5zmEvFQunxiEcxHn [LW · GW]

Copied here to make it easier to read (full text of the post):

This isn't a proper response to Paul Christiano [LW · GW] or Katja Grace's recent writings about takeoff speed, but I wanted to cross-post Eliezer's first quick comments on Katja's piece someplace more linkable than Twitter:

There's a lot of steps in this argument that need to be spelled out in more detail. Hopefully I get a chance to write that up soon. But it already raises the level of debate by a lot, for which I am grateful.
E.g. it is not intuitive to me that "But evolution wasn't trying to optimize for STEM ability" is a rejoinder to "Gosh hominids sure got better at that quickly." I can imagine one detailed argument that this might be trying to gesture at, but I don't know if I'm imagining right.
Similarly it's hard to pin down which arguments say "Average tech progress rates tell us something about an underlying step of inputs and returns with this type signature" and which say "I want to put the larger process in this reference class and demand big proof burdens."

I also wanted to caveat: Nate's experience is that the label "discontinuity" is usually assigned to misinterpretations of his position on AGI, so I don't want to endorse this particular framing of what the key question is. Quoting Nate from a conversation I recently had with him (not responding to these particular posts):

On my model, the key point is not "some AI systems will undergo discontinuous leaps in their intelligence as they learn," but rather, "different people will try to build AI systems in different ways, and each will have some path of construction and some path of learning that can be modeled relatively well by some curve, and some of those curves will be very, very steep early on (e.g., when the system is first coming online, in the same way that the curve 'how good is Google’s search engine' was super steep in the region between 'it doesn’t work' and 'it works at least a little'), and sometimes a new system will blow past the entire edifice of human knowledge in an afternoon shortly after it finishes coming online." Like, no one is saying that Alpha Zero had massive discontinuities in its learning curve, but it also wasn't just AlphaGo Lee Sedol but with marginally more training: the architecture was pulled apart, restructured, and put back together, and the reassembled system was on a qualitatively steeper learning curve.
My point here isn't to throw "AGI will undergo discontinuous leaps as they learn" under the bus. Self-rewriting systems likely will (on my models) gain intelligence in leaps and bounds. What I’m trying to say is that I don’t think this disagreement is the central disagreement. I think the key disagreement is instead about where the main force of improvement in early human-designed AGI systems comes from — is it from existing systems progressing up their improvement curves, or from new systems coming online on qualitatively steeper improvement curves?

Katja replied on Facebook: "FWIW, whenever I am talking about discontinuities, I am usually thinking of e.g. one system doing much better than a previous system, not discontinuities in the training of one particular system—if a discontinuity in training one system does not make the new system discontinuously better than the previous system, then I don't see why it would be important, and if it does, then it seems more relevant to talk about that."

answer by Søren Elverlin · 2019-04-25T10:58:01.771Z · score: 14 (6 votes) · LW · GW

There is a recording of my presentation here: https://youtu.be/7ogJuXNmAIw

My notes from the discussion are reproduced below:

We liked the article quite a lot. There was a surprising number of new insights for an article purporting to just collect standard arguments.

The definition of fast takeoff seemed somewhat non-standard, conflating 3 things: speed as measured in clock-time, continuity/smoothness around the threshold where AGI reaches human baseline, and locality. These 3 questions are closely related, but not identical, and some precision would be appreciated. In fairness, the article was posted on Paul Christiano's "popular" blog, not his "formal" blog.

The degree to which we can build universal / general AIs right now was a point of contention. Our (limited) understanding is that most AI researchers would disagree with Paul Christiano about whether we can build a universal or general AI right now. Paul Christiano's arguments seem to rest on our ability to trade off universality against other factors, but if (as we believe) universality is still mysterious, this tradeoff is not possible.

There was some confusion about the relationship between "Universality" and "Generality". Possibly, a "village idiot" is above the level of generality (passes the Turing test, can make coffee) whereas he would not be at the "Universality" level (unable to self-improve to superintelligence, even given infinite time). It is unclear whether Paul Christiano would agree with this.

The comparison between humans and chimpanzees was discussed, and related to the argument from Human Variation, which seems to be stronger. The difference between a village idiot and Einstein is also large, and the counter-argument about what evolution cares about seems not to hold here.

Paul Christiano asked for a canonical example of a key insight enabling an unsolvable problem to be solved. An example would be my matrix multiplication example (https://youtu.be/5DDdBHsDI-Y). Here, a series of 4 key insights turns the problem from requiring a decade, to a year, to a day, to a second. While the example is neither canonical nor precisely what Paul Christiano asks for, it does point to a way to get intuition about the "key insight": grab a pen and paper, and try to do matrix multiplication faster than O(n^3). It is possible, but far from trivial.
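For readers who want to try the exercise, here is the textbook O(n^3) algorithm the insights improve on (a plain reference sketch, not code from the video):

```python
def matmul_naive(A, B):
    """Textbook matrix multiplication: n^3 scalar multiplies for n x n inputs."""
    n, m, p = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "inner dimensions must match"
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):  # i-k-j loop order walks B row-wise (cache-friendlier)
            a = A[i][k]
            for j in range(p):
                C[i][j] += a * B[k][j]
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Every algorithm in the video has to beat the cubic scaling of these three nested loops.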

For the deployment lag ("Sonic Boom") argument, a factor that can complicate the tradeoff is secrecy. If deployment causes you to lose the advantages of secrecy, the tradeoffs described could look much worse.

A number of the arguments for a fast takeoff did seem to aggregate, in one specific way: If our prior is for a "quite fast" takeoff, the arguments push us towards expecting a "very fast" takeoff. This is my personal interpretation, and I have not really formalized it. I should get around to that some day.

comment by paulfchristiano · 2019-06-04T02:15:16.195Z · score: 6 (3 votes) · LW · GW
An example would be my Matrix Multiplication example (https://youtu.be/5DDdBHsDI-Y [LW · GW]). Here, a series of 4 key insights turn the problem from requiring a decade, to a year, to a day, to a second.

In fact Strassen's algorithm is worse than textbook matrix multiplication for most reasonably sized matrices, including all matrices that could be multiplied in the 70s. Even many decades later the gains are still pretty small (and it's only worth doing for unusually giant matrix multiplies). As far as I am aware nothing more complicated than Strassen's algorithm is ever used in practice. So it doesn't seem like an example of a key insight enabling a problem to be solved.

We could imagine an alternate reality in which large matrix multiplications became possible only after we discovered Strassen's algorithm. But I think there is a reason that reality is alternate.

Overall I think difficult theory and clever insights are sometimes critical, perhaps often enough to more than justify our society's tiny investment in them, but it's worth having a sense of how exceptional these cases are.
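For concreteness, Strassen's recursion trades eight recursive multiplies for seven. A minimal sketch (square matrices with power-of-two size only; a practical version would switch to the textbook algorithm below some crossover size, which is the crossover being debated here):

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply square matrices whose size is a power of two.
    Seven recursive multiplies instead of eight gives O(n^2.81)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split each matrix into four quadrants.
    a11 = [row[:h] for row in A[:h]]; a12 = [row[h:] for row in A[:h]]
    a21 = [row[:h] for row in A[h:]]; a22 = [row[h:] for row in A[h:]]
    b11 = [row[:h] for row in B[:h]]; b12 = [row[h:] for row in B[:h]]
    b21 = [row[:h] for row in B[h:]]; b22 = [row[h:] for row in B[h:]]
    # Strassen's seven products.
    m1 = strassen(add(a11, a22), add(b11, b22))
    m2 = strassen(add(a21, a22), b11)
    m3 = strassen(a11, sub(b12, b22))
    m4 = strassen(a22, sub(b21, b11))
    m5 = strassen(add(a11, a12), b22)
    m6 = strassen(sub(a21, a11), add(b11, b12))
    m7 = strassen(sub(a12, a22), add(b21, b22))
    # Recombine into the quadrants of the result.
    c11 = add(sub(add(m1, m4), m5), m7)
    c12 = add(m3, m5)
    c21 = add(m2, m4)
    c22 = add(sub(add(m1, m3), m2), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The extra additions and the recursion overhead are why the asymptotic win only pays off for large matrices, which is the empirical point at issue.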

comment by Søren Elverlin (soren-elverlin-1) · 2019-06-09T19:25:32.512Z · score: 9 (3 votes) · LW · GW

Wikipedia claims that "it is faster in cases where n > 100 or so" https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm
The introduction of this Wikipedia article seems to describe these improvements as practically useful.

In my video, I describe one of the breakthroughs in matrix multiplication after Strassen as "Efficient parallelization, like MapReduce, in the nineties". This insight is used in practice, though some of the other improvements I mention are not practical.

In the section "Finding the secret sauce", you asked for a canonical historical example of an insight having immediate dramatic effects. The canonical example is "nuclear weapons", but this does not seem to precisely satisfy your requirements. While this example is commonly used, I'm not too fond of it, which is why I substituted my own.

My video "If AGI was Matrix Multiplication" does not claim that fast matrix multiplication is a particularly impressive intellectual breakthrough. It is a moderate improvement, but I show that such moderate improvements are sufficient to trigger an intelligence explosion.

If we wish to predict the trajectory of improvements to the first AGI algorithm (hypothetically), we might choose as reference class "Trajectories of improvements to all problems". With this reference class, it looks like most improvement happens slowly, continuously and with a greater emphasis on experience rather than insights.

We might instead choose the reference class "Trajectories of improvement to algorithms", which is far narrower, but still rich in examples. Here a book on the history of algorithms will provide many examples of improvements due to difficult theory and clever insights, with matrix multiplication not standing out as particularly impressive. Presumably, most of these trajectories are sufficient for an intelligence explosion, if the trajectory were to be followed by the first AGI algorithm. However, a history book is a highly biased view of the past, as it will tend to focus on the most impressive trajectories. I am unsure about how to overcome this problem.

An even narrower reference class would be "Trajectories of improvement to AI algorithms", where training artificial neural networks is an example of a trajectory that would surely be explosive. I intuitively feel that this reference class is too narrow, as the first AGI algorithm could be substantially different from previous AI algorithms.

answer by Raemon · 2019-04-21T20:57:12.022Z · score: 11 (7 votes) · LW · GW

[edit: no longer endorse the original phrasing of my opening paragraph, but still seems useful to link to past discussion]

One key thing is that AFAICT, when Paul says 'slow takeoff' what he actually means is 'even faster takeoff, but without a sharp discontinuity', or something like that. So be careful about how you interpret the debate.

(I also think there's been fairly continuous debate throughout many other threads. Importantly, I don't think this is a single concrete disagreement; it's more like a bunch of subtle disagreements interwoven with each other. Many posts and threads, on LW and in other channels, seem to me to be about disentangling those disagreements.

I think the discussion of Paul's Research Agenda [LW · GW] FAQ (NOT written by Paul), including the comment by Eliezer, is one of the more accessible instances of that, although I'm not sure if it directly bears on your question.)

comment by rohinmshah · 2019-04-22T16:31:50.189Z · score: 15 (7 votes) · LW · GW

I just read through those comments, and didn't really find any rebuttals. Most of them seemed like clarifications, terminology disagreements, and intuitions without supporting arguments. I would be hard-pressed to distill that discussion into anything close to a response.

One key thing is that AFAICT, when Paul says 'slow takeoff' what he actually means is 'even faster takeoff, but without a sharp discontinuity', or something like that.

Yes, but nonetheless these are extremely different views with large implications for what we should do.

Fwiw, my epistemic state is similar to SoerenMind's. I basically believe the arguments for slow/continuous takeoff, haven't fully updated towards them because I know many people still believe in fast takeoff, but am surprised not to have seen a response in over a year. Most of my work now takes continuous takeoff as a premise (because it is not a good idea to premise on fast takeoff when I don't have any inside-view model that predicts fast takeoff).

comment by Raemon · 2019-04-22T20:12:52.361Z · score: 10 (5 votes) · LW · GW

Yeah. Rereading the thread I agree it's not as relevant to this as I thought.

I think a dedicated response would be good.

I do think, when/if such a response comes, it would be valuable to take the opportunity to frame the debate more in terms of "sharp vs smooth takeoff" or "discontinuous vs continuous".

comment by Raemon · 2019-04-25T02:40:28.448Z · score: 14 (4 votes) · LW · GW

BTW, I had an interesting meta-experience with this thread, where at first, when I was called out for making a false/irrelevant claim, I felt bad (in particular since I saw I had gotten downvoted for it) and felt an impulse to justify the original claim.

Then I bucked up, edited the original comment, and wrote the followup comment acknowledging the mistake. A short while later I felt good that the followup comment was upvoted.

This made me overall feel good about LessWrong culture. Admitting mistakes even in small places naturally hurts, and I'm glad that we have good systems to incentivize it. :)

[then I made this self congratulatory meta comment which ummmm ]

comment by SoerenMind · 2019-04-22T11:00:36.676Z · score: 5 (3 votes) · LW · GW

Thanks. IIRC the comments didn't feature much disagreement, and there was little engagement from established researchers. I didn't find much of either in other threads, so I'm not sure if I should infer that little disagreement exists.

Re Paul's definition, he expects there will be years between 50% and 100% GDP growth rates. I think a lot of people here would disagree but I'm not sure.

answer by Donald Hobson · 2019-04-22T19:35:35.775Z · score: 7 (3 votes) · LW · GW

When an intelligence builds another intelligence in a single direct step, the output intelligence is a function f(i, r) of the input intelligence i and the resources used r. This function is clearly increasing in both i and r. Set r to be a reasonably large level of resources, e.g. plenty of flops and 20 years to think about it. A low input intelligence, e.g. a dog, would be unable to make something smarter than itself: f(dog, r) < dog. A team of experts (by the assumption that ASI is made) can make something smarter than themselves: f(experts, r) > experts. So there must be a fixed point x with f(x, r) = x. The question then becomes: how powerful is a pre-fixed-point AI? Clearly less good at AI research than a team of experts. As there is no reason to think that AI research is uniquely hard for AI, and there are some reasons to think it might be easier, or more prioritized, then if it can't beat our AI researchers, it can't beat our other researchers. It is unlikely to make any major science or technology breakthroughs.
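Donald's argument treats output intelligence as an increasing function of input intelligence that lies below the identity line for dogs and above it for expert teams, forcing a fixed point in between. A toy numerical version (the particular curve f(i) = i ** 1.1 is entirely invented; only its increasing shape and its crossing of the line y = i matter):

```python
# f(i) = i ** 1.1 is increasing, sits below the line y = i for 0 < i < 1,
# and above it for i > 1, so its fixed point is at i = 1.
def f(i):
    return i ** 1.1

def iterate(i, steps):
    """Repeatedly let the current intelligence build the next one."""
    for _ in range(steps):
        i = f(i)
    return i

below = iterate(0.9, 50)  # starts below the fixed point: improvement fizzles
above = iterate(1.1, 50)  # starts above it: improvement compounds explosively

print(below < 0.01, above > 1000)  # True True
```

The qualitative behavior (fizzle below the fixed point, explosion above it) is what the argument needs; the specific curve and the 50 iterations are arbitrary.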

I reckon that the derivative of output with respect to input intelligence is large (>10), because on an absolute scale the difference between an IQ 90 and an IQ 120 human is quite small, but I would expect any attempt at AI made by the latter to be much better. In a world where the limiting factor is researcher talent, not compute, the AI can get the compute it needs for a self-improvement step in hours (seconds? milliseconds?). As the lumpiness of innovation puts the first post-fixed-point AI a non-exponentially-tiny distance ahead (most innovations are at least 0.1% better than the state of the art in a fast-moving field), a handful of cycles of recursive self-improvement (<1 day) is enough to get the AI into the seriously overpowered range.

The question of economic doubling times would depend on how fast an economy can grow when tech breakthroughs are limited by human researchers. If we happen to have cracked self replication at about this point, it could be very fast.

comment by rohinmshah · 2019-04-23T05:51:41.133Z · score: 4 (2 votes) · LW · GW

Humans are already capable of self-improvement. This argument would suggest that the smartest human (or the one who was best at self-improvement, if you prefer) should have undergone fast takeoff and become seriously overpowered, but this doesn't seem to have happened.

In a world where the limiting factor is researcher talent, not compute

Compute is definitely a limiting factor currently. Why would that change?

comment by Donald Hobson (donald-hobson) · 2019-04-23T20:13:21.772Z · score: 4 (3 votes) · LW · GW

Humans are not currently capable of self improvement in the understanding-your-own-source-code sense. The "self improvement" section in bookstores doesn't change the hardware or the operating system; it basically adds more data.

Of course talent and compute both make a difference, in the sense that ∂f/∂I > 0 and ∂f/∂R > 0. I was talking about the subset of worlds where research talent is by far the most important.

In a world where researchers have little idea what they are doing, and are running a new AI every hour hoping to stumble across something that works, the result holds.

In a world where research involves months thinking about maths, then a day writing code, then an hour running it, this result holds.

In a world where everyone knows the right algorithm, but it takes a lot of compute, so AI research consists of building custom hardware and super-computing clusters, this result fails.

Currently, we are somewhere in the middle. I don't know which of these options future research will look like, although if it's the first one, friendly AI seems unlikely.

In most of the scenarios where the first smarter-than-human AI is orders of magnitude faster than a human, I would expect a hard takeoff. As we went from having no algorithms that could (say) tell a cat from a dog straight to having algorithms superhumanly fast at doing so, with no intermediate stage where an algorithm worked but took supercomputer hours, this seems like a plausible assumption.

comment by rohinmshah · 2019-04-23T21:59:31.781Z · score: 2 (1 votes) · LW · GW
Humans are not currently capable of self improvement in the understanding-your-own-source-code sense. The "self improvement" section in bookstores doesn't change the hardware or the operating system; it basically adds more data.

I'm not sure I understand this. Are you claiming ∂f/∂I is not positive for humans?

In most of the scenarios where the first smarter-than-human AI is orders of magnitude faster than a human, I would expect a hard takeoff.

This sounds like "conditioned on a hard takeoff, I expect a hard takeoff". It's not exactly saying that, since speed could be different from intelligence, but you need to argue for the premise too: nearly all of the arguments in the linked post could be applied to your premise as well.

In a world where researchers have little idea what they are doing, and are running a new AI every hour hoping to stumble across something that works, the result holds.
In a world where research involves months thinking about maths, then a day writing code, then an hour running it, this result holds.

Agreed on both counts, and again I think the arguments in the linked posts suggest that the premises are not true.

As we went from having no algorithms that could (say) tell a cat from a dog straight to having algorithms superhumanly fast at doing so, with no intermediate stage where an algorithm worked but took supercomputer hours, this seems like a plausible assumption.

This seems false to me. At what point would you say that we had AI systems that could tell a cat from a dog? I don't know the history of object recognition, but depending on how you operationalize it, I would guess the answer could be anywhere between the 60s and "we still can't do it". (Though it's also possible that people didn't care about object recognition until the 21st century, and only did other types of computer vision in the 60s-90s. It's quite strange that object recognition is an "interesting" task, given how little information you get from it.)

comment by Donald Hobson (donald-hobson) · 2019-04-24T13:00:25.546Z · score: 1 (1 votes) · LW · GW

My claim at the start had a typo in it. I am claiming that you can't make a human seriously superhuman with a good education, much like you can't get a chimp up to human level with lots of education and "self improvement". Serious genetic modification is another story, but at that point, you're building an AI out of protein.

It does depend on where you draw the line, but for a wide range of performance levels, we went from no algorithm at that level to a fast algorithm at that level. You couldn't get much better results just by throwing more compute at it.

comment by rohinmshah · 2019-04-24T15:40:35.747Z · score: 2 (1 votes) · LW · GW
I am claiming that you can't make a human seriously superhuman with a good education.

Is the claim that ∂f/∂I for humans goes down over time, so that human intelligence eventually hits an asymptote? If so, why won't that apply to AI?

Serious genetic modification is another story, but at that point, you're building an AI out of protein.

But it seems quite relevant that we haven't successfully done that yet.

You couldn't get much better results just by throwing more compute at it.

Okay, so my new story for this argument is:

• For every task T, there are bottlenecks that limit its performance, which could be compute, data, algorithms, etc.
• For the task of "AI research", compute will not be the bottleneck.
• So, once we get human-level performance on "AI research", we can apply more compute to get exponential recursive self-improvement.

Is that your argument? If so, I think my question would be "why would the bottleneck in point 2 vanish in point 3?" I think the only way this would be true is if the bottleneck was algorithms, and there was a discontinuous jump in the capability of algorithms. I agree that in that world you would see a hard/fast/discontinuous takeoff, but I don't see why we should expect that (again, the arguments in the linked posts argue against that premise).
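The bottleneck point here can be sketched with a toy model in which performance is limited by the scarcest input; the numbers and the min() form are entirely hypothetical illustrations, not anything claimed in the thread.

```python
# Toy bottleneck model: performance on a task is capped by whichever
# input (compute, data, algorithms) is scarcest. Units are arbitrary.
def performance(compute, data, algorithms):
    return min(compute, data, algorithms)

# If algorithms are the binding constraint, extra compute changes nothing:
base = performance(compute=100, data=100, algorithms=10)
more_compute = performance(compute=1000, data=100, algorithms=10)
print(base, more_compute)  # same performance despite 10x compute

# Only once the algorithmic constraint jumps does compute start to pay off:
after_jump = performance(compute=1000, data=100, algorithms=500)
print(after_jump)  # now limited by data instead
```

This is just a restatement of the objection: throwing compute at point 3 only yields takeoff if the non-compute bottleneck from point 2 has somehow disappeared.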

answer by SoerenMind · 2019-04-22T11:15:55.802Z · score: 3 (2 votes) · LW · GW

AFAICT Paul's definition of slow (I prefer gradual) takeoff basically implies that local takeoff and immediate unipolar outcomes are pretty unlikely. Many people still seem to put stock in local takeoff. E.g. Scott Garrabrant [LW · GW]. Zvi and Eliezer have said they would like to write rebuttals. So I'm surprised by the scarcity of disagreement that's written up.