What considerations influence whether I have more influence over short or long timelines?

post by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-05T19:56:12.147Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    12 johnswentworth
    4 Noa Nabeshima
    4 Daniel Kokotajlo
    4 Daniel Kokotajlo
    3 Daniel Kokotajlo
    3 Daniel Kokotajlo
    3 Daniel Kokotajlo
    3 Dagon

As my timelines have been shortening, I've been rethinking my priorities. As have many of my colleagues. It occurs to us that there are probably general considerations that should cause us to weight our efforts towards short-timelines plans or long-timelines plans (besides, of course, the probability of short and long timelines). For example, if timelines are short then maybe AI safety is more neglected, and therefore higher-EV for me to work on, so maybe I should be systematically more inclined to act as if timelines are short.

We are at this point very unsure what the most important considerations are, and how they balance. So I'm polling the hive mind!

Answers

answer by johnswentworth · 2020-11-07T19:51:36.170Z · LW(p) · GW(p)

Setting aside AI specifically, here are some considerations relevant to short-term vs long-term influence in general.

In general, we should expect to have more influence further in the future, just because a longer timescale means there are more possible things we can do. However, the longer the timescale, the harder it is to know what specifically to do, and the more attractive generic resource acquisition becomes as a strategy. Two conceptual models here:

  • In a chaotic system, a small change now can drive the system into many possible regions of the state-space on a long timescale, but it's extremely difficult to calculate which small change now will drive the system into a particular region later on, or to achieve the necessary precision. On short timescales, it's easier to calculate the impact of a decision, but the decision can't actually shift the system all that much.
  • In systems where instrumental convergence applies, the impact of our decisions now on our action space much later in time is mostly mediated by resource acquisition. The longer the timescale, the more time instrumental convergence has to kick in, so the more we should probably focus on generic resource acquisition.

Note that "resource acquisition" in this context does not necessarily mean money - this is definitely an area where knowledge is the real wealth. Rather, it would mean building general-purpose models and understanding the world, rather than something more specific to whatever AI trajectory we expect.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-07T20:46:08.478Z · LW(p) · GW(p)

Thanks. While it's true that shorter timescales mean less ability to shift the system, what I'm talking about is shorter timelines, in which we have plenty of ability to shift the system, because all the important stuff is happening in the next few years.

Roughly, I was thinking that conditional on long timelines, the thing to do is acquire resources (especially knowledge, as you say) and conditional on short timelines, the thing to do is... well, also a lot of that, but with a good deal more direct action of various sorts as well. And so I'm doing a bit of both strategies, weighted by my credences. But I'm thinking I should also weight by other things, and in particular I'm currently thinking I should weight short timelines a bit more than credences alone would suggest.
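
Here's a minimal sketch of what "weighting by credences plus other considerations" could look like as an explicit calculation. All the numbers are placeholders rather than my actual credences, and the "adjustment" factor stands in for whatever bundle of considerations this post is asking about (neglectedness, influence, tractability, etc.):

```python
# Toy effort-allocation model: split effort across plans in proportion to
# credence times an adjustment factor. All numbers below are placeholders.

scenarios = {
    "short timelines": {"credence": 0.4, "adjustment": 1.5},  # e.g. more neglected, more influence
    "long timelines":  {"credence": 0.6, "adjustment": 1.0},
}

raw_weights = {name: s["credence"] * s["adjustment"] for name, s in scenarios.items()}
total = sum(raw_weights.values())
effort_share = {name: w / total for name, w in raw_weights.items()}

print(effort_share)  # {'short timelines': 0.5, 'long timelines': 0.5} with these placeholders
```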

Replies from: johnswentworth
comment by johnswentworth · 2020-11-07T21:32:34.218Z · LW(p) · GW(p)

Frankly, in a short timelines scenario, the extent to which "we" have any ability to shift things is debatable. If there's an economic incentive to build the thing, and the technical pieces are basically public knowledge, that's pretty hard to stop.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-12T11:43:51.083Z · LW(p) · GW(p)

See, this is an important consideration for me! Currently I am unsure what the balance is. Here are some reasons to think "we" have more influence over short timelines:

--I think takeoff is more likely to be fast the longer the timelines, because there's more hardware overhang and more probability that some new paradigm shift or insight will be what precipitates the advance in AI capabilities. And I think on fast takeoff we will have fewer warning shots, and warning shots are our best hope.

--The longer it takes for TAI to arrive, the higher the chance that it gets built in Asia rather than the West, I think. And I for one have much more influence over the West.

If you can convince me that the balance of considerations favors me working on long-timelines plans (relative to my credences) I would be very grateful.

Replies from: johnswentworth
comment by johnswentworth · 2020-11-12T17:55:06.644Z · LW(p) · GW(p)

I think it's time to commit to a particular "we". Let's talk about you, and I'll throw in some of my personal considerations which may or may not generalize.

And I think on fast takeoff we will have fewer warning shots, and warning shots are our best hope.

The existence/nonexistence of warning shots is probably not in your control, unless I'm missing something. What is the thing within your control which is different in these two worlds?

For me, I think I'm a hell of a lot better at insights and new paradigms than the deep learning crowd, so far and away the most influence I'm likely to have is in the scenario where a new insight leads to fast takeoff, or at least a large advantage in a slow-takeoff world. I expect that finding the magic insight first is more tractable than moving a social/economic equilibrium.

(More generally, I think solving technical problems is a lot easier than moving social/economic equilibria, and "transform the social problem into a technical one" is a useful general-purpose technique.)

The longer it takes for TAI to arrive, the higher the chance that it gets built in Asia rather than the West, I think. And I for one have much more influence over the West.

I'm gonna have to be a little bit rude here, so apologies in advance.

Unless you have some large social media following that I didn't know about, your social influence over both Asia and the West seems pretty negligible, including in most deep learning researcher circles. At least from where you are now, the only way your social influence is likely to matter much is if people in this specific community end up with disproportionate influence over AI. That is the major variable which matters, if we're asking how your current social influence will impact AI. So the question is: will this specific community end up with more influence in a short-timeline or long-timeline world? And, given that this community ends up with disproportionate influence, how does your influence in the community impact the outcome?

(Of course, it's also possible that your influence will grow/shift over time, possibly over different dimensions, and that would change the calculation.)

I would also add that, more generally, the path by which most of your influence will operate is non-obvious, and figuring that out (as well as which actions change that path) seems useful. Value of information is high.

If you can convince me that the balance of considerations favors me working on long-timelines plans (relative to my credences) I would be very grateful.

TBH, I'm not really trying to convince you of anything in particular. You work on different sorts of things than I do. I'm relaying parts of my own reasoning, but I do not expect my own conclusions to apply to everyone, even given similar reasoning.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-13T06:27:41.810Z · LW(p) · GW(p)

Thanks. The nonexistence of warning shots is not in my control, but neither is the existence of a black hole headed for earth. I'm justified in acting as if there isn't a black hole, because if there is, we're pretty screwed anyway. I feel like maybe something similar is true (though to a lesser extent) of warning shots, but I'm not sure. If we have a 1% chance of success without warning shots and a 10% chance with warning shots, then I probably increase our overall chance of success more if I focus on warning shot scenarios.
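
To make that arithmetic concrete, here is a toy version of the calculation; the 1% and 10% figures are from above, while the effort multiplier and the candidate values of P(warning shots) are purely illustrative assumptions:

```python
# Toy model: suppose my effort multiplies the conditional success probability
# by 1.2 in whichever branch I focus on. The 1% / 10% baselines are from the
# comment above; the multiplier and P(warning shots) values are made up.

p_success_no_ws = 0.01   # P(success | no warning shots)
p_success_ws = 0.10      # P(success | warning shots)
boost = 1.2              # illustrative relative improvement from focused effort

for p_ws in (0.3, 0.5, 0.7):   # illustrative values of P(warning shots)
    gain_focus_ws = p_ws * p_success_ws * (boost - 1)
    gain_focus_no_ws = (1 - p_ws) * p_success_no_ws * (boost - 1)
    print(f"P(WS)={p_ws}: focusing on WS worlds adds {gain_focus_ws:.4f}; "
          f"focusing on no-WS worlds adds {gain_focus_no_ws:.4f}")
```

Under these assumptions, focusing on warning-shot worlds wins unless no-warning-shot worlds are roughly ten times as likely.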

Rudeness no problem; did I come across as arrogant or something?

I agree that that's the major variable. And that's what I had in mind when I said what I said: It seems to me that this community has more influence in short-timeline worlds than long-timeline worlds. Significantly more. Because long-timeline worlds involve AI being made by the CCP or something. But maybe I'm wrong about that! You seem to think that long-timeline worlds involve someone like you coming up with a new paradigm, and if that's true, then yeah maybe it'll still happen in the Bay after all. Seems somewhat plausible to me.

I agree that value of information is huge.

Replies from: johnswentworth
comment by johnswentworth · 2020-11-13T18:19:34.232Z · LW(p) · GW(p)

Rudeness no problem; did I come across as arrogant or something?

No, not at all; it's just that the criticism was almost directly "your status is not high enough for this". It's like I took the underlying implication which most commonly causes offense and said it directly. It was awkward because it did not feel like you were over-reaching in terms of status, even in appearance, but you happened to be reasoning in a way which (subtly) only made sense for a version of Daniel with a much larger public following. So I somehow needed to convey that without the subtext which such a thing would almost always carry.

That was kind of long-winded, but this was an unusually interesting case of word-usage.

It seems to me that this community has more influence in short-timeline worlds than long-timeline worlds. Significantly more.

Ah interesting. I haven't thought much about the influence of the community as a whole (as opposed to myself); I find this plausible, though I'm definitely not convinced yet. Off the top of my head, seems like it largely depends on the extent to which the rationalist community project succeeds in the long run (even in the weak sense of individual people going their separate ways and having outsized impact) or reverts back to the mean. Note that that is itself something which you and I probably do have an outsized impact on!

When I look at the rationalist community as a bunch of people who invest heavily in experimentation and knowledge and learning about the world, that looks to me like a group which is playing the long game and should have a growing advantage over time. On the other hand, if I look at the rationalist community as a group that is plurality software developers, with a disproportionate chunk of AI researchers... yeah, I can see where that would look like influence on AI in the short term.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-13T20:38:01.590Z · LW(p) · GW(p)

OK, cool. Well, I'm still a bit confused about why my status matters for this--it's relative influence that matters, not absolute influence. Even though my absolute influence may be low, it seems higher in the US than in Asia, and thus higher in short-timelines scenarios than long-timelines scenarios. Or so I'm thinking. (Because, as you say, my influence flows through the community.)

You might be right about the long game thing. I agree that we'll learn more and grow more in size and wealth over time. However, I think (a) the levers of the world will shift away from the USA, (b) the levers of the world will shift away from OpenAI and DeepMind and towards more distributed giant tech companies and government projects advised by prestigious academics (in other words, the usual centers of power and status will have more control over time; the current situation is an anomaly) and (c) various other things might happen that effectively impose a discount rate.

So I don't think the two ways of looking at the rationalist community are in conflict. They are both true. It's just that I think considerations a+b+c outweigh the improvement in knowledge, wealth, size etc. consideration.

Replies from: johnswentworth
comment by johnswentworth · 2020-11-13T21:32:58.211Z · LW(p) · GW(p)

Even though my absolute influence may be low, it seems higher in the US than in Asia, and thus higher in short-timelines scenarios than long-timelines scenarios. Or so I'm thinking.

Lemme sketch out a model here. We start with all the people who have influence on the direction of AI. We then break out two subgroups - US and Asia - and hypothesize that total influence of the US goes down over time, and total influence of Asia goes up over time. Then we observe that you are in the US group, so this bodes poorly for your own personal influence. However, your own influence is small, which means that your contribution to the US' total influence is small. This means your own influence can vary more-or-less independently of the US total; a delta in your influence is not large enough to significantly cause a delta in the US total influence. Now, if there were some reason to think that your influence was strongly correlated with the US total, then the US total would matter. And there are certainly things we could think of which might make that true, but "US total influence" does not seem likely to be a stronger predictor of "Daniel's influence" than any of 50 other variables we could think of. The full pool of US AI researchers/influencers does not seem like all that great a reference class for Daniel Kokotajlo - and as long as your own influence is small relative to the total, a reference class is basically all it is.

An analogy: GDP is only very weakly correlated with my own income. If I had dramatically more wealth - like hundreds of millions or billions - then my own fortunes would probably become more tied to GDP. But as it is, using GDP to predict my income is effectively treating the whole US population as a reference class for me, and it's not a very good reference class.

Anyway, the more interesting part...

I apparently have very different models of how the people working on AI are likely to shift over time. If everything were primarily resource-constrained, then I'd largely agree with your predictions. But even going by current trends, algorithmic/architectural improvements matter at least as much as raw resources. Giant organizations - especially governments - are not good at letting lots of people try their clever ideas and then quickly integrating the successful tricks into the main product. Big organizations/governments are all about coordinating everyone around one main plan, with the plan itself subject to lots of political negotiation and compromise, and then executing that plan. That's good for deploying lots of resources, but bad for rapid innovation.

Along similar lines, I don't think the primary world seat of innovation is going to shift from the US to China any time soon. China has the advantage in terms of raw population, but it's only a factor of 4 advantage; really not that dramatic a difference in the scheme of things. On the other hand, Western culture seems dramatically and unambiguously superior in terms of producing innovation, from an outside view. China just doesn't produce breakthrough research nearly as often. 20 years ago that could easily have been attributed to less overall wealth, but that becomes less and less plausible over time - maybe I'm just not reading the right news sources, but China does not actually seem to be catching up in this regard. (That said, this is all mainly based on my own intuitions, and I could imagine data which would change my mind.)

That said, I also don't think a US/China shift is all that relevant here either way; it's only weakly correlated with influence of this particular community. This particular community is a relatively small share of US AI work, so a large-scale shift would be dominated by the rest of the field, and the rationalist community in particular has many channels to grow/shrink in influence independent of the US AI community. It's essentially the same argument I made about your influence earlier, but this time applied to the community as a whole.

I do think "various other things might happen that effectively impose a discount rate" is highly relevant here. That does cut both ways, though: where there's a discount rate, there's a rate of return on investment, and the big question is whether rationalists have a systematic advantage in that game.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-14T14:01:19.411Z · LW(p) · GW(p)

I think I mostly agree with you about innovation, but (a) I think that building AI will increasingly be more like building a bigger airport or dam, rather than like inventing something new (resources are the main constraint; ideas are not, happy to discuss this further), and (b) I think that things in the USA could deteriorate, eating away at the advantage the USA has, and (c) I think algorithmic innovations created in the USA will make their way to China in less than a year on average, through various means.

Your model of influence is interesting, and different from mine. Mine is something like: "For me to positively influence the world, I need to produce ideas which then spread through a chain of people to someone important (e.g. someone building AI, or deciding whether to deploy AI). I am separated from important people in the USA by fewer degrees of separation, and moreover the links are much stronger (e.g. my former boss lives in the same house as a top researcher at OpenAI), compared to important people in China. Moreover it's just inherently more likely that my ideas will spread in the US network than in the Chinese network because my ideas are in English, etc. So I'm orders of magnitude more likely to have a positive effect in the USA than in China. (But, in the long run, there'll be fewer important people in the USA, and they'll be more degrees of separation away from me, and a greater number of poseurs will be competing for their attention, so this difference will diminish.)" Mine seems more intuitive/accurate to me so far.

Replies from: johnswentworth
comment by johnswentworth · 2020-11-15T17:07:47.459Z · LW(p) · GW(p)

I'd be interested to hear more about why you think resources are likely to be the main constraint, especially in light of that OpenAI report earlier this year.

answer by Noa Nabeshima · 2021-01-22T20:01:06.014Z · LW(p) · GW(p)

How much influence and ability you expect to have as an individual in that timeline.

For example, I don't expect to have much influence/ability in extremely short timelines, so I should focus on timelines longer than 4 years, with more weight to longer timelines and some tapering off starting around when I expect to die.

How relevant thoughts and planning now will be.

If TAI arrives late in my life or after my death, thoughts, research, and planning now will be much less relevant to the trajectory of AI going well, so at this moment in time I should weight timelines in the 4-25 year range more.

answer by Daniel Kokotajlo · 2020-11-06T16:56:28.767Z · LW(p) · GW(p)

There are various things that could happen that would cause extinction or catastrophe prior to TAI, and various things that would massively reduce our ability to steer the world, like a breakdown of collective epistemology or a new world war: things that push us past the point of no return. [LW · GW] And probably a bunch of them are unknowns.

This effectively works as a discount rate, and is a reason to favor short timelines.

answer by Daniel Kokotajlo · 2020-11-06T16:42:31.236Z · LW(p) · GW(p)

Neglectedness is probably correlated with short and very long timelines. In medium-timelines scenarios AI will be a bigger deal and AI safety will have built up a lot more research and researchers. In long-timelines scenarios there will have been an AI winter, people will have stopped thinking about AI, and AI safety researchers may be discredited as doomsayers or something.

answer by Daniel Kokotajlo · 2020-11-06T17:02:40.123Z · LW(p) · GW(p)

Maybe money is really important. We'll probably have more money the longer we wait, as our savings accounts accumulate, our salaries rise, and our communities grow. This is a reason to favor long timelines... but a weak one IMO since I don't think we are bottlenecked by money. [LW · GW]

Maybe we are bottlenecked by knowledge though! Knowledge is clearly very important, and we'll probably have more of it the longer we wait.

However, there are some tricky knots to untangle here. It's true that we'll know more about how to make TAI go well the closer we are to TAI, and thus no matter what our timelines are, we'll be improving our knowledge the longer we wait. However, I feel like there is something fishy about this... On short timelines, TAI is closer, and so we have more knowledge of what it'll be like, whereas on long timelines TAI is farther, so our current level of knowledge is less, and we'd need to wait a while just to catch up to where we would be if timelines were short.

I feel like these considerations roughly cancel out, but I'm not sure.

answer by Daniel Kokotajlo · 2020-11-06T16:53:43.751Z · LW(p) · GW(p)

Tractability is correlated with how much influence and status we have in the AI projects that are making TAI. This consideration favors short timelines, because (1) we have a good idea which AI projects will make TAI conditional on short timelines, and (2) some of us already work there, and they seem at least somewhat concerned about safety, etc. In the longer term, TAI could be built by a less sympathetic corporation or by a national government. In both cases we'd have much less influence.

comment by Ofer (ofer) · 2020-11-06T17:26:42.846Z · LW(p) · GW(p)

This consideration favors short timelines, because (1) we have a good idea which AI projects will make TAI conditional on short timelines, and (2) some of us already work there, and they seem at least somewhat concerned about safety, etc.

I don't see how we can have a good idea whether a certain small set of projects will make TAI first conditional on short timelines (or whether the first project will be one in which people are "already at least somewhat concerned about safety"). Like, why not some arbitrary team at Facebook/Alphabet/Amazon or any other well-resourced company? There are probably many well-resourced companies (including algo-trading companies) that are incentivized to throw a lot of money at novel, large-scale ML research.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-07T08:29:28.991Z · LW(p) · GW(p)

The frontrunners right now are OpenAI and DeepMind. OpenAI in particular seems to believe in short timelines and is acting accordingly. The other places have the money, but less talent, and more importantly don't seem to be acting as if they think short timelines are possible. Off the top of my head my probability distribution over who does it conditional on it being done in the next seven years is 40% OpenAI, 20% DeepMind, 40% Other. But IDK.

Replies from: ofer
comment by Ofer (ofer) · 2020-11-07T11:58:10.499Z · LW(p) · GW(p)

The frontrunners right now are OpenAI and DeepMind.

I'm not sure about this. Note that not all companies are equally incentivized to publish their ML research (some companies may be incentivized to be secretive about their ML work and capabilities due to competition/regulation dynamics). I don't see how we can know whether GPT-3 is further along on the route to AGI than FB's feed-creation algorithm, or the most impressive algo-trading system etc.

The other places have the money, but less talent

I don't know where the "less talent" estimate is coming from. I won't be surprised if there are AI teams with a much larger salary budget than any team at OpenAI/DeepMind, and I expect the "amount of talent" to correlate with salary budget (among prestigious AI labs).

and more importantly don't seem to be acting as if they think short timelines are possible.

I'm not sure how well we can estimate the beliefs and motivations of all well-resourced AI teams in the world. Also, a team need not be trying to create AGI (or believe they can) in order to create AGI. It's sufficient that they are incentivized to create systems that model the world as well as possible; which is the case for many teams, including ones working on feed-creation in social media services and algo-trading systems. (The ability to plan and find solutions to arbitrary problems in the real world naturally arises [LW(p) · GW(p)] from the ability to model it, in the limit.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-07T12:16:25.383Z · LW(p) · GW(p)

Fair points. I don't have the expertise to evaluate this myself; my thoughts above were mostly based on what I'd heard other people say. That said, I'd be surprised if the feed-creation algorithm had as many parameters as GPT-3, considering how often it has to be run per day... Not sure about the trading algos... yeah I wish I knew more about those examples, they are both good.

Replies from: ofer
comment by Ofer (ofer) · 2020-11-07T13:20:17.113Z · LW(p) · GW(p)

That said, I'd be surprised if the feed-creation algorithm had as many parameters as GPT-3, considering how often it has to be run per day...

The relevant quantities here are the compute cost of each model usage (inference)—e.g. the cost of compute for choosing the next post to place on a feed—and the impact of such a potential usage on FB's revenue.

This post by Gwern suggests that OpenAI was able to run a single GPT-3 inference (i.e. generate a single token) at a cost of $0.00006 (6 cents for 1,000 tokens) or less. I'm sure it's worth much more than $0.00006 to FB to choose well the next post that a random user sees.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-07T14:33:59.026Z · LW(p) · GW(p)

OK, but how big is the "context window" for post selection? Probably the algorithm reviews thousands of potential posts rather than just a dozen. So that's 2 OOMs more, so 6 cents per 10 things in your feed... yeah maybe that's doable but that seems like a lot to me. Let's see, suppose 2 billion people go on FB each day for an average of an hour, seeing an average of 500 things... that's a trillion things, so six hundred billion cents, or six billion dollars per day... this feels like probably more than FB makes in ad revenue? Even if I'm wrong and it's only as expensive to choose a post as GPT-3 is to choose a token, then that's still sixty million dollars a day. This feels like a lot to me. Idk. Maybe I should go look it up, haha.
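
For what it's worth, here is the same back-of-envelope calculation written out explicitly; every input is just a rough assumption from this comment plus the per-token cost quoted above, not a measured figure:

```python
# Back-of-envelope cost of GPT-3-style feed ranking, reproducing the rough
# numbers above. All inputs are assumptions from the comment, not data.

cost_per_token = 0.00006            # $ per GPT-3 token, per the estimate quoted above
overhead = 100                      # ~2 OOMs: thousands of candidate posts vs. roughly a dozen
cost_per_item = cost_per_token * overhead   # ≈ $0.006, i.e. 0.6 cents per feed item

users = 2e9                         # assumed daily users
items_per_user = 500                # assumed feed items seen per user per day
items_per_day = users * items_per_user      # 1e12 items

print(f"expensive case: ${items_per_day * cost_per_item:,.0f}/day")   # ≈ $6,000,000,000
print(f"cheap case:     ${items_per_day * cost_per_token:,.0f}/day")  # ≈ $60,000,000
```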

Replies from: ofer
comment by Ofer (ofer) · 2020-11-07T15:24:44.613Z · LW(p) · GW(p)

I didn't follow this. FB doesn't need to run a model inference for each possible post that it considers showing (just like OpenAI doesn't need to run a GPT-3 inference for each possible token that can come next).

(BTW, I think the phrase "context window" would correspond to the model's input.)

FB's revenue from advertising in 2019 was $69.7 billion, or $191 million per day. So yea, it seems possible that in 2019 they used a model with an inference cost similar to GPT-3's, though not one that is 10x more expensive [EDIT: under this analysis' assumptions]; so I was overconfident in my previous comment.
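
Continuing the sketch from the earlier comment with the revenue figure just cited (the item count and per-token cost are the same rough assumptions as before):

```python
# Compare a hypothetical per-day inference bill against FB's 2019 ad revenue.
# Revenue figure from the comment above; other numbers are the earlier assumptions.

revenue_per_day = 69.7e9 / 365        # ≈ $191M per day
items_per_day = 1e12                  # assumed feed items served per day
gpt3_scale_bill = items_per_day * 0.00006   # ≈ $60M/day: fits within daily revenue
ten_x_bill = 10 * gpt3_scale_bill           # ≈ $600M/day: exceeds daily revenue

print(round(revenue_per_day / 1e6), round(gpt3_scale_bill / 1e6), round(ten_x_bill / 1e6))
# -> 191 60 600 (millions of dollars per day)
```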

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-07T21:04:25.436Z · LW(p) · GW(p)

Yeah maybe I was confused. FB does need to read all the posts it is considering though, and if it has thousands of posts to choose from, that's probably a lot more than can fit in GPT-3's context window, so FB's algorithm needs to be bigger than GPT-3... at least, that's what I was thinking. But yeah that's not the right way of thinking about it. Better to just think about how much budget FB can possibly have for model inference, which as you say must be something like $100mil per day tops. That means that maybe it's GPT-3 sized but can't be much bigger, and IMO is probably smaller.

Replies from: ofer
comment by Ofer (ofer) · 2020-11-07T21:50:36.582Z · LW(p) · GW(p)

(They may spend more on inference compute if doing so would sufficiently increase their revenue. They may train such a more-expensive model just to try it out for a short while, to see whether they're better off using it.)

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-08T06:58:10.128Z · LW(p) · GW(p)

Good points, especially the second one.

answer by Daniel Kokotajlo · 2020-11-06T16:45:22.929Z · LW(p) · GW(p)

Tractability is correlated with whether we use prosaic AI methods (hard to make safe) or more principled, transparent architectures (not as hard). Maybe we are more likely to use prosaic AI methods the shorter the timelines. OTOH, on long timelines we'll have awesome amounts of compute at our disposal, and it'll be easier to brute-force the solution by evolving AI, etc.

I think this is overall a weak consideration in favor of longer timelines being more tractable.

answer by Dagon · 2020-11-05T23:53:48.827Z · LW(p) · GW(p)

The long run is strictly the sum of the sequence of short runs that it comprises.  The way to influence long timelines is to have influence over pivotal sections of the shorter timelines.

comment by AnthonyC · 2020-11-06T15:48:43.583Z · LW(p) · GW(p)

That's true, but I'm not sure it's always useful to frame things that way. "To have influence over pivotal sections of the shorter timelines" you need to know which sections those are, know what type of influence is useful, and be in a position to exert influence when they arrive. If you don't have that knowledge and can't guarantee you'll have that power, and don't know how to change those things, then what you need right now is a short term plan to fix those shortcomings. However, if you are in a position to influence the short-term but not long term future, you can pursue a general strategy of making sure more people with the requisite knowledge will exist and have sufficient influence when pivotal moments arise. Depending on circumstances, skills, talent, and so on, this might have higher expected payoff than trying to optimize for personally being that influential individual in the future.

IOW I think this question is closely tied to the ideas in another recent post, When Money Is Abundant, Knowledge Is The Real Wealth. [LW · GW]
