Soft takeoff can still lead to decisive strategic advantage
post by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T16:39:31.317Z · LW · GW · 47 comments
[Epistemic status: Argument by analogy to historical cases. Best case scenario it's just one argument among many. Edit: Also, thanks to feedback from others, especially Paul, I intend to write a significantly improved version of this post in the next two weeks. Edit: I never did, because in the course of writing my response I realized the original argument made a big mistake. See this review. [LW · GW]]
I have on several occasions heard people say things like this:
The original Bostrom/Yudkowsky paradigm envisioned a single AI built by a single AI project, undergoing intelligence explosion all by itself and attaining a decisive strategic advantage as a result. However, this is very unrealistic. Discontinuous jumps in technological capability are very rare, and it is very implausible that one project could produce more innovations than the rest of the world combined. Instead we should expect something more like the Industrial Revolution: Continuous growth, spread among many projects and factions, shared via a combination of trade and technology stealing. We should not expect any one project or AI to attain a decisive strategic advantage, because there will always be other projects and other AI that are only slightly less powerful, and coalitions will act to counterbalance the technological advantage of the frontrunner. (paraphrased)
Proponents of this view often cite Paul Christiano [LW · GW] in support. Last week I heard him say he thinks the future will be "like the Industrial Revolution but 10x-100x faster."
In this post, I assume that Paul's slogan for the future is correct and then nevertheless push back against the view above. Basically, I will argue that even if the future is like the industrial revolution only 10x-100x faster, there is a 30%+ chance that it will involve a single AI project (or a single AI) with the ability to gain a decisive strategic advantage if it so chooses. (Whether or not it exercises that ability is another matter.)
Why am I interested in this? Do I expect some human group to take over the world? No; instead what I think is that (1) an unaligned AI in the leading project might take over the world, and (2) a human project that successfully aligns their AI might refrain from taking over the world even if they have the ability to do so, and instead use their capabilities to e.g. help the United Nations enforce a ban on unauthorized AGI projects.
National ELO ratings during the industrial revolution and the modern era
In chess (and some other games) ELO ratings are used to compare players. An average club player might be rated 1500; the world chess champion might be 2800; computer chess programs are even better. If one player is rated 400 points above another, the higher-rated player would win with ~90% probability.
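(For concreteness, the standard Elo formula gives the expected score of player A against player B as

$$E_A = \frac{1}{1 + 10^{(R_B - R_A)/400}},$$

so a 400-point gap corresponds to $E_A = 1/(1 + 10^{-1}) \approx 0.91$, roughly the ~90% figure above.)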
We could apply this system to compare the warmaking abilities of nation-states and coalitions of nation-states. For example, in 1941 perhaps we could say that the ELO rating of the Axis powers was ~300 points lower than the ELO rating of the rest of the world combined (because what in fact happened was the rest of the world combining to defeat them, but it wasn't a guaranteed victory). We could add that in 1939 the ELO rating of Germany was ~400 points higher than that of Poland, and that the ELO rating of Poland was probably 400+ points higher than that of Luxembourg.
We could make cross-temporal fantasy comparisons too. The ELO rating of Germany in 1939 was probably ~400 points greater than that of the entire world circa 1910, for example. (Visualize the entirety of 1939 Germany teleporting back in time to 1910, and then imagine the havoc it would wreak.)
Claim 1A: If we were to estimate the ELO ratings of all nation-states and sets of nation-states (potential alliances) over the last 300 years, the rating of the most powerful nation-state in a given year would on several occasions be 400+ points greater than the rating of the entire world combined 30 years prior.
Claim 1B: Over the last 300 years there have been several occasions in which one nation-state had the capability to take over the entire world of 30 years prior.
I'm no historian, but I feel fairly confident in these claims.
- In naval history, the best fleets in the world in 1850 were obsolete by 1860 thanks to the introduction of iron-hulled steamships, and said steamships were themselves obsolete a decade or so later, and then those ships were obsoleted by the Dreadnought, and so on... This process continued into the modern era. By "Obsoleted" I mean something like "A single ship of the new type could defeat the entire combined fleet of vessels of the old type."
- A similar story could be told about air power. In a dogfight between planes of year 19XX and year 19XX+30, the second group of planes will be limited only by how much ammunition they can carry.
- Small technologically advanced nations have regularly beaten huge sprawling empires and coalitions. (See: Colonialism)
- The entire world has been basically carved up among a small handful of the most technologically advanced nations for two centuries now. For example, any of the Great Powers of 1910 (plus the USA) could have taken over all of Africa, Asia, South America, etc. if not for the resistance that the other great powers would have put up. The same was true 40 years later and 40 years earlier.
I conclude from this that if some great power in the era kicked off by the industrial revolution had managed to "pull ahead" of the rest of the world more effectively than it actually did--30 years more effectively, in particular--it really would have been able to take over the world.
Claim 2: If the future is like the Industrial Revolution but 10x-100x faster, then correspondingly the technological and economic power granted by being 3 - 0.3 years ahead of the rest of the world should be enough to enable a decisive strategic advantage.
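Spelled out, that range is just the 30-year gap from Claims 1A/1B divided by the assumed speedup:

$$\frac{30\text{ years}}{10} = 3\text{ years}, \qquad \frac{30\text{ years}}{100} = 0.3\text{ years}.$$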
The question is, how likely is it that one nation/project/AI could get that far ahead of everyone else? After all, it didn't happen in the era of the Industrial Revolution. While we did see a massive concentration of power into a few nations on the leading edge of technological capability, there were always at least a few such nations and they kept each other in check.
The "surely not faster than the rest of the world combined" argument
Sometimes I have exchanges like this:
- Me: Decisive strategic advantage is plausible!
- Interlocutor: What? That means one entity must have more innovation power than the rest of the world combined, to be able to take over the rest of the world!
- Me: Yeah, and that's possible after intelligence explosion. A superintelligence would totally have that property.
- Interlocutor: Well yeah, if we dropped a superintelligence into a world full of humans. But realistically the rest of the world will be undergoing intelligence explosion too. And indeed the world as a whole will undergo a faster intelligence explosion than any particular project could; to think that one project could pull ahead of everyone else is to think that, prior to intelligence explosion, there would be a single project innovating faster than the rest of the world combined!
This section responds to that by way of sketching how one nation/project/AI might get 3 - 0.3 years ahead of everyone else.
Toy model: There are projects which research technology, each with their own "innovation rate" at which they produce innovations from some latent tech tree. When they produce innovations, they choose whether to make them public or private. They have access to their private innovations + all the public innovations.
It follows from the above that the project with access to the most innovations at any given time will be the project with the most hoarded innovations, even though the other projects collectively have a higher combined innovation rate and a larger combined pool of accessible innovations. Moreover, the gap between the leading project and the second-best project will increase over time, since the leading project produces hoarded innovations at a slightly higher rate while both have access to the same public innovations.
This model leaves out several important things. First, it leaves out the whole "intelligence explosion" idea: A project's innovation rate should increase as some function of how many innovations they have access to. Adding this in will make the situation more extreme and make the gap between the leading project and everyone else grow even bigger very quickly.
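Here is a minimal code sketch of the toy model, with the intelligence-explosion feedback included as an optional term. The rates and numbers are made-up illustrative assumptions, not a forecast:

```python
def simulate(years, base_rates, public_rate, feedback=0.0):
    """base_rates[i] is the hoarded-innovation rate of project i.
    feedback > 0 adds a crude 'intelligence explosion' term:
    a project's rate grows with the innovations it can access."""
    hoarded = [0.0] * len(base_rates)
    public = 0.0
    for _ in range(years):
        access = [public + h for h in hoarded]
        public += public_rate                      # the rest of the world publishes
        for i, r in enumerate(base_rates):
            hoarded[i] += r * (1.0 + feedback * access[i])
    return [public + h for h in hoarded]           # innovations each project can use

# Without feedback the leader's gap over the runner-up grows linearly...
print(simulate(10, [1.0, 0.875], public_rate=10.0))     # [110.0, 108.75]
# ...with feedback, the leader's larger private stash compounds into a higher
# innovation rate, so the gap grows faster still.
print(simulate(10, [1.0, 0.875], public_rate=10.0, feedback=0.05))
```

Even though the public pool grows ten times faster than either project's private stash, the leading project always has access to the most innovations, and its lead widens over time.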
Second, it leaves out reasons why innovations might be made public. Realistically there are three reasons: Leaks, spies, and selling/using-in-a-way-that-makes-it-easy-to-copy.
Claim 3: Leaks & Spies: I claim that the 10x-100x speedup Paul prophesies will not come with an associated 10x-100x increase in the rate of leaks and successful spying. Instead the rate of leaks and successful spying will be only a bit higher than it currently is.
This is because humans are still humans even in this soft takeoff future, still in human institutions like companies and governments, still using more or less the same internet infrastructure, etc. New AI-related technologies might make leaking and spying easier than it currently is, but they also might make it harder. I'd love to see an in-depth exploration of this question because I don't feel particularly confident.
But anyhow, if it doesn't get much easier than it currently is, then going 0.3 to 3 years without a leak is possible, and more generally it's possible for the world's leading project to build up a 0.3-3 year lead over the second-place project. For example, the USSR had spies embedded in the Manhattan Project, but it still took them 4 more years to make their first bomb.
Claim 4: Selling etc. I claim that the 10x-100x speedup Paul prophesies will not come with an associated 10x-100x increase in the budget pressure on projects to make money fast. Again, today AI companies regularly go years without turning a profit -- DeepMind, for example, has never turned a profit and is losing something like a billion dollars a year for its parent company -- and I don't see any particularly good reason to expect that to change much.
So yeah, it seems to me that it's totally possible for the leading AI project to survive off investor money and parent company money (or government money, for that matter!) for five years or so, while also keeping the rate of leaks and spies low enough that the distance between them and their nearest competitor increases rather than decreases. (Note how this doesn't involve them "innovating faster than the rest of the world combined.")
Suppose they could get a 3-year lead this way, at the peak of their lead. Is that enough?
Well, yes. A 3-year lead during a time 10x-100x faster than the Industrial Revolution would be like a 30-300 year lead during the era of the Industrial Revolution. As I argued in the previous section, even the low end of that range is probably enough to get a decisive strategic advantage.
If this is so, why didn't nations during the Industrial Revolution try to hoard their innovations and gain decisive strategic advantage?
England actually did, if I recall correctly. They passed laws and stuff to prevent their early Industrial Revolution technology from spreading outside their borders. They were unsuccessful--spies and entrepreneurs dodged the customs officials and snuck blueprints and expertise out of the country. It's not surprising that they weren't able to successfully hoard innovations for 30+ years! Entire economies are a lot more leaky than AI projects.
What a "Paul Slow" soft takeoff might look like according to me
At some point early in the transition to much faster innovation rates, the leading AI companies "go quiet." Several of them either get huge investments or are nationalized and given effectively unlimited funding. The world as a whole continues to innovate, and the leading companies benefit from this public research, but they hoard their own innovations to themselves. Meanwhile the benefits of these AI innovations are starting to be felt; all projects have significantly increased (and constantly increasing) rates of innovation. But the fastest increases go to the leading project, which is one year ahead of the second-best project. (This sort of gap is normal for tech projects today, especially the rare massively-funded ones, I think.) Perhaps via a combination of spying, selling, and leaks, that lead narrows to six months midway through the process. But by that time things are moving so quickly that a six months' lead is like a 15-150 year lead during the era of the Industrial Revolution. It's not guaranteed and perhaps still not probable, but at least it's reasonably likely that the leading project will be able to take over the world if it chooses to.
Objection: What about coalitions? During the industrial revolution, if one country did successfully avoid all leaks, the other countries could unite against them and make the "public" technology inaccessible to them. (Trade does something like this automatically, since refusing to sell your technology also lowers your income which lowers your innovation rate as a nation.)
Reply: Coalitions to share AI research progress will be harder than free-trade / embargo coalitions. This is because AI research progress is much more the result of rare smart individuals talking face-to-face with each other and much less the result of a zillion different actions of millions of different people, as the economy is. Besides, a successful coalition can be thought of as just another project, and so it's still true that one project could get a decisive strategic advantage. (Is it fair to call "The entire world economy" a project with a decisive strategic advantage today? Well, maybe... but it feels a lot less accurate since almost everyone is part of the economy but only a few people would have control of even a broad coalition AI project.)
Anyhow, those are my thoughts. Not super confident in all this, but it does feel right to me. Again, the conclusion is not that one project will take over the world even in Paul's future, but rather that such a thing might still happen even in Paul's future.
Thanks to Magnus Vinding for helpful conversation.
47 comments
Comments sorted by top scores.
comment by paulfchristiano · 2019-08-24T15:31:06.762Z · LW(p) · GW(p)
Why would a coalition look very different from the world economy, be controlled by a few people, and be hard to form? My default expectation is that it would look much like the world economy. (With the most obvious changes being a fall in the labor share of income / increasing wage inequality.) A few big underlying disagreements:
- I don't think I agree that most progress in AI is driven by rare smart individuals talking to each other---I think it's not very accurate as a description of current AI progress, that it will be even less true as AI progress becomes a larger share of the world economy, and that most "AI" progress is driven by compute/energy/data/other software rather than stuff that most looks like insight-driven AI progress.
- Your toy model seems wrong: most projects make extensive use of other people's private innovations, by trading with them. So the project that hoards the most innovations can still only be competitive if it trades with others (in order to get access to their hoarded innovations).
- I think my more basic complaint is with the "pressure to make a profit over some timescale" model. I think it's more like: you need inputs from the rest of the economy and so you trade with them. Right now deep learning moonshots don't trade with the rest of the world because they don't make anything of much value, but if they were creating really impactful technology then the projects which traded would be radically faster than the projects which just used their innovations in house. This is true even if all innovations are public, since they need access to physical capital.
I think any of these would be enough to carry my objection. (Though if you reject my first claim, and thought that rare smart individuals drove AI progress even when AI progress was overwhelmingly economically important, then you could imagine a sufficiently well-coordinated cartel of those rare smart individuals having a DSA.)
Replies from: daniel-kokotajlo, ryan_greenblatt
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-24T23:56:54.015Z · LW(p) · GW(p)
Thanks for the detailed reply! I should say at this point that I'm not particularly confident in my views on this topic; just trying to put forth my take on things in the spirit of improvement. So I wouldn't be surprised if I end up thinking you are right after more discussion.
A coalition strong enough to prevent the world's leading project from maintaining and lengthening its lead would need to have some way of preventing the leading project from accessing the innovations of the coalition. Otherwise the leading project will free-ride off the research done by the coalition. For this reason I think that a coalition would look very different from the world economy; in order to prevent the leading project from accessing innovations deployed in the world economy you would need to have an enforced universal embargo on them pretty much, and if you have that much political power, why stop there? Why not just annex them or shut them down?
A successful coalition (that isn't politically powerful enough to embargo or annex or stop their rival) would need to be capable of preventing information from leaking out to the rival project, and that suggests to me that they would need to concentrate power in the hands of a few individuals (the CEOs and boards of the companies in the coalitions, for example). A distributed, more anarchic architecture would not be able to prevent leaks and spies.
And of course, even this hypothetical coalition would be hard to form, for all the usual reasons. Sure, the 2nd through 10th projects could gang up to decisively beat the leading project. But it's also true that the 2nd through 10th most powerful nation-states could gang up to decisively beat the most powerful nation-state. Yet this isn't the norm. More concretely, I expect some projects to ally with each other, but the result to be two or three coalitions of almost equal strength rather than many.
I agree that we seem to disagree about the importance of compute/energy/data vs. smart people talking to each other, and that this disagreement seems relevant. If AI progress was just a matter of compute, for example, then... well actually mightn't there still be a decisive strategic advantage in that case? Wouldn't one project have more compute than the others, and thus pull ahead so long as funds lasted?
This gets us into the toy model & its problems. I don't think I understand your alternative model. I maybe don't get what you mean by trading. Does one party giving money to another party in return for access to their technology or products count? If so, then I think my original model still stands: The leading project will be able to hoard technology/innovation and lengthen its lead over the rest of the world so long as it still has funding to buy the necessary stuff. I agree that it will be burning money fast if it doesn't sell/trade its innovations and instead tries to hoard them, but empirically it seems that it's quite possible for leading projects to go several years in this state.
"Right now deep learning moonshots don't trade with the rest of the world because they don't make anything of much value, but if they were creating really impactful technology then the projects which traded would be radically faster than the projects which just used their innovations in house."
I think it depends not on how impactful their technology is but on how impactful their technology is relative to the perceived impact of hoarding it and going for a decisive strategic advantage. Technology hoarding happens sometimes even in very competitive industries for this reason, and it is the norm among militaries. It seems very possible to me that companies which are producing truly astounding AI technologies--stuff that seems plausibly only a few years away from human-level AGI--will have no problem finding deep-pocketed investors willing to throw money at them for a few years. Again, maybe governments will get involved, in which case this is almost trivial.
I think overall your model (as I understand it) is that people won't be (successfully) moonshotting for AGI until AI progress is already making the economy grow very fast, and also at this point progress towards AGI will mostly be a matter of how much money you have to buy compute and stuff. So even a deep-pocketed funder like Google wouldn't be able to compete for two years with shallow-pocketed, more short-sighted projects that sell their tech and reinvest the profits. Is this your view?
Replies from: paulfchristiano, paulfchristiano, paulfchristiano
↑ comment by paulfchristiano · 2019-08-25T03:12:00.367Z · LW(p) · GW(p)
Wouldn't one project have more compute than the others, and thus pull ahead so long as funds lasted?
To have "more compute than all the others" seems to require already being a large fraction of all the world's spending (since a large fraction of spending is on computers---or whatever bundle of inputs is about to let this project take over the world---unless you are positing a really bad mispricing). At that point we are talking "coalition of states" rather than "project."
I totally agree that it wouldn't be crazy for a major world power to pull ahead of others technologically and eventually be able to win a war handily, and that this will tend to happen over shorter and shorter timescales if economic and technological progress accelerate.
(Or you might think the project is a small fraction of world compute but larger than any other project, but if economies of scale are in fact this critical, then you are again suggesting a really gigantic market failure. That's not beyond the pale, but we should be focusing on why this crazy market failure is happening.)
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-26T04:48:40.941Z · LW(p) · GW(p)
OK, so you agree with me about major world powers (nation-states) but still disagree about companies? I think this means we are closer together than it seemed, because I also think that decisive strategic advantage is significantly more likely to happen if a nation-state gets involved than if it's just some private company.
I didn't say "more compute than all the others," I said "more compute than the others," by which I meant more compute than any particular other project, yeah. This is consistent with a large fraction of the world's spending being on compute already. For example, today DeepMind (citation needed) has the largest compute budget of any AI project, but their compute budget is a tiny fraction of the world's total.
I'm not sure whether or not I'm positing a gigantic market failure. Your claim is that if compute is so important for AI technology and AI technology is so useful, the market will either fail or find ways to get a large fraction of its budget spent on a single AI project? This single project would then be a potential source of DSA but it would also be so big already that it could take over the world by selling products instead? I'm putting question marks not out of sarcasm or anything, just genuine uncertainty about what your claim is. Before I can respond to it I need to understand it.
↑ comment by paulfchristiano · 2019-08-25T03:06:05.478Z · LW(p) · GW(p)
This gets us into the toy model & its problems. I don't think I understand your alternative model. I maybe don't get what you mean by trading. Does one party giving money to another party in return for access to their technology or products count? If so, then I think my original model still stands: The leading project will be able to hoard technology/innovation and lengthen its lead over the rest of the world so long as it still has funding to buy the necessary stuff.
The reason I let other people use my IP is because they pay me money, with which I can develop even more IP. If the leading project declines to do this, then it will have less IP than any of its normal competitors. If the leading project's IP allows it to be significantly more productive than everyone else, then they could have just taken over the world through the normal mechanism of selling products. (Modulo leaks/spying.) As far as I can tell, until you are a large fraction of the world, the revenue you get from selling lets you grow faster, and I don't think the toy model really undermines that typical argument (which has to go through leaks/spying, market frictions, etc.).
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-26T18:10:21.392Z · LW(p) · GW(p)
I am skeptical that selling products is sufficient to take over the world, no matter how good the products are. Eventually you raise too much attention and get nationalized or taxed or copied.
In light of your critiques I intend to write a much better version of this post in the future. Thanks! I wrote this one during MSFP as part of their blog post day event, so it was kinda rushed and has lots of room for improvement. I'm very glad to see so much engagement though; it inspires me to make said improvements. Perhaps in the course of doing so I'll change my mind.
↑ comment by paulfchristiano · 2019-08-25T02:51:20.495Z · LW(p) · GW(p)
A coalition strong enough to prevent the world's leading project from maintaining and lengthening its lead would need to have some way of preventing the leading project from accessing the innovations of the coalition. Otherwise the leading project will free-ride off the research done by the coalition. For this reason I think that a coalition would look very different from the world economy; in order to prevent the leading project from accessing innovations deployed in the world economy you would need to have an enforced universal embargo on them pretty much, and if you have that much political power, why stop there? Why not just annex them or shut them down?
Are you saying that the leading project can easily spy on other projects, but other projects can't spy on it? Is this because the rest of the world is trading with each other, and trading opens up opportunities for spying? Some other reason I missed? I don't think it's usually the case that gains from rabbit-holing, in terms of protection from spying, are large enough to outweigh the costs from not trading. It seems weird to expect AI to change that, since you are arguing that the proportional importance of spying will go down, not up, because it won't be accelerated as much.
If the leading project can't spy on everyone else, then how does it differ from all of the other companies who are developing technology, keeping it private, and charging other people to use it? The leading project can use others' technology when it pays them, just like they use each other's technology when they pay each other. The leading project can choose not to sell its technology, but then it just has less money and so falls further and further behind in terms of compute etc. (and at any rate, it needs to be selling something to the other people in order to even be able to afford to use their technology).
(I may just be missing something about your model.)
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-26T04:39:26.866Z · LW(p) · GW(p)
Yes, if the coalition is a large fraction of the world then I am saying there is an asymmetry in that the leading project can more easily spy on that large fraction of the world than the other way round. This is because large fractions of the world contain many different people and groups, some of which will leak secrets (or sell secrets) to the leading project, unless extremely and unprecedentedly effective anti-leaking-and-spying measures are implemented across a large fraction of the world. It's hard but doable for one corporation to keep trade secrets from the rest of the economy; how on earth can the rest of the economy keep trade secrets from a powerful corporation?
I don't see how I'm arguing that the proportional importance of spying will go down. The proportional importance of spying will go up precisely because it won't be accelerated as much as AI technology in general will be. (Why don't I think spying will be accelerated as much as AI technology in general? I certainly agree that spying technology will be accelerated as much as or more than AI technology. However, I think that spying is a function of several things, only one of which is spying technology, the others being non-technology things like having literal human spies climb through the ranks of enemy orgs, and also anti-spying technology.) I envision a future where spying is way more rewarding than at any time in history, and yet nevertheless the actual amount of successful spying is less than 10x-100x more than in the past, due to the factors mentioned in the parenthetical.
"The leading project can choose not to sell its technology, but then it just has less money and so falls further and further behind in terms of compute etc. (and at any rate, it needs to be selling something to the other people in order to even be able to afford to use their technology)."
Again, my whole point is that this is only true in the long run. Yes, in the long run a project which relies on other sources of income to buy the things it needs to buy will lose money to projects which sell their innovations. But in the short run empirically it seems that projects can go for years on funding raised from investors and wealthy parent companies. I guess your point is that in a world where the economy is growing super fast due to AI, this won't be true: any parent company or group of investors capable of funding the leading project at year X will be relative paupers by year X+3 unless their project has been selling its tech. Am I right in understanding you here?
(Miscellaneous: I don't think the leading project differs from all the other projects that develop tech and keep it private. Like Wei Dai said, insofar as a company can charge people to use tech without letting the secrets of how to build that tech escape, they will obviously do so. I think our disagreement is about your last two sentences, which I quoted above.)
↑ comment by ryan_greenblatt · 2023-10-17T03:11:11.137Z · LW(p) · GW(p)
It seems like there are strong reasons to expect that the post-AI coalitions will look very different from the current world economy, though I agree that they might look like a world economy. For instance, imagine world GDP grows by 100x. It seems totally plausible that Google/TSMC/OpenAI revenue grows by 50x relative to typical other companies which only 2x revenue.
Then, power structures might be dramatically different from current power structures. (Even if the US Government is effectively co-running AI lab(s), I still expect that the power structures within AI labs could become considerably more powerful than any current corporate coalition. E.g., maybe 1 board member on the OpenAI board is more powerful than any current corporate board member today in terms of % control over the future of the world.)
comment by Wei Dai (Wei_Dai) · 2019-08-23T17:56:50.447Z · LW(p) · GW(p)
Again, today AI companies regularly go years without turning a profit—DeepMind, for example, has never turned a profit and is losing something like a billion dollars a year for its parent company—and I don’t see any particularly good reason to expect that to change much.
Not sure this makes sense. I'm just basing this on my intuitions rather than any detailed analysis, but if AI research was causing 10-100x faster economic growth than the Industrial Revolution, wouldn't at least 10% of total economic output be going into AI research (maybe just building hardware to power ML)? In that case I don't see how a company, even Google, could afford to invest that much without trying to make some short-term profits. It seems pretty plausible though that it could do this in a way that doesn't leak its technology, e.g., by just selling AI services (with some controls for not using it to do AI research) rather than any hardware or software that people could copy.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T18:08:20.665Z · LW(p) · GW(p)
Hmm, good point. So the idea is that faster GDP growth will put more pressure on companies (and governments?) to make lots of profit quickly or else go obsolete? Yeah that seems somewhat plausible... I'd like to see someone analyze this in more detail.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T18:09:28.213Z · LW(p) · GW(p)
What if it's not actual GDP growth though, but potential GDP growth? As in, innovations in AI technology leading to more and faster innovation in AI technology... but the wider economy as a whole not being affected that much initially, just as how the whole deep learning revolution of the past 5 years hasn't really changed the economy much yet.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2019-08-24T18:17:22.051Z · LW(p) · GW(p)
That seems possible if (1) AI progress depends more on insights rather than "compute/energy/data/other software" as Paul suggests and (2) leading AI projects are farsighted enough to forgo short-term profits. I'm unsure about (1) but (2) seems false at least.
It seems like a stronger argument for your position is that (as I suggested earlier) instead of companies not selling access to their AI innovations at all, they sell them in a way that doesn't cause their innovations to leak out. This is how Google currently sells access to innovations in search engine algorithms, for example.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-25T00:07:03.061Z · LW(p) · GW(p)
Hmm, OK. I like your point about making profits without giving away secrets.
And yeah I think you (and Paul's comment below) are helping me to get the picture a bit better--because the economy is growing so fast, moonshot projects that don't turn a profit for a while just won't work because the people capable of affording them one year will be paupers by comparison to the people capable of funding AI research the next year (due to the general boom). And so while a tech-hoarding project will still technically have more insights than the economy as a whole, its lead will shrink as its relative funding shrinks.
Another way I think my scenario could happen, though, is if governments get involved. Because governments have the power to tax. Suppose we have a pool of insights that is publicly available, and from it we get this rapidly growing economy fueled by publicly available AI technologies. But then we have a government that taxes this entire economy and funnels the revenue into an AGI project that hoards all its insights. Won't this AGI project have access to more insights than anyone else? If there is an intelligence explosion, won't it happen first (and/or faster) inside the project than outside? We don't have to worry about getting outcompeted by other parts of the economy, since those parts are getting taxed. The funding for our AGI project will rise in proportion to the growth in the AI sector of the economy, even though our AGI project is hoarding all its secrets.
comment by Lukas Finnveden (Lanrian) · 2019-08-23T18:29:41.145Z · LW(p) · GW(p)
Great post!
One thing I noticed is that claim 1 speaks about nation-states while most of the AI bits speak about companies/projects. I don't think this is a huge problem, but it seems worth looking into.
It seems true that it'll be necessary to localize the secret bits into single projects, in order to keep things secret. It also seems true that such projects could keep a lead on the order of months/years.
However, note that this no longer corresponds to having a country that's 30 years ahead of the rest of the world. Instead, it corresponds to having a country with a single company that's 30 years ahead of the world. The equivalent analogy is: could a company transported 30 years back in time gain a decisive strategic advantage for itself / whatever country it landed in?
A few arguments:
- A single company might have been able to bring back a single military technology, which may or may not have been sufficient to turn the world, alone. However, I think one can argue that AI is more multipurpose than most technologies.
- If the company wanted to cooperate with its country, there would be an implementation lag after the technology was shared. In old times, this would perhaps correspond to building the new ships/planes. Today, it might involve taking AI architectures and training them for particular purposes, which could be more or less easy depending on the generality of the tech. (Maybe also scaling up hardware?) During this time, it would be easier for other projects and countries to steal the technology (though of course, they would have implementation lags of their own).
- In the historical case, one might worry that a modern airplane company couldn't produce many useful things 30 years back in time, because it relied on new materials and products from other companies. Translated to the case where AI companies develop along with the world, this would highlight that the AI company could develop a 30-year-lead-equivalent in AI software, but that might not correspond to a 30-year-lead-equivalent in AI technology, insofar as progress is largely driven by improvements to hardware or other public inputs to the process. (Unless the secret AI project is also developing hardware.) I don't think this is very problematic: hardware progress seems to be slowing down, while software is speeding up (?), so if everything went faster things would probably be more software-driven?
- Perhaps one could also argue that a 3-year lead would translate to an even greater lead, because of recursive self-improvement, in which case the company would have an even greater lead over the rest of the world.
Overall, these points don't seem too important, and I think your claims still go through.
comment by Adele Lopez (adele-lopez-1) · 2019-08-23T17:22:54.477Z · LW(p) · GW(p)
This post has caused me to update my probability of this kind of scenario!
Another issue related to the information leakage: in the industrial revolution era, 30 years was plenty of time for people to understand and replicate leaked or stolen knowledge. But if the slower team managed to obtain the leading team's source code, it seems plausible that 3 years, or especially 0.3 years, would not be enough time to learn how to use that information as skillfully as the leading team can.
Replies from: Lanrian
↑ comment by Lukas Finnveden (Lanrian) · 2019-08-23T19:35:42.608Z · LW(p) · GW(p)
Hm, my prior is that speed of learning how stolen code works would scale along with general innovation speed, though I haven't thought about it a lot. On the one hand, learning the basics of how the code works would scale well with more automated testing, and a lot of finetuning could presumably be automated without intimate knowledge. On the other hand, we might be in a paradigm where AI tech allows us to generate lots of architectures to test, anyway, and the bottleneck is for engineers to develop an intuition for them, which seems like the thing that you're pointing at.
Replies from: Hoagy, adele-lopez-1
↑ comment by Hoagy · 2019-08-23T22:22:30.485Z · LW(p) · GW(p)
I think this points to the strategic supremacy of relevant infrastructure in these scenarios. From what I remember of the battleship era, having an advantage in design didn't seem to be a particularly large advantage - once a new era was entered, everyone with sufficient infrastructure switched to the new technology and an arms race started from scratch.
This feels similar to the AI scenario, where technology seems likely to spread quickly through a combination of high financial incentive, interconnected social networks, state-sponsored espionage etc. The way in which a serious differential emerges is likely to be more through a gap in the infrastructure to implement the new technology. It seems that the current world is tilted towards infrastructure ability diffusing fast enough to prevent such gaps, but it seems possible that if we have a massive increase in economic growth then this balance is altered and infrastructure gaps emerge, creating differentials that can't easily be reversed by a few algorithm leaks.
↑ comment by Adele Lopez (adele-lopez-1) · 2019-08-23T20:10:24.811Z · LW(p) · GW(p)
Yeah, I think the engineer intuition is the bottleneck I'm pointing at here.
comment by avturchin · 2019-08-23T19:13:04.524Z · LW(p) · GW(p)
One possible way to gain a decisive strategic advantage is to combine a rather mediocre AI with some equally mediocre but rare real-world capability.
Toy example: An AI is created that is capable of winning a nuclear war by choosing the right targets and other elements of nuclear strategy. The AI itself is not a superintelligence; it is maybe something like AlphaZero for nukes. Many companies and people are capable of creating such an AI. However, only a nuclear power with a large nuclear arsenal could actually get any advantage from it, which could only be the US, Russia, or China. Let's assume that such an AI gives a +1000 boost in nuclear ELO rating between nuclear superpowers. Now the first of the three countries to get it will have a temporary decisive strategic advantage. This is a toy example, as it is unlikely that the first country to get such a "nuclear AI decisive advantage" would take the risk of a first strike.
There are several other real-world capabilities which could be combined with mediocre AI to get a decisive strategic advantage: access to very large training data, access to large surveillance capabilities like PRISM, access to large untapped computing power, to funds, to a pool of scientists, to other secret military capabilities, or to drone manufacturing capabilities.
All these capabilities are centered around the largest military powers and their intelligence and military services. Thus, combining a rather mediocre AI with the full capabilities of a nuclear superpower could create a temporary strategic advantage. Assuming that we have around 3 nuclear superpowers, one of them could get a temporary strategic advantage via AI. But each of them has some internal problems in implementing such a project.
Replies from: hamnox, Gavin
↑ comment by hamnox · 2020-12-31T14:51:18.933Z · LW(p) · GW(p)
Hm.. It occurs to me that AI itself does not have to be capable of winning a nuclear war. The leaders just have to be convinced they have enough of a decisive advantage to start it.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2020-12-31T16:27:05.460Z · LW(p) · GW(p)
More broadly, an AI only needs to think that starting a nuclear war has higher expected utility than not starting it.
E.g. if an AI thinks it is about to be destroyed by default, but that starting a nuclear war (which it expects to lose) will distract its enemies and maybe give it the chance to survive and continue pursuing its objectives, then the nuclear war may be the better bet. (I discuss this kind of thing in "Disjunctive Scenarios of Catastrophic AI Risk [LW · GW]".)
Replies from: hamnox
↑ comment by hamnox · 2020-12-31T17:02:26.171Z · LW(p) · GW(p)
Not more broadly, different class. I'm thinking of, like, witch doctors making warriors bulletproof. If they believe its power will protect them, then breaking MAD becomes an option.
The AI in this scenario doesn't need to think at all. It could actually just be a magic 8 ball.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2020-12-31T17:33:43.186Z · LW(p) · GW(p)
Ah, right, that's indeed a different class. I guess I was too happy to pattern-match someone else's thought to my great idea. :-)
↑ comment by Gavin · 2019-08-26T21:47:13.386Z · LW(p) · GW(p)
A few plausible limited abilities that could provide decisive first move advantages:
- The ability to remotely take control of any networked computer
- The ability to defeat all conventional cryptography would provide a decisive advantage in the type of conflict we're currently seeing.
- The ability to reliably predict market price movements
comment by ryan_b · 2019-08-23T17:58:16.394Z · LW(p) · GW(p)
I broadly agree that Decisive Strategic Advantage is still plausible under a slow takeoff scenario. That being said:
Objection to Claim 1A: transporting 1939 Germany back in time to 1910 is likely to cause a sudden and near-total collapse of their warmaking ability because 1910 lacked the international trade and logistical infrastructure upon which 1939 Germany relied. Consider the Blockade of Germany, and that Czarist Russia would not be able to provide the same trade goods as the Soviet Union did until 1941 (nor could they be invaded for them, like 1941-1945). In general I expect this objection to hold for any industrialized country or other entity.
The intuition I am pointing to with this objection is that strategic advantage, including Decisive Strategic Advantage, is fully contextual; what appear to be reasonable simplifying assumptions are really deep changes to the nature of the thing being discussed.
To reinforce this, consider that the US invasion of Afghanistan is a very close approximation of the 30 year gap you propose. At the time the invasion began, the major source of serious weapons in the country was the Soviet-Afghan War which ended in 1989, being either provided by the US covert alliance or captured from the Soviets. You would expect at least local strategic advantage vis-a-vis Afghanistan. Despite this, and despite the otherwise overwhelming disparities between the US and Afghanistan, the invasion was a political defeat for the US.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-23T22:33:47.300Z · LW(p) · GW(p)
I disagree about 1939 Germany--Sure, their economy would collapse, but they'd be able to conquer western Europe before it collapsed, and use the resources and industry set up there. Even if they couldn't do that they would be able to reorient their economy in a year or two and then conquer the world.
I agree about the Afghanistan case but I'm not sure what lessons to draw from it for the AGI scenario in particular.
Replies from: ryan_b
↑ comment by ryan_b · 2019-08-26T15:06:33.129Z · LW(p) · GW(p)
I claim that 1939 Germany would not be able to conquer western Europe. There are two reasons for this: first, 1939 Germany did not have reserves in fuel, munitions, or other key industrial inputs to complete the conquest when they began (even allowing for the technical disparities); second, the industrial base of 1910 Europe wasn't able to provide the volume or quality of inputs (particularly fuel and steel) needed to keep the warmachine running. Europe would fall as fast as 1939 German tanks arrived - but I expect those tanks to literally run out of gas. Of course if I am wrong about either of those two core arguments I would have to update.
I am not sure what lessons to draw about the AGI scenario in particular either; mostly I am making the case for extreme caution in the assumptions we make for modelling the problem. The Afghanistan example shows that capability and goals can't be disentangled the way we usually assume. Another particularly common one is the perfect information assumption. As an example, my current expectation in a slow takeoff scenario is multiple AGIs which each have Decisive Strategic Advantage windows at different times but do not execute it for uncertainty reasons. Strictly speaking, I don't see any reason why two different entities could not have Decisive Strategic Advantage simultaneously, in the same way the United States and Soviet Union both had extinction-grade nuclear arsenals.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2019-08-26T18:27:26.247Z · LW(p) · GW(p)
Hmmm, well maybe you are right. I am not a historian, just an armchair general. I look forward to thinking and learning more about this in the future.
I like your point about DSA being potentially multiple & simultaneous.
Replies from: ryan_b
↑ comment by ryan_b · 2019-08-27T13:51:16.091Z · LW(p) · GW(p)
A book that completely changed my way of thinking about this sort of thing is Supplying War, by Martin Van Creveld. It is a history of logistics from the Napoleonic Era to WW2, mostly in Europe.
One startling revelation (to me) is that WW1 was the first war where supply lines became really important, because everything from the bores of the artillery to the gauge of the rail lines was sufficiently differentiated that you could no longer simply take the enemy's stuff and use it. At the same time, the presence of rail finally meant it was actually feasible to transport enough supplies from an industrial core to the border to make a consistent difference.
All prior conflicts in Europe relied on forage and capture of enemy equipment for the supply of armies.
comment by Raemon · 2019-08-24T16:55:53.087Z · LW(p) · GW(p)
I think I used to think things like this, and... I dunno maybe I still do.
But I feel quite confused about the fact that, historically, economic and military advantage seems temporary – both for literal empires and companies.
And on one hand, I can think of reasons why this is a special case for humans (who have particular kinds of coordination failures that occur on the timescale of generations), and might be less true for AI. On the other hand, if I were reasoning from first principles at the time, I might have assumed Rome would never fall because it had such an initial advantage.
Replies from: ryan_b
↑ comment by ryan_b · 2019-08-27T15:23:00.585Z · LW(p) · GW(p)
I am strongly convinced this boils down to bad decisions.
There's a post over at the Scholar's Stage about national strategy for the United States [low politics warning], and largely it addresses the lack of regional expertise. The part about Rome:
None of these men received any special training in foreign languages, cultures, diplomacy, or statecraft before attaining high rank. Men were more likely to be chosen for their social status than proven experience or familiarity with the region they were assigned to govern. The education of these officials was in literature, grammar, rhetoric, and philosophy, and their ability to govern was often judged on their literary merits. The historian Susan Mattern discusses one example of this in her masterful study of Roman strategy, Rome and the Enemy. The key passage comes from Tacitus, who reports that the Emperor Nero was better placed to deal with Parthian shenanigans in Armenia than Claudius, for he was advised by Burrus and Seneca, "men known for their expertise in such matters" (Annals 13.6).
And later:
This had significant strategic implications. Between the conquest of Dalmatia in the early days of the Principate and the arrival of the Huns in the days of Late Antiquity, it is difficult to find an enemy on Rome's northern borders that was not created by Rome itself. Rehman has already noted that the greatest defeat of the Principate, that of Teutoburg Forest, was the work of a man in Rome's employ. Teutoburg is but one point in a pattern that repeated for centuries. Most Germanic barbarian groups did not live in oppida, as the Celts did, and had little political hierarchy to speak of. When Romans selected local leaders to negotiate with, favor with trade or other boons, and use as auxiliary allies in war they were transforming petty chiefs into kings. Roman diplomatic norms, combined with unrelenting Roman military pressure, created the very military threats they were hoping to forestall.
The same pattern happened to China with the steppe, and (in my opinion) it matches pretty closely what is happening with the United States in Central Asia and the Middle East.
comment by orthonormal · 2021-01-13T03:57:34.236Z · LW(p) · GW(p)
It's hard to know how to judge a post that deems itself superseded by a post from a later year, but I lean toward taking Daniel at his word and hoping we survive until the 2021 Review comes around.
comment by ryan_b · 2021-01-11T22:48:46.714Z · LW(p) · GW(p)
Last minute review. Daniel Kokotajlo, the author of this post, has written a review as a separate post [LW · GW], within which he identifies a flawed argument here and recommends against this post's inclusion in the review on that basis.
I disagree with that recommendation [LW(p) · GW(p)]. The flaw Daniel identifies and improves does not invalidate the core claim of the post. It does appear to significantly shift the conclusion within the post, but:
- I still feel that this falls within the scope of the title and purpose of the post.
- I feel the shifted conclusion falls within the scope of the uncertainty expressed within the post.
- The details of the original uncertain conclusion were not the focus of conversation in the comments; we spent much more time on claims, examples, and intuitions. Further, the new conclusion doesn't invalidate the later work done over the course of 2020 that I can see, so it wouldn't make the post accidentally misleading.
I take a strong view of how much editing is acceptable in a post before it goes in the book; I am sympathetic to Raemon's ideal world [LW(p) · GW(p)].
Lastly, I am obligated to point out that Daniel and I were essentially mid-conversation when I started this review. This is pretty sub-optimal, but the deadline for reviews was fast approaching and I needed to get in under the wire. Any misrepresentation of Daniel's position is my mistake.
comment by habryka (habryka4) · 2020-12-14T03:27:12.378Z · LW(p) · GW(p)
As Robby said, this post isn't perfect, but it felt like it opened up a conversation on LessWrong that I think is really crucial, and was followed up in a substantial number of further posts by Daniel Kokotajlo that I have found really useful. Many of those were written in 2020, but of the ones written in 2019, this strikes me as the one I remember most.
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-12-14T09:51:37.422Z · LW(p) · GW(p)
Thanks. I agree it isn't perfect... in the event that it gets chosen, I'd get to revise it, right? I never did get around to posting the better version I promised.
Replies from: habryka4
↑ comment by habryka (habryka4) · 2020-12-17T02:40:11.730Z · LW(p) · GW(p)
Yep, you can revise it any time before we actually publish the book, though ideally you can revise it before the vote so people can be compelled by your amazing updates!
comment by Rob Bensinger (RobbBB) · 2020-12-09T14:46:10.216Z · LW(p) · GW(p)
I'm not a slow-takeoff proponent, and I don't agree with everything in this post; but I think it's asking a lot of the right questions and introducing some useful framings.
comment by KatjaGrace · 2020-02-18T06:56:45.040Z · LW(p) · GW(p)
The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being three doublings from taking over the world say, then most of the question of how it came to have a DSA seems to be the question of how it grew the other seventeen doublings. (Perhaps you are thinking of an initially large country growing fast via AI? Do we then have to imagine that all of the country's resources are going into AI?)
Replies from: daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-02-18T16:27:55.411Z · LW(p) · GW(p)
I was thinking of an initially large country growing fast via AI, yes. Still counts; it is soft takeoff leading to DSA. However I am also making much stronger claims than that--I think it could happen with a corporation or rogue AGI.
I don't think annual income is at all a good measure of how close an entity is to taking over the world. When Cortez landed in Mexico he had less than 1/100,000th of the income, population, etc. of the region, yet he ruled the whole place three years later. Then a few years after that Pizarro repeated the feat in Peru, good evidence that it wasn't just an amazing streak of luck.
Replies from: KatjaGrace
↑ comment by KatjaGrace · 2020-02-19T01:11:50.545Z · LW(p) · GW(p)
1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.
2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to smaller things.
3) Agree income doesn't equal taking over, though in the modern world where much purchasing occurs, it is closer. Not clear to me that AI companies do better as a fraction of the world in terms of military power than they do in terms of spending.
Replies from: matthew-barnett, daniel-kokotajlo
↑ comment by Matthew Barnett (matthew-barnett) · 2020-02-19T01:42:56.648Z · LW(p) · GW(p)
The concern with AI is that an initially tiny entity might take over the world.
This is a concern with AI, but why is it the concern? If e.g. the United States could take over the world because they had some AI-enabled growth, why would that not be a big deal? I'm imagining you saying, "It's not unique to AI," but why does it need to be unique? If AI is the root cause of something on the order of Britain colonizing the world in the 19th century, this still seems like it could be concerning if there weren't any good governing principles established beforehand.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-02-19T01:54:23.485Z · LW(p) · GW(p)
I like your point #2; I should think more about how the 30 year number changes with size. Obviously it's smaller for bigger entities and bigger for smaller entities, but how much? E.g. if we teleported 2020 Estonia back into 1920, would it be able to take over the world? Probably. What about 1970 though? Less clear.
Military power isn't what I'm getting at either, at least not if measured in the way that would result in AI companies having little of it. Cortez had, maybe, 1/10,000th of the military power of Mexico when he got started. At least if you measure in ways like "What would happen if X fought Y." Probably 1/10,000th of Mexico's military could have defeated Cortez' initial band.
If we try to model Cortez' takeover as him having more of some metric than all of Mexico had, then presumably Spain had several orders of magnitude more of that metric than Cortez did, and Western Europe as a whole had at least an order of magnitude more than that. So Western Europe had *many* orders of magnitude more of this stuff, whatever it is, than Mexico, even though Mexico had a similar population and GDP. So they must have been growing much faster than Mexico for quite some time to build up such a lead--and this was before the industrial revolution! More generally, this metric that is used for predicting takeovers seems to be the sort of thing that can grow and/or shrink orders of magnitude very quickly, as illustrated by the various cases throughout history of small groups from backwater regions taking over rich empires.
(Warning: I'm pulling these claims out of my ass, I'm not a historian, I might be totally wrong. I should look up these numbers.)
comment by hamnox · 2020-12-31T16:07:07.102Z · LW(p) · GW(p)
May I just say: Aaaaaa!
This post did not update my explicit model much, but it sure did give my intuition a concrete picture to freak out about. Claim 2 especially. I greatly look forward to the rewrite. Can I interest you in sending an outline/draft to me to beta read?
Given your nomination was for later work building on this post and spinning off discussion, you can likely condense this piece and summarize the later work / responses. (Unless you are hoping they get separately nominated for 2020?)
Your "See: Colonialism" as a casual aside had me cracking up a little. Between lackluster history education and political renarratization, I don't think you can assume a shared context for what that even means. You expand the points much more concretely in
Cortés, Pizarro, and Afonso as Precedents for Takeover.
(The colonizers do not love you, nor do they hate you, but you live on resources they can exploit.)
In the what a soft takeoff might look like section, you paint a picture of one company gaining quiet advantage and address coalitions as a not-very-possible alternative. You can expand on this with examples from "The date of AI Takeover is not the day the AI takes over" and "Against GDP as a metric for timelines and takeoff speeds", comparing other models that similarly obscure when the decisive strategic advantage of an AI hits a point of no return.
Good skill!
Replies from: daniel-kokotajlo, daniel-kokotajlo
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-10T14:13:49.974Z · LW(p) · GW(p)
Thanks! Yeah I like my later work more and hope it gets nominated for 2020. I've compiled it all into a sequence now.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-10T14:11:14.839Z · LW(p) · GW(p)
Oh damn, didn't see this till now! Sending you the gdoc of my review now.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-10T18:14:50.352Z · LW(p) · GW(p)
I've written up a review here [LW · GW], which I made into a separate post because it's long.
Now that I read the instructions more carefully, I realize that I maybe should have just put it here and waited for mods to promote it if they wanted to. Oops, sorry, happy to undo if you like.