Good point about the colonization. One thing I was surprised to learn when researching the conquistadors is that Muslim merchants, fleets, armies, and rulers had penetrated into India, Indonesia, all around the Indian Ocean, and I think even into China by the time the Portuguese showed up. Malacca was ruled by a Muslim, for example. And yeah, no doubt this led to a lot of resources flowing back towards the Middle East.
How much damage did the Mongols do to Muslim science? My vague guess would be, quite a lot? Perhaps this is also relevant.
I'm surprised this post didn't get more comments and spark more further research. Rereading it, I think it's both an excellent overview/distillation, and also a piece of strategy research in its own right. I wish there were more things like this. I think this post deserves to be expanded into a book or website and continually updated and refined.
I'm no ML expert, but thanks to this post I feel like I have a basic grasp of some important ML theory. (It's clearly written and has great graphs.) This is a big deal because this understanding of deep double descent has shaped my AI timelines to a noticeable degree.
Is there something similar for The Codex? On Amazon I see a physical collection of slatestarcodex essays, but it has poor reviews, saying it's just a scrape of the website without images. Is it even official?
See, if they both had engineering cultures and proto-capitalism, that seems like evidence for the "Because colonialism" hypothesis.
But I do think the "never really unified" hypothesis is intriguing. After all, the Chinese not only destroyed their own treasure fleet but basically banned maritime trade and sent the army to depopulate their own coastline for 20km or so inland, IIRC, because of the misguided policy decisions of the central government. No central government, no misguided policy decisions applied to entire civilizations.
Yeah, good point, maybe it was something like "Will to explore and colonize" that was the most important variable, even more important than the ships+navigation tech. Or maybe it was a more generic tech advantage, that made it cheaper and more profitable for Europeans to do it than for the Chinese or Arabs to do it.
I think the ships+navigation tech are definitely worth mentioning at least, because they were necessary, and not easy to acquire. And Europeans were certainly disproportionately good at it at the time, as far as I can tell. I know their ships were (in the relevant ways) slightly superior to the ships in the Indian Ocean in 1500, and while I haven't looked this up, I'd be willing to bet that their navigation tech (and therefore, their ability to cross the Pacific and Atlantic) was superior to the Chinese. The Polynesians had excellent navigation tech, but tiny ships and insufficient military or economic tech to exploit this advantage. No one else comes close to those groups as far as I know.
OK, thanks. I find it hard to take seriously the idea that the IR (Industrial Revolution) caused colonialism, since colonialism happened first (just look at the world in 1750!). Maybe the idea is that there was some underlying advantage Europe had which caused both colonialism and the IR?
I agree that maybe western culture is part of the explanation for why colonialism happened in the West more than it did elsewhere. But I think having good ships and navigation tech is a bigger part of the explanation.
I like the point that resources in general don't seem to cause technological growth. Russia, North Korea, etc. Vaniver mentions China below; maybe a better example would be the Mongol Empire, which suddenly ruled a huge swath of Eurasia after Genghis Khan but didn't spark an IR. (Though maybe it did spark a bunch of new tech developments? Idk, would be interested to hear.)
FWIW, my current view is something like: "WEIRD culture helped there be science and market institutions to a mildly strong extent, though not dramatically more than other places like China; then the Europeans lucked into some really good ships & navigation tech (and kings eager to use them, unlike some emperors I could mention) and started sailing around a lot, and then this spurred more market institutions and more science, creating a feedback loop / snowball effect. In this story, WEIRD culture is important, but it's the ships+navigation+kings that's the most important thing." I'm no historian though and would love to hear criticisms of this take.
The question is whether the IR would have happened in China if they, and not the Europeans, controlled the world's oceans, plantations, and mines. (And by that I mean, imagine if in the 1700s the Americas were all Chinese colonies, if the Indian and Pacific and Atlantic oceans were controlled by Chinese fleets and port-forts, if kingdoms all along the coast of India and Arabia and Africa and Europe swore fealty to China instead of Europe... In this scenario, would the IR still have happened in Britain?)
Yeah, China was rich, but plenty of rich places failed to generate IRs. The unique thing about Europe prior to the IR may have been its WEIRDness... but it also may have been the fact that it controlled so much of the world at the time. Why would this help? Well, maybe having a glut of resources for a relatively fixed labor pool raised GDP per capita a bunch and incentivized labor-saving devices like windmills and watermills and eventually steam engines. Maybe all the oceanic trade made for a robust free-ish market and spurred the development of good financial instruments and institutions (in other words, maybe what makes capitalism work so well was particularly present due to all the oceanic trade).
Yes, this is further support for the "Because colonialism" theory. Maybe there's a nearby possible world where the Emperor got really excited about exploration and colonization, and sent the fleet out again and again instead of burning it, and then historians in the 2000s write big books about why the Industrial Revolution happened first in the East because of Confucian values.
I'm surprised that none of the books on the list of books explaining why the IR happened in the West and not China said "Because colonialism." Look at the world in 1700, just prior to the IR: maybe China was economically and technologically advanced, but they didn't control the world's oceans, plantations, and mines. Surely there are books out there arguing for this theory. Have you read any of them? What do you think about this theory? Do the books you mention consider it and rebut it?
(I've heard people say that GDP per capita was higher in the West before colonialism even began, and use this as a rebuttal of the idea that colonialism was the cause of the IR. Is this it? To be really convincing, I'd like to see some sort of analysis of how the IR started in Britain but Britain hadn't begun to benefit much from colonialism by the time the IR started. Or something like that.)
(Or is the idea that colonialism did cause the IR, but colonialism was in turn caused by technological advantages that were the result of WEIRDness?)
Nah. Once it's clear we are all doomed, I'll get by just fine probably--it's unlikely that I'll have spent literally all my money and social capital by then. And if I don't, it won't matter much anyway.
Thanks. I don't think that condition holds, alas. I'm trying to optimize for making the singularity go well, and don't care much (relatively speaking) about my level of influence afterwards. If you would like to give me some of your influence now, in return for me giving you some of my influence afterwards, perhaps we can strike a deal!
Facebook is a single system and therefore not subject to Moloch. Concretely: yes, the algorithm could start manipulating election results to get more power, but if that happened it would be an ordinary AI alignment failure rather than Moloch.
The Bitcoin example seems more like Moloch to me, but no more so than (and in the same way as) the market economy already is: people who build more infrastructure get more money, etc. We already know that in the long run the market economy leads to terrible outcomes due to Moloch.
I'm confused about why you only updated mildly away from slow takeoff. It seems that you've got a pretty good argument against slow takeoff here:
Are there simple changes to chimps (or other animals) that would make them much better at accumulating culture?
Will humans continually pursue all simple yet powerful changes to our AIs?
Seems like if the answer to the first question is No, then there really is some relatively sharp transition to much more powerful culture-accumulating capabilities, that humans crossed when they evolved from chimp-like creatures. Thus, our default assumption should be that as we train bigger and bigger neural nets on more and more data, there will also be some relatively sharp transition. In other words, Yudkowsky's argument is correct.
Seems like if the answer to the second question is No, then Paul's disanalogy between evolution and AI researchers is also wrong; both evolution and AI researchers are shoddy optimizers that sometimes miss things etc. So Yudkowsky's argument is correct.
Now, you put 50% on the first answer being No and 70% on the second answer being No. So shouldn't you have something like 85% credence that Paul is wrong and Yudkowsky's argument is correct? And isn't that a fairly big update against slow takeoff?
Maybe the idea is that you are meta-uncertain, unsure you are reasoning about this correctly, etc.? Or maybe the idea is that Yudkowsky's argument could easily be wrong for other reasons than the ones Paul gave? Fair enough.
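For what it's worth, the arithmetic behind the ~85% figure checks out under the simplifying assumption (mine, not stated in the original exchange) that the two answers are independent:

```python
# Sanity check on the ~85% figure, assuming the two questions
# are answered independently.
p_first_no = 0.5   # P(no simple culture-boosting changes to chimps exist)
p_second_no = 0.7  # P(humans won't pursue all simple yet powerful changes)

# Yudkowsky's argument goes through if EITHER answer is No.
p_either_no = 1 - (1 - p_first_no) * (1 - p_second_no)
print(p_either_no)  # 0.85
```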
Yeah, probably not. It would need to be an international agreement I guess. But this is true for lots of proposals. On the bright side, you could maybe tax the chip manufacturers instead of the AI projects? Idk.
Maybe one way it could be avoided is if it came packaged with loads of extra funding for safe AGI research, so that overall it is still cheapest to work from the US.
FWIW, I made these judgments quickly and intuitively and thus could easily have just made a silly mistake. Thank you for pointing this out.
So, what do I think now, reflecting a bit more?
--The 7% judgment still seems correct to me. I feel pretty screwed in a world where our entire community stops thinking about this stuff. I think it's because of Yudkowskian pessimism combined with the heavy-tailed nature of impact and research. A world without this community would still be a world where people put some effort into solving the problem, but there would be less effort, by less capable people, and it would be more half-hearted/not directed at actually solving the problem/not actually taking the problem seriously.
--The other judgment? Maybe I'm too optimistic about the world where we continue working. But idk, I am rather impressed by our community and I think we've been making steady progress on all our goals over the last few years. Moreover, OpenAI and DeepMind seem to be taking safety concerns mildly seriously due to having people in our community working there. This makes me optimistic that if we keep at it, they'll take it very seriously, and that would be great.
Thank you for asking this question and for giving that break-down. I was wondering something similar. I am not an AI scientist but DL seems like a very big deal to me, and thus I was surprised that so many people seemed to think we need more insights on that level. My charitable interpretation is that they don't think DL is a big deal.
I think I mostly agree with you about the long run, but I think we have more short-term hurdles that we need to overcome before we even make it to that point, probably. I will say that I'm optimistic that we haven't yet thought of all the ways advances in tech will help collective epistemology rather than hinder it. I notice you didn't mention debate; I am not confident debate will work but it seems like maybe it will.
In the short run, well, there's also debate I guess. And the internet's default of conversations being recorded and easily findable by everyone probably worked in favor of collective epistemology. Plus there is Wikipedia, etc. I think the internet in general has lots of things in it that help collective epistemology... it just also has things that hurt, and recently I think the balance is shifting in a negative direction. But I'm optimistic that maybe the balance will shift back. Maybe.
Another cool thing about this tax is that it would automatically counteract decreases in the cost of compute. Say we make the tax 10% of the current cost of compute. Then when the next generation of chips comes online, and the price drops by an order of magnitude, automatically the tax will be 100% of the cost. Then when the next generation comes online, the tax will be 1000%.
This means that we could make the tax basically nothing even for major corporations today, and only start to pinch them later.
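A quick sketch of this dynamic, with illustrative numbers of my own (the 10x-per-generation price drop and the arbitrary units are assumptions for illustration, not claims from the comment):

```python
# A fixed per-unit compute tax, set at 10% of today's compute price,
# automatically grows as a fraction of the price as chips get cheaper.
initial_price = 1.0          # arbitrary units per unit of compute
tax = 0.10 * initial_price   # fixed tax: 10% of today's price

price = initial_price
for generation in range(3):
    # Each chip generation drops prices ~10x (assumption).
    print(f"gen {generation}: tax is {tax / price:.0%} of compute cost")
    price /= 10
# Prints 10%, then 100%, then 1000% of the compute cost.
```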
Maybe a tax on compute would be a good and feasible idea?
--Currently the AI community is mostly resource-poor academics struggling to compete with a minority of corporate researchers at places like DeepMind and OpenAI with huge compute budgets. So maybe the community would mostly support this tax, as it levels the playing field. The revenue from the tax could be earmarked to fund "AI for good" research projects. Perhaps we could package the tax with additional spending for such grants, so that overall money flows into the AI community, whilst reducing compute usage. This will hopefully make the proposal acceptable and therefore feasible.
--The tax could be set so that it is basically 0 for everything except for AI projects above a certain threshold of size, and then it's prohibitive. To some extent this happens naturally since compute is normally measured on a log scale: If we have a tax that is 1000% of the cost of compute, this won't be a big deal for academic researchers spending $100 or so per experiment (Oh no! Now I have to spend $1,000! No big deal, I'll fill out an expense form and bill it to the university) but it would be prohibitive for a corporation trying to spend a billion dollars to make GPT-5. And the tax can also have a threshold such that only big-budget training runs get taxed at all, so that academics are completely untouched by the tax, as are small businesses, and big businesses making AI without the use of massive scale.
--The AI corporations and most of all the chip manufacturers would probably be against this. But maybe this opposition can be overcome.
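To make the threshold idea above concrete, here's a minimal sketch; the cutoff and the 1000% rate are hypothetical numbers chosen for illustration, not a worked-out proposal:

```python
# Hypothetical threshold scheme: compute spending below a cutoff is
# untaxed; spending above it is taxed at 1000%.
THRESHOLD = 10_000_000   # assumed cutoff in dollars (illustrative)
RATE = 10.0              # 1000% tax on spending above the threshold

def compute_tax(spend: float) -> float:
    """Tax owed on a training run costing `spend` dollars."""
    taxable = max(0.0, spend - THRESHOLD)
    return RATE * taxable

print(compute_tax(1_000))          # academic-scale run: 0.0
print(compute_tax(1_000_000_000))  # billion-dollar run: heavily taxed
```

Under this scheme, academics and small businesses pay nothing, while massive-scale training runs face a cost multiple.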
The other day I heard this anecdote: several years ago, someone's friend was dismissive of AI risk concerns, thinking that AGI was very far in the future. When pressed about what it would take to change their mind, they said their fire alarm would be AI solving Montezuma's Revenge. Well, now it's solved; what do they say? Nothing; if they noticed, they didn't say. If pressed on it, they would probably say they were wrong before to call that their fire alarm.
This story fits with the worldview expressed in "There's No Fire Alarm for AGI." I expect this sort of thing to keep happening well past the point of no return.
Yes, though as a nitpick I don't think the black line is the singularity that got cancelled; that one was supposed to happen in 2020 or so, and as you can see the black line diverges from history well before 1950.
I think I mostly agree with you about innovation, but (a) I think that building AI will increasingly be more like building a bigger airport or dam, rather than like inventing something new (resources are the main constraint; ideas are not, happy to discuss this further), and (b) I think that things in the USA could deteriorate, eating away at the advantage the USA has, and (c) I think algorithmic innovations created in the USA will make their way to China in less than a year on average, through various means.
Your model of influence is interesting, and different from mine. Mine is something like: "For me to positively influence the world, I need to produce ideas which then spread through a chain of people to someone important (e.g. someone building AI, or deciding whether to deploy AI). I am separated from important people in the USA by fewer degrees of separation, and moreover the links are much stronger (e.g. my former boss lives in the same house as a top researcher at OpenAI), compared to important people in China. Moreover it's just inherently more likely that my ideas will spread in the US network than in the Chinese network because my ideas are in English, etc. So I'm orders of magnitude more likely to have a positive effect in the USA than in China. (But, in the long run, there'll be fewer important people in the USA, and they'll be more degrees of separation away from me, and a greater number of poseurs will be competing for their attention, so this difference will diminish.)" Mine seems more intuitive/accurate to me so far.
I'm not sure either, but it seems true to me. Here goes intuition-conveying attempt... First, the question of what counts as your data seems like a parameter that must be pinned down one way or another, and as you mention there are clearly wrong ways to do it, and meanwhile it's an open philosophical controversy, so on those grounds alone it seems plausibly relevant to building an aligned AI, at least if we are doing it in a principled way rather than through prosaic (i.e. we do an automated search for it) methods. Second, one's views on what sorts of theories fit the data depend on what you think your data is. Disputes about consciousness often come down to this, I think. If you want your AI to be physicalist rather than idealist or Cartesian dualist, you need to give it the corresponding notion of data. And what kind of physicalist? Etc. Or you might want it to be uncertain and engage in philosophical reasoning about what counts as its data... which also sounds like something one has to think about; it doesn't come for free when building an AI. (It does come for free if you are searching for an AI.)
OK, cool. Well, I'm still a bit confused about why my status matters for this--it's relative influence that matters, not absolute influence. Even though my absolute influence may be low, it seems higher in the US than in Asia, and thus higher in short-timelines scenarios than long-timelines scenarios. Or so I'm thinking. (Because, as you say, my influence flows through the community.)
You might be right about the long game thing. I agree that we'll learn more and grow more in size and wealth over time. However, I think (a) the levers of the world will shift away from the USA, (b) the levers of the world will shift away from OpenAI and DeepMind and towards more distributed giant tech companies and government projects advised by prestigious academics (in other words, the usual centers of power and status will have more control over time; the current situation is an anomaly) and (c) various other things might happen that effectively impose a discount rate.
So I don't think the two ways of looking at the rationalist community are in conflict. They are both true. It's just that I think considerations a+b+c outweigh the improvement in knowledge, wealth, size etc. consideration.
On policy implications: I think that a new theory almost always generates at least some policy implications. For example, relativity vs. Newton changes how we design rockets and satellites. Closer to home, multiverse theory opens up the possibility of (some kinds of) acausal trade. I think "it all adds up to normality" is something that shouldn't be used to convince yourself that a new theory probably has the same implications; rather, it's something that should be used to convince yourself that the new theory is incorrect, if it seems to add up to something extremely far from normal, like paralysis or fanaticism. If it adds up to something non-normal but not that non-normal, then it's fine.
I brought up those people as an example of someone you probably disagree with. My purpose was to highlight that choices need to be made about what your data is, and different people make them differently. (For an example closer to home, Solomonoff induction makes it differently than you do, I predict.) This seems to me like the sort of thing one should think about when designing an AI one hopes to align. Obviously if you are just going for capabilities rather than alignment you can probably get away with not thinking hard about this question.
Thanks. The nonexistence of warning shots is not in my control, but neither is the existence of a black hole headed for earth. I'm justified in acting as if there isn't a black hole, because if there is, we're pretty screwed anyway. I feel like maybe something similar is true (though to a lesser extent) of warning shots, but I'm not sure. If we have a 1% chance of success without warning shots and a 10% chance with warning shots, then I probably increase our overall chance of success more if I focus on warning shot scenarios.
Rudeness no problem; did I come across as arrogant or something?
I agree that that's the major variable. And that's what I had in mind when I said what I said: It seems to me that this community has more influence in short-timeline worlds than long-timeline worlds. Significantly more. Because long-timeline worlds involve AI being made by the CCP or something. But maybe I'm wrong about that! You seem to think that long-timeline worlds involve someone like you coming up with a new paradigm, and if that's true, then yeah maybe it'll still happen in the Bay after all. Seems somewhat plausible to me.
See, this is an important consideration for me! Currently I am unsure what the balance is. Here are some reasons to think "we" have more influence over short timelines:
--I think takeoff is more likely to be fast the longer the timelines, because there's more hardware overhang and a higher probability that some new paradigm shift or insight precipitated the advance in AI capabilities. And with fast takeoff we will have fewer warning shots, and warning shots are, I think, our best hope.
--The longer it takes for TAI to arrive, the higher the chance that it gets built in Asia rather than the West, I think. And I for one have much more influence over the West.
If you can convince me that the balance of considerations favors me working on long-timelines plans (relative to my credences) I would be very grateful.
This might be a good time to talk about different ways "it all adds up to normality" is interpreted.
I sometimes hear people use it in a stronger sense, to mean not just that the new theory must make the same successful predictions but that also the policy implications are mostly the same. E.g. "Many worlds has to add up to normality, so one way or another it still makes sense for us to worry about death, try to prevent suffering, etc." Correct me if I'm wrong, but this sort of thing isn't entailed by your proof, right?
There's also the issue of what counts as the data that the new theory needs to correctly predict. Some people think that "This is a table, damn it! Not a simulated table!" is part of their data that theories need to account for. What do you say to them?