AGI will drastically increase economies of scale
post by Wei Dai (Wei_Dai) · 2019-06-07T23:17:38.694Z · LW · GW · 26 comments
In Strategic implications of AIs’ ability to coordinate at low cost [LW · GW], I talked about the possibility that different AGIs can coordinate with each other much more easily than humans can, by doing something like merging their utility functions together. It now occurs to me that another way for AGIs to greatly reduce coordination costs in an economy is for each AGI (or the copies of each AGI) to profitably take over much larger chunks of the economy than companies currently control. This can be done even with AGIs that don't have explicit utility functions, such as copies of an AGI that are all corrigible/intent-aligned to a single person.
Today, there are many industries with large economies of scale, due to things like fixed costs, network effects, and reduced deadweight loss when monopolies in different industries merge (because they can internally charge each other prices that equal marginal costs). But coordination costs among humans increase super-linearly with the number of people involved (see Moral Mazes and Short Termism [LW · GW] for a related recent discussion), which creates diseconomies of scale that counterbalance the economies of scale, so companies tend to grow to a certain size and then stop. But an AGI-operated company, where for example all the workers are AGIs that are intent-aligned to the CEO, would eliminate almost all of the internal coordination costs (i.e., all of the coordination costs that are caused by value differences, such as all the things described in Moral Mazes, "market for lemons" or lost opportunities for trade due to asymmetric information, principal-agent problems, monitoring/auditing costs, costly signaling, and suboptimal Nash equilibria in general), allowing such companies to grow much bigger. In fact, purely from the perspective of maximizing the efficiency/output of an economy, I don't see why it wouldn't be best to have (copies of) one AGI control everything.
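As a toy illustration of the "merged monopolies can internally charge marginal cost" point, here is a minimal sketch (not from the original post; the linear demand curve and all numbers are made up) of the standard double-marginalization effect: two separate monopolists in a supply chain each add their own markup, while a merged firm that transfers the input internally at marginal cost sells more output at a lower price and creates less deadweight loss.

```python
# Toy double-marginalization sketch (illustrative numbers only).
# Linear demand P = a - b*Q; the upstream firm produces an input at
# marginal cost c, the downstream firm turns it into the final product.

a, b, c = 100.0, 1.0, 20.0  # hypothetical demand intercept, slope, and marginal cost

# Separate monopolists: upstream picks a wholesale price w, and the
# downstream firm treats w as its marginal cost when setting quantity.
w = (a + c) / 2                    # upstream's profit-maximizing wholesale price
q_separate = (a - w) / (2 * b)     # downstream's profit-maximizing quantity
p_separate = a - b * q_separate    # resulting consumer price

# Merged firm: the upstream "division" charges the downstream "division"
# its true marginal cost c, so the firm maximizes (P(Q) - c) * Q directly.
q_merged = (a - c) / (2 * b)
p_merged = a - b * q_merged

# Deadweight loss relative to the efficient quantity where P = c.
q_efficient = (a - c) / b
dwl = lambda q: 0.5 * b * (q_efficient - q) ** 2

print(f"separate: Q={q_separate:.0f}, price={p_separate:.0f}, DWL={dwl(q_separate):.0f}")
print(f"merged:   Q={q_merged:.0f}, price={p_merged:.0f}, DWL={dwl(q_merged):.0f}")
# separate: Q=20, price=80, DWL=1800
# merged:   Q=40, price=60, DWL=800
```

The merged firm is still a monopolist (so some deadweight loss remains), but removing the second markup cuts the loss substantially, and the same logic extends as more links in the chain are merged.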
If I'm right about this, it seems quite plausible that some countries will foresee it too, and as soon as it can feasibly be done, nationalize all of their productive resources and place them under the control of one AGI (perhaps intent-aligned to a supreme leader or to a small, highly coordinated group of humans), which would allow them to out-compete any other countries that are not willing to do this (and don't have some other competitive advantage to compensate for this disadvantage). This seems to be an important consideration that is missing from many people's pictures of what will happen after (e.g., intent-aligned) AGI is developed in a slow-takeoff scenario.
26 comments
comment by Rohin Shah (rohinmshah) · 2019-06-09T16:48:27.601Z · LW(p) · GW(p)
Planned summary:
Economies of scale would normally mean that companies would keep growing larger and larger. With human employees, the coordination costs grow superlinearly, which ends up limiting the size to which a company can grow. However, with the advent of AGI, many of these coordination costs will be removed. If we can align AGIs to particular humans, then a corporation run by AGIs aligned to a single human would at least avoid principal-agent costs. As a result, the economies of scale would dominate, and companies would grow much larger, leading to more centralization.
Planned opinion:
This argument is quite compelling to me under the assumption of human-level AGI systems that can be intent-aligned. Note though that while the development of AGI systems removes principal-agent problems, it doesn't remove issues that arise due to information asymmetry.
It does seem like this doesn't hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
It seems like the argument should mainly make us more worried about stable authoritarian regimes: the main effect based on this argument is a centralization of power in the hands of the AGI's overseers. This won't happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn't seem to be a strong reason to expect that to stop. It could happen with government, but if long-term governmental power still rests with the people via democracy, that seems okay. So the risky situation seems to be when the government gains power, and the people no longer have effective control over government. (This would include scenarios with e.g. a government that has sufficiently good AI-fueled propaganda that they always win elections, regardless of whether their governing is actually good.)
↑ comment by Wei Dai (Wei_Dai) · 2019-06-09T19:33:01.401Z · LW(p) · GW(p)
Thanks, I appreciate the chance to clarify/discuss before your newsletter goes out!
Note though that while the development of AGI systems removes principal-agent problems, it doesn’t remove issues that arise due to information asymmetry.
By "asymmetric information" I was referring to some specific ideas in economics, the short version of which is that if two people have different values, they often can't just ask each other to honestly tell them their private information. For example, part of the reason why monopolies are inefficient is that they can't do perfect price discrimination. If a monopolist knew exactly how much each potential buyer values their product, they could just charge that buyer one penny less than that value (plus the buyer's transaction cost) and the buyer would still make the purchase. There would be no deadweight loss to the economy because all positive-value transactions would go through. But because the buyer won't tell them how much they really value the product (if the monopolist charges one penny less than whatever they say, they'd just all give a low value), the monopolist ends up trying to maximize profit by charging a price that causes some potentially valuable trades to not go through.
So principal-agent problems are a subset of asymmetric information problems, and I think AGI would solve such problems because copies of a single AGI all have the same values (or are at least much closer to this than the situation with different humans) so they can just ask each other to honestly tell them whatever information they need.
ETA: By "information asymmetry" did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from "asymmetric information" that I'm talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don't confuse it with the concept of "asymmetric information" in economics. (I'll also add a link to that phrase in the post to clarify what I'm referring to.)
It does seem like this doesn’t hold with something like CAIS, where each AI service is optimized for a particular task, since there likely will be principal-agent problems between services.
This seems right to me, so I think contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
This won’t happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn’t seem to be a strong reason to expect that to stop.
Today, when companies get too big, they often become monopolists, and that (plus their huge internal coordination costs) tends to make the overall efficiency of the economy worse, so competitive pressures between countries force institutions into existence that limit the size of companies. With AGI-operated companies, these problems become smaller, because monopolies in different industries can merge without being limited by internal coordination costs, and the merged companies can internally charge each other efficient prices. In the limit of a single AGI controlling the whole economy, all such inefficiencies go away. So institutions that prevent companies from gaining too much power will perhaps persist for a while due to inertia (but even that's unclear, as Raemon suggests), but that probably won't last very long once the selection pressure for such institutions switches direction.
ETA: On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
↑ comment by Rohin Shah (rohinmshah) · 2019-06-10T17:36:54.360Z · LW(p) · GW(p)
ETA: By "information asymmetry" did you mean something more like the fact that different copies or parts of an AGI can have access to different information and it can be costly (on a technical level) to propagate that information across the whole AGI? If so that seems like a much smaller cost than the kind of cost from "asymmetric information" that I'm talking about. Also it seems like it would be good to use a different phrase to talk about what you mean, so people don't confuse it with the concept of "asymmetric information" in economics.
Yes, that's right, though I can see how it's confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering "communication costs", but there could also be costs from the fact that different parts have different competencies.
It's not clear to me that principal-agent costs are more important than the ones I'm talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company's "plan" (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
(I agree that the Moral Mazes arguments are primarily about principal-agent problems, but I don't know how much to believe Moral Mazes.)
This seems right to me, so I think contra Drexler, this is another reason to expect a strong competitive pressure to move from CAIS to AGI.
Seems reasonable, though I don't think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
With AGI-operated companies, these problems become smaller because monopolies in different industries can merge without being limited by internal coordination costs and these merged companies can internally charge each other efficient prices.
I don't see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I'm not very confident in this claim, I'm mostly parroting back things I've heard.)
In the limit of a single AGI controlling the whole economy, all such inefficiencies go away.
Right, but that requires government buy-in, which is exactly my model of risk in the opinion I wrote.
On second thought, part of the reason for such institutions to exist must also be domestic political pressure (from people who are afraid of too much concentration of power), so at least that pressure would persist in countries where such pressure exists or has much force in the first place.
Yeah, that's my primary model here. I'd be surprised but not shocked if competition between countries explained most of the effect.
↑ comment by Wei Dai (Wei_Dai) · 2019-06-10T22:11:45.387Z · LW(p) · GW(p)
Yeah, that’s my primary model here. I’d be surprised but not shocked if competition between countries explained most of the effect.
It seems worth noting here that when it looked for a while like the planned economy of the Soviet Union might outperform western free market economies (and even before that, when many intellectuals just thought based on theory that central planning would perform better *), there were a lot of people in the west who supported switching to socialism / central planning. Direct military competition (which Carl's paper focuses on more) would make this pressure even stronger. So if one country switches to the "one AGI controls everything" model (either deliberately or due to weak/absent existing institutions that work against centralization), it seems hard for other countries to hold out in the long run.
Does that seem right to you, or do you see things turn out a different way (in the long run)?
(* I realize this is also a cautionary tale about using theory to predict the future, like I'm trying to do now.)
↑ comment by Rohin Shah (rohinmshah) · 2019-06-11T00:38:54.149Z · LW(p) · GW(p)
Does that seem right to you, or do you see things turn out a different way (in the long run)?
I agree that direct military competition would create such a pressure.
I'm not sure that absent that there actually is competition between countries -- what are they even competing on? You're reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish? Perhaps in countries with lower economic efficiency, voters tend to put in a new government -- but in that case it seems like really the competition between countries is on "what pleases voters", which may not be exactly what we want but it probably isn't too risky if we have an AGI-fueled government that's intent-aligned with "what pleases voters".
(It's possible that you get politicians who look like they're trying to please voters but once they have enough power they then serve their own interests, but this looks like "the government gains power, and the people no longer have effective control over government".)
↑ comment by Wei Dai (Wei_Dai) · 2019-06-11T01:03:57.600Z · LW(p) · GW(p)
I’m not sure that absent that there actually is competition between countries—what are they even competing on? You’re reasoning as though they compete on economic efficiency, but what causes countries with lower economic efficiency to vanish?
I guess ultimately they're competing to colonize the universe, or be one of the world powers that have some say in the fate of the universe? Absent military conflict, the less efficient countries won't disappear, but they'll fall increasingly behind in control of resources and overall bargaining power, and their opinions just won't be reflected much in how the universe turns out.
↑ comment by Rohin Shah (rohinmshah) · 2019-06-11T06:08:00.929Z · LW(p) · GW(p)
In that case this model would only hold if governments:
- Actually think through the long-term implications of AI
- Think about this particular argument
- Have enough certainty in this argument to actually act upon it
Notably, there aren't any feedback loops for the thing-being-competed-on, and so natural-selection style optimization doesn't happen. This makes me much less likely to believe in arguments of the form "The thing-being-competed-on will have a high value, because there is competition" -- the mechanism that usually makes that true is natural selection or some equivalent.
↑ comment by Wei Dai (Wei_Dai) · 2019-06-11T07:06:26.256Z · LW(p) · GW(p)
I think I oversimplified my model there. Actually competing to colonize/influence the universe will be the last stage, when the long-term implications of AI and of this particular argument will already be clear. Before that, the dynamics would be driven more by things like internal political and economic processes (some countries already have authoritarian governments and would naturally gravitate towards more centralization of power through political means, and others do not have strong laws/institutions to prevent centralization of the economy through market forces), competition for power (such as diplomatic and military power) and prestige (both of which are desired by leaders and voters alike) on the world stage, and direct military conflicts.
All of these forces create pressure towards greater AGI-based centralization, while the only thing pushing against it appears to be political pressure in some countries against centralization of power. If those countries succeed in defending against centralization but fall significantly behind in economic growth as a result, they will end up not influencing the future of the universe much so we might as well ignore them and focus on the others.
↑ comment by Wei Dai (Wei_Dai) · 2019-06-10T19:44:41.700Z · LW(p) · GW(p)
Yes, that’s right, though I can see how it’s confusing based on the economics literature. Any suggestions for an alternative phrase? I was considering “communication costs”, but there could also be costs from the fact that different parts have different competencies.
This is longer, but maybe "coordination costs that are unrelated to value differences"?
It’s not clear to me that principal-agent costs are more important than the ones I’m talking about here. My experience of working in large companies is not that I was misaligned with the company, it was that the company’s “plan” (to the extent that one existed) was extremely large and complex and not something I could easily understand. It could be that this is actually the most efficient way to work even with intent-aligned agents, since communicating the full plan could involve very large communication costs.
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the "divisions" model, where each division works just like a separate company except that there is an overall CEO who "looks for rare chances to gain value by coordinating division activities" (such as, in my view, having divisions internally charge each other efficient prices instead of profit-maximizing prices), so you'd still gain efficiency as companies merge or get bigger through organic growth. In other words, coordination costs that are unrelated to value differences won't stop a single AGI controlling all resources from being the most efficient way to organize an economy.
While searching for that post, I also came across Firm Inefficiency which like Moral Mazes (but much more concisely) lists many inefficiencies that seem all or mostly related to value differences.
Seems reasonable, though I don’t think it is arguing against the main arguments in favor of CAIS (which to me are that CAIS seems more technically feasible than AGI).
I think it's at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract:
Perhaps surprisingly, strongly self-modifying agents lose their instrumental value even as their implementation becomes more accessible, while the likely context for the emergence of such agents becomes a world already in possession of general superintelligent-level capabilities.
(My argument says that a strongly self-modifying agent will improve faster than a self-improving ecosystem of CAIS with access to the same resources, because the former won't suffer from principal-agent costs while researching how to self-improve.)
I don’t see how this suggests that our existing institutions to prevent centralization of power will go away, since even now monopolies could merge, often want to merge, but are prevented by law from doing so. (Though I’m not very confident in this claim, I’m mostly parroting back things I’ve heard.)
Yeah I'm not very familiar with this either, but my understanding is that such mergers are only illegal if the effect "may be substantially to lessen competition" or "tend to create a monopoly", which technically (it seems to me) isn't the case when existing monopolies in different industries merge.
↑ comment by Rohin Shah (rohinmshah) · 2019-06-11T01:00:27.740Z · LW(p) · GW(p)
If companies had fully aligned workers and managers, they could adopt what Robin Hanson calls the "divisions" model where each division works just like a separate company except that there is an overall CEO that "looks for rare chances to gain value by coordinating division activities"
Once you switch to the "divisions" model your divisions are no longer competing with other firms, and all the divisions live or die as a group. So you're giving up the optimization that you could get via observing which companies succeed / fail at division-level tasks. I'm not sure how big this effect is, though I'd guess it's small.
While searching for that post, I also came across Firm Inefficiency which like Moral Mazes (but much more concisely) lists many inefficiencies that seem all or mostly related to value differences.
Yeah, I'm more convinced now that principal-agent issues are significantly larger than other issues.
I think it's at least one of the main arguments that Eric Drexler makes, since he wrote this in his abstract
Yeah, I agree it's an argument against that argument from Eric. I forgot that Eric makes that point (mainly because I have never been very convinced by it).
Yeah I'm not very familiar with this either, but my understanding is that such mergers are only illegal if the effect "may be substantially to lessen competition" or "tend to create a monopoly", which technically (it seems to me) isn't the case when existing monopolies in different industries merge.
My guess would be that the spirit of the law would apply, and that would be enough, but really I'd want to ask a social scientist or lawyer.
↑ comment by Wei Dai (Wei_Dai) · 2019-06-11T02:28:10.589Z · LW(p) · GW(p)
Once you switch to the “divisions” model your divisions are no longer competing with other firms, and all the divisions live or die as a group.
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses and the benefits of having that division to the rest of the company don't outweigh the losses. The latter may be somewhat tricky to judge though. Perhaps that's what you meant?
Yeah, I’m more convinced now that principal-agent issues are significantly larger than other issues.
I should perhaps mention that I still have some uncertainty about this, mainly because Robin Hanson said "There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination." But I haven't been able to find any place where he wrote down what those other factors are, nor did he answer when I asked him about it [LW(p) · GW(p)].
↑ comment by Rohin Shah (rohinmshah) · 2019-06-11T06:00:43.549Z · LW(p) · GW(p)
Why? Each division can still have separate profit-loss accounting, so you can decide to shut one down if it starts making losses, and the benefits of having that division to the rest of the company doesn't outweigh the losses. The latter may be somewhat tricky to judge though. Perhaps that's what you meant?
That's a good point. I was imagining that each division ends up becoming a monopoly in its particular area due to the benefits of within-firm coordination, which means that even if the division is inefficient there isn't an alternative that the firm can go with. But that was an assumption, and I'm not sure it would actually hold.
↑ comment by Raemon · 2019-06-09T19:37:44.969Z · LW(p) · GW(p)
I like the practice of posting the planned summary and opinion – not sure if you've been doing that a while or just started, but I think it's quite good as a practice.
↑ comment by Rohin Shah (rohinmshah) · 2019-06-10T17:09:47.459Z · LW(p) · GW(p)
I do it sometimes, especially when I expect it to be more controversial, and also when I'm not doing it super last minute an hour before the newsletter goes out. I'm a bit behind on everything but do want to move in this direction in the future.
↑ comment by Raemon · 2019-06-09T18:13:23.595Z · LW(p) · GW(p)
This won't happen with companies, because we already have institutions that prevent companies from gaining too much power, and there doesn't seem to be a strong reason to expect that to stop
...why do you expect those institutions to hold up in a world dominated by AGI or other powerful AI systems? (Maybe specifically, which institutions do you mean? The main options seem like 'governments' and 'other companies'.)
The US Government (I'm not sure about other governments) seems to lag something like 10 years behind technological developments. (This is a rough guess based on how long I recall seeing the government take significant action around regulating things – epistemic status: based on news articles that made it to my eyeballs... usually via Facebook.)
And that's before they start trying to take significant actions, which usually still seem pretty confused. (E.g., GDPR doesn't really incentivize the things it needed to incentivize.)
Assuming I'm roughly correct that there's a lag, there'd be a several year window where the institutions that normally regulate companies are going to be too confused and disoriented to do so. You might hope that those governments are also being empowered by advanced AI stuff, but I'm approximately as worried about that as I am about companies.
(I realize I didn't get that specific about the details, which are complicated, but I was somewhat surprised by your entire final paragraph and I'm not sure where the disagreement lies)
↑ comment by Rohin Shah (rohinmshah) · 2019-06-10T17:47:39.326Z · LW(p) · GW(p)
Maybe specifically, which institutions do you mean?
Governments, and specifically antitrust law.
I think there are big differences between the current situation and previous technologies: a) it is higher-stakes and b) even industry seems to be somewhat pro-regulation.
Assuming I'm roughly correct that there's a lag, there'd be a several year window where the institutions that normally regulate companies are going to be too confused and disoriented to do so.
I'm trying to cash this out into a more concrete failure story. Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says "you're too big, you need to be broken up" and the company says "no" and takes over the government?
↑ comment by Raemon · 2019-06-10T19:08:15.612Z · LW(p) · GW(p)
Are you imagining that a company develops AGI, starts becoming more and more powerful, and after 10 years of being confused and disoriented the government says "you're too big, you need to be broken up" and the company says "no" and takes over the government?
Sort of, but worse – I'm imagining something more like "the government already has a lot of regulatory capture going on, so the system-as-is is already fairly broken. Even given slow-ish takeoff assumptions, it seems like within 2-3 years there will be one or several companies that have gained unprecedented amounts of power. And by the time the government has even figured out an action to take, it will either have already been taken over, regulatory-captured in ways much deeper than previously, or rendered irrelevant."
↑ comment by Rohin Shah (rohinmshah) · 2019-06-11T01:03:53.503Z · LW(p) · GW(p)
Okay, I see, that makes sense and seems plausible, though I'd bet against it happening. But you've convinced me that I should qualify that sentence more.
↑ comment by FactorialCode · 2019-07-05T21:48:50.443Z · LW(p) · GW(p)
I suppose another way this could happen is that the company could set up a branch in a much poorer and easily corrupted nation. Since it's not constrained by people, it could build up a very large amount of power in a place that's beyond the reach of a superpower's antitrust institutions.
↑ comment by Rohin Shah (rohinmshah) · 2019-07-05T22:57:49.104Z · LW(p) · GW(p)
You'd have to get the employees to move there, which seems like a dealbreaker currently given how hot of a commodity AI researchers are.
↑ comment by FactorialCode · 2019-07-06T00:22:28.629Z · LW(p) · GW(p)
I suppose that's true. Although, assuming that the company has developed intent-aligned AGI, I don't see why the entire branch couldn't be automated, with the exception of a couple of human figureheads. Even if the AGI isn't good enough to do AI research, or the company doesn't trust it to do that, there are other methods for the company to grow. For instance, it could set up fully automated mining operations and factories in the corrupted country.
↑ comment by Rohin Shah (rohinmshah) · 2019-07-06T02:13:48.655Z · LW(p) · GW(p)
Oh, right, I forgot we were considering the setting where we already have AGI systems that can be intent aligned. This seems like a plausible story, though it only implies that there is centralization within the corrupted nation.
comment by riceissa · 2021-08-10T00:00:42.669Z · LW(p) · GW(p)
I was reading parts of Superintelligence recently for something unrelated and noticed that Bostrom makes many of the same points as this post:
If the frontrunner is an AI system, it could have attributes that make it easier for it to expand its capabilities while reducing the rate of diffusion. In human-run organizations, economies of scale are counteracted by bureaucratic inefficiencies and agency problems, including difficulties in keeping trade secrets. These problems would presumably limit the growth of a machine intelligence project so long as it is operated by humans. An AI system, however, might avoid some of these scale diseconomies, since the AI’s modules (in contrast to human workers) need not have individual preferences that diverge from those of the system as a whole. Thus, the AI system could avoid a sizeable chunk of the inefficiencies arising from agency problems in human enterprises. The same advantage—having perfectly loyal parts—would also make it easier for an AI system to pursue long-range clandestine goals. An AI would have no disgruntled employees ready to be poached by competitors or bribed into becoming informants.
comment by Wei Dai (Wei_Dai) · 2019-06-09T23:09:58.129Z · LW(p) · GW(p)
After writing this post, I recalled Carl Shulman's Whole Brain Emulation and the Evolution of Superorganisms, which discusses a similar topic, but with WBEs instead of de novo AGIs. I think the main difference between WBE and AGI in this regard is that WBE-based superorganisms probably can't grow as large as AGI-based ones, because with WBE there's a tradeoff between having all the WBEs share the same values and productive efficiency. (If you assign each task to a WBE that is best at that task, you'll end up with a bunch of WBEs with different values who then have to coordinate with each other.) With AGI, each copy of an AGI can specialize into some area and probably still maintain value alignment with the overall superorganism.
(However with some of the more advanced techniques for internally coordinating WBE-based superorganisms you may be able to get pretty close to what is possible with AGI.)
Here's a quote from Carl's paper about the implications of increased coordination / economies of scale due to WBE (which would perhaps apply to AGI even more strongly):
The market considerations discussed above might be circumvented by regulation (although enforcement might be difficult without emulation police officers, perhaps superorganisms for value stability) in a given national jurisdiction, but such regulations could impose large economic costs that would affect international competition. With economic doubling times of perhaps weeks, a major productivity or growth advantage from self-sacrificing software intelligences could quickly give a single nation a preponderance of economic and military power if other jurisdictions lacked or prohibited such (Hanson 1998b, forthcoming). Other nations might abandon their regulations to avoid this outcome, or the influence of the less regulated nation might spread its values as it increased its capabilities, including by military means. A sufficiently large economic or technological lead could enable a leading power to disarm others, but in addition a society dominated by superorganisms could also be much more willing to risk massive casualties to attain its objectives.
comment by Dr_Manhattan · 2024-08-02T18:09:10.194Z · LW(p) · GW(p)
So how does one invest in China, as a country?
comment by Ben Pace (Benito) · 2020-12-12T08:17:39.414Z · LW(p) · GW(p)
Seems like an important consideration, and explained concisely.