Why comparative advantage does not help horses

post by Sherrinford · 2024-09-30T22:27:57.450Z · LW · GW · 15 comments


This post discusses what statements about comparative advantage say and what they do not say, and why comparative advantage does not save horses from getting sent to glue factories. It is only marginally about AI.

Eliezer Yudkowsky, in "The Sun is big, but superintelligences will not spare Earth a little sunlight" [LW · GW], explains Ricardo's Law of Comparative Advantage and then writes:

Ricardo's Law doesn't say, "Horses won't get sent to glue factories after cars roll out."

Ricardo's Law doesn't say (alas!) that -- when Europe encounters a new continent -- Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land.

Their labor wasn't necessarily more profitable than the land they lived on.

Comparative Advantage doesn't imply that Earth can produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight, in goods and services valued by superintelligences. It would actually be rather odd if this were the case!

These (negative) statements are true, but they may also create confusion. Why? Because they leave unclear what Ricardo's insight does and does not apply to - that is, what comparative advantage is actually about.

Eliezer presents a standard example of comparative advantage, two countries trading, and he says that "Ricardo's Law of Comparative Advantage, ... shows that even if the country of Freedonia is more productive in every way than the country of Sylvania, both countries still benefit from trading with each other." But what does this require?

It is useful to consider Ricardo's example. It starts from a situation in which, in "the absence of trade, England requires 220 hours of work to both produce and consume one unit each of cloth and wine while Portugal requires 170 hours of work to produce and consume the same quantities". Given their production technologies, their natural endowments, or economies of scale (depending on the aspect of reality you are focusing on, or on the trade model you are employing), the countries together can produce more if they specialize. Or, as Wikipedia puts it after presenting a “typical modern interpretation of the classical Ricardian model”:  "by trading and specializing in a good for which it has a comparative advantage, each country can expand its consumption possibilities. Consumers can choose from bundles of wine and cloth that they could not have produced themselves in closed economies."

So Ricardo's model, first of all, tells us that by specializing, the countries can produce more. 
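To make this concrete, here is a minimal sketch using the per-unit labor costs from Ricardo's original example (100 and 120 hours for England's cloth and wine, 90 and 80 hours for Portugal's, consistent with the 220- and 170-hour totals above):

```python
# Ricardo's original numbers: hours of labor needed per unit of output.
hours = {
    "England":  {"cloth": 100, "wine": 120},  # 220 hours for one unit of each
    "Portugal": {"cloth": 90,  "wine": 80},   # 170 hours; Portugal is better at BOTH
}

# Autarky: each country produces one unit of each good for itself.
autarky = {"cloth": 2, "wine": 2}

# Specialization: England puts all 220 hours into cloth,
# Portugal puts all 170 hours into wine.
specialized = {
    "cloth": 220 / hours["England"]["cloth"],  # 2.2 units
    "wine":  170 / hours["Portugal"]["wine"],  # 2.125 units
}

for good in autarky:
    print(f"{good}: {autarky[good]} -> {specialized[good]}")
# cloth: 2 -> 2.2
# wine:  2 -> 2.125
```

Portugal is more productive at both goods, yet total output of both goods rises when each country specializes in the good it produces at the lower relative cost.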

The distribution of this surplus has to (weakly) benefit both countries. If wine is too expensive for the country that specializes in producing cloth, then it does not specialize in producing cloth. If two people, two countries, or two machines would be worse off by trading, they do not trade.
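What "too expensive" means can be made precise with the same numbers: trade benefits both countries only if the price of cloth, measured in wine, lies between the two countries' autarky price ratios. A minimal check:

```python
# A country's autarky price of cloth (in units of wine) is the ratio of
# its labor costs: hours per unit of cloth / hours per unit of wine.
england_autarky_price = 100 / 120   # ~0.83 wine per cloth
portugal_autarky_price = 90 / 80    # 1.125 wine per cloth

# England, the cloth specialist, gains if a unit of cloth buys more than
# ~0.83 units of wine; Portugal gains if it pays less than 1.125 units of
# wine per cloth. Any price strictly in between benefits both sides;
# outside that band, one party is better off not trading at all.
price = 1.0  # a candidate world price, in wine per cloth
both_gain = england_autarky_price < price < portugal_autarky_price
print(both_gain)  # True
```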

Worse off compared to what? Worse off than if they were on their own. For this comparison to make sense, the trading partners must be able to decide to be left alone. The trading partners must have ownership over themselves. 

Consider the seminal economic model of Ricardo's evil doppelgänger in the Mirror Universe: the dictator of England simply conquers Portugal (thereby getting rid of trade barriers) and forces everybody there to produce wine, cloth or whatever. The dictator should still choose an efficient labor allocation between the two countries, but whether this is good or bad for Portugal depends on the dictator's preferences. The outside option in the original Ricardo model is self-sufficiency; the outside option in the economic model of Mirror-Universe Ricardo is death.

Noting that specialization and self-ownership are central to Ricardo's model, let us reconsider the claims quoted in the beginning:

"Horses won't get sent to glue factories after cars roll out." 

Indeed, Ricardo's Law does not say this; but the reason is not that comparative advantage somehow fails for horses, it is that the theory of comparative advantage does not apply to them at all. Horses never chose to "trade" with their owners. They could not opt out.

"Ricardo's Law doesn't say (alas!) that -- when Europe encounters a new continent -- Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land.

Their labor wasn't necessarily more profitable than the land they lived on."

Ricardo's model does say that "Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land." However, it may be the case that Europeans can become even wealthier by taking the land. Relative strengths and their assessments change over time, and times in which group A respects the ownership rights of group B may not last forever.

Moreover, maybe there is no real need to choose between taking the labor and taking the land; sometimes a conqueror takes both. However, if a conqueror cannot simply command the labor of the conquered, then it is possible that the conquered people die.

"Comparative Advantage doesn't imply that Earth can produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight, in goods and services valued by superintelligences. It would actually be rather odd if this were the case!"

Applying the reasoning of the comparative-advantage model to this situation may be misleading. The assumed superintelligence can take what it wants to take, and if people could "produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight", then it could probably force people to produce it. 

15 comments


comment by Viliam · 2024-10-01T07:21:13.340Z · LW(p) · GW(p)

Yes, it is generally good to notice that some economic theorems are built upon certain assumptions, so we should not blindly extrapolate them to places where those assumptions do not apply.

"X and Y imply Z" is not the same as "Y implies Z; this universal law of nature was by historical coincidence first discovered in the situation of X, but we can safely extrapolate beyond that". It might be that case that Y implies Z even in absence of X, but that needs to be proved separately, not merely assumed.

comment by avturchin · 2024-10-01T20:55:03.749Z · LW(p) · GW(p)

But horses were not sent to the glue factories. The US horse population has passed its minimum and is now around 6.65 million. The peak was around 20 million in 1912; the minimum was 4.5 million in 1959.

The Native American population declined roughly twentyfold after Columbus, but rebounded after that and has now almost returned to pre-Columbian levels (I don't have exact numbers).

Replies from: AnthonyC, Sherrinford
comment by AnthonyC · 2024-10-02T13:25:11.616Z · LW(p) · GW(p)

There was an episode of Stargate SG-1 where, instead of killing humans, aliens subtly sterilized the whole population while greatly increasing the well-being of existing humans.

This is not a great outcome for the future of humanity, and non-violent population collapse and slow recovery is not necessarily a great outcome for horses, Native Americans, or anyone else. It still serves to illustrate the central point, just with less rhetorical flourish.

Replies from: avturchin
comment by avturchin · 2024-10-02T13:31:02.092Z · LW(p) · GW(p)

Maybe it is more interesting to ask a different question: why do we preserve horses if we do not need them for transportation? I think the answer is that their previous function left an imprint on our value system, and now people take pleasure in horse-riding.

Replies from: AnthonyC
comment by AnthonyC · 2024-10-02T13:34:02.305Z · LW(p) · GW(p)

Yes, that is also an interesting question. 

comment by Sherrinford · 2024-10-01T22:36:07.891Z · LW(p) · GW(p)

With respect to the horses, I did not check Eliezer's claim. However, the exact numbers of the horse population do not really seem to matter for Eliezer's point or for mine. The same is true for the rebound of the Native American population.

comment by AnthonyC · 2024-10-02T13:53:58.827Z · LW(p) · GW(p)

The assumed superintelligence can take what it wants to take, and if people could "produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight", then it could probably force people to produce it. 

I was with you until this sentence. This does not follow.

Let's suppose "$77 worth of sunlight" has a consistent, agreed upon meaning. Maybe "Enough sunlight to generate $77 worth of electricity (at current production cost of $0.04/kWh) with current human-made solar panels over their 25 yr lifespan." This is a little less than what falls on an average plot of land on the order of 20cmx20cm. The superintelligence could hire humans to build the solar panels, or use the electricity to run human-made equipment, or farm the plot to grow about 40g of corn.
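(A rough back-of-envelope, with an assumed insolation figure of ~4.5 kWh/m^2/day: valuing the raw sunlight itself at the same $0.04/kWh gives an area in the same ballpark as that plot.)

```python
# Rough back-of-envelope for "$77 worth of sunlight".
# Assumed (not given above): average US insolation ~4.5 kWh/m^2/day.
price_per_kwh = 0.04           # $/kWh, the production cost used above
lifetime_years = 25            # panel lifespan used above
insolation = 4.5 * 365         # ~1642 kWh of raw sunlight per m^2 per year

total_kwh = 77 / price_per_kwh             # 1925 kWh over the lifetime
kwh_per_year = total_kwh / lifetime_years  # 77 kWh per year

# Land area receiving that much raw sunlight each year:
area_m2 = kwh_per_year / insolation        # ~0.047 m^2, a plot ~22 cm on a side

print(f"{total_kwh:.0f} kWh total, {kwh_per_year:.0f} kWh/yr, {area_m2:.3f} m^2")
```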

What can a superintelligence do with that sunlight? Well, it can develop highly optimized 20-junction solar panels using advanced robotic facilities that can then generate 3x as much electricity. Maybe it has space-based manufacturing so it can use space-based solar and get 10-15x more electricity.  It can use the electricity to run superintelligently-designed equipment with greater efficiency and higher quality output than human minds and hands can invent, build, and operate. These options themselves include things like building indoor robot-operated farms that optimally distribute light/water/heat/nutrients to generate >10x more crops per unit area (>40x more with the aforementioned space-based production facilities), or directly chemically synthesize specific desired molecules.

In other words: to have humans produce what a superintelligence could produce from $77 of sunlight, it would cost the superintelligence many times more than $77, and the output quality would be much lower.

Replies from: Sherrinford, jmh
comment by Sherrinford · 2024-10-02T14:07:15.872Z · LW(p) · GW(p)

Right; my point was just that the hypothetical superintelligence does not need to trade with humans if it can force them; therefore trade-related arguments are not relevant. However, it is of course likely that such a superintelligence would neither want to trade nor care enough about the production of humans to force them to do anything.

Replies from: AnthonyC
comment by AnthonyC · 2024-10-02T14:14:22.538Z · LW(p) · GW(p)

Ok, then I agree. As written, it read to me like you were closing by suggesting the AI would want to go for the "conqueror takes both" option instead of the "give the natives smallpox and drive them from their ancestral homes while committing genocide" option.

comment by jmh · 2024-12-16T16:05:33.098Z · LW(p) · GW(p)

I'm not sure that is the correct take in the context of Comparative Advantage.

What would matter is not whether the SI could produce more than humans in a direct comparison, but what the opportunity cost for the SI might be. If the ASI is shifting effort that would have produced more value to it than it gets from the $77-of-sunlight output, AND that delta in value is greater than the loss from the humans' lower productivity, then the trade makes sense to the ASI.
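To illustrate the opportunity-cost logic with purely made-up numbers:

```python
# Illustrative numbers only: units produced per hour of effort.
asi   = {"A": 1000, "B": 500}   # the ASI has an absolute advantage in both goods
human = {"A": 1,    "B": 0.9}

# Opportunity cost of one unit of B, measured in units of A forgone:
asi_cost_of_B   = asi["A"] / asi["B"]        # 2.0 units of A per B
human_cost_of_B = human["A"] / human["B"]    # ~1.11 units of A per B

# The human has the lower opportunity cost of B, hence the comparative
# advantage in B: buying B from humans at any price between ~1.11 and 2
# units of A leaves both sides better off -- provided, as the OP notes,
# that the ASI cannot simply take the humans' labor instead.
print(asi_cost_of_B, human_cost_of_B)
```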

Seems to me the questions here are about resource constraints and whether an ASI needs to confront them in a meaningful way.

Replies from: AnthonyC
comment by AnthonyC · 2024-12-16T22:57:25.430Z · LW(p) · GW(p)

The traditional comparative advantage discussion also, as I understand it, does not account for entities that can readily duplicate themselves in order to perform more tasks in parallel, and does not account for the possibility of wildly different transaction costs between ASI and humans vs between ASI and its non-human robot bodies. Transaction costs in this scenario include monitoring, testing, reduced quality, longer lag times. It is possible that the value of using humans to do any task at all could actually be negative, not just low.

Human analogy: you need to make dinner using $77 worth of ingredients. A toddler offers to do it instead. At what price should you take the deal? When does the toddler have comparative advantage?

Replies from: jmh
comment by jmh · 2024-12-17T14:06:26.865Z · LW(p) · GW(p)

Yes, all those conjectures are possible as we don't yet know what the reality will be -- it is currently all conjecture.

The counterargument to yours, I think, is just: what opportunities is the ASI giving up by leaving anything for humans to do? What is the marginal value of all the things this ASI might be able to be doing that we cannot yet even conceive of?

I think the suggestion of a negative value is just out of scope here, as it doesn't fit into the theory of comparative advantage. That was kind of the point of the OP. It is fine to say comparative advantage will not apply, but we lack any proof of that and have plenty of examples where it actually does hold even when there is a clear absolute advantage for one side. Trying to reject the proposition by assuming it away seems a weak argument.

Replies from: AnthonyC, Sherrinford
comment by AnthonyC · 2024-12-18T13:20:34.155Z · LW(p) · GW(p)

It is a lot of assumption and conjecture, that's true. But it is not all conjecture and assumptions. When comparative advantage applies despite one side having an absolute advantage, we know why it applies. We can point to which premises of the theory are load-bearing, and know what happens when we break those premises. We can point to examples within the range of scenarios that exist among humans, where it doesn't apply, without ever considering what other capabilities an ASI might have.

I will say I do think there's a bit of misdirection, not by you, but by a lot of the people who like to talk about comparative advantage in this context, to the point that I find it almost funny that it's the people questioning premises (like this post does) getting accused of making assumptions and conjectures. I've read a number of articles that start by talking about how comparative advantage normally means there's value in one agent's labor even when another has absolute advantage, which is of course true. Then they simply assume the necessary premises apply in the context of humans and ASI, without actually ever investigating that assumption, looking for limits and edge cases, or asking what actually happens if and when they don't hold. In other words, the articles I've read, aren't trying to figure out whether comparative advantage is likely to apply in this case. They're simply assuming it will, and that those questioning this assumption or asking about the probability and conditions of it holding don't understand the underlying theory.

For comparative advantage to apply, there are conditions. Breaking the conditions doesn't always break comparative advantage, of course, because none of them perfectly apply in real life ever, but they are the openings that allow it to sometimes not apply. Many of these are predictably broken more often when dealing with ASI, meaning there will be more examples where comparative advantage considerations do not control the outcome.

A) Perfect factor mobility within but none between countries. 

B) Zero transportation costs. 

Plausibly these two apply about as well to the ASI scenario as among humans? Although with labor as a factor, human skill and knowledge act as limiters in ways that just don't apply to ASI.

C) Constant returns to scale - untrue in general, but even small discrepancies would be much more significant if ASI typically operates at much larger or much more finely tuned scale than humans can.

D) No externalities - potentially very different in ASI scenario, since methods used for production will also be very different in many cases, and externalities will have very different impacts on ASI vs on humans.

E) Perfect information - theoretically impossible in the ASI scenario; the ASI will have better information and understanding thereof.

F) Equivalent products that differ only in price - not true in general, quality varies by source, and ASI amplifies this gap.

For me, the relevant questions, given all this, are 1) Will comparative advantage still favor ASI hiring humans for any given tasks? 2) If so, will the wage at which ASI is better off choosing to pay humans be at or above subsistence? 3) If so, are there enough such scenarios to support the current human population? 4) Will 1-3 continue to hold in the long run? 5) Are we confident enough in 1-4 for these considerations to meaningfully affect our strategy in developing and deploying AI systems of various sorts?

I happily grant that (1) is likely. (2) is possible but I find it doubtful except in early transitional periods. (3)-(4) seem very, very implausible to me. (5) I don't know enough about to begin to think about concretely, which means I have to assume "no" to avoid doing very stupid things.

comment by Sherrinford · 2024-12-17T20:12:02.663Z · LW(p) · GW(p)

I think I did not assume anything away. I pointed out that the theory of comparative advantage rests on assumptions, in particular autonomy. If someone can just force you to surrender your production (without a loss of production value), he will not trade with you (except maybe if he is nice).

comment by Seth Herd · 2024-10-01T17:17:46.321Z · LW(p) · GW(p)

Great reference! I found myself explaining this repeatedly but without the right terminology. The "but comparative advantage!" argument is quite common among economists trying to wrap their heads around AI advances.

I think it applies in worlds with tool/narrow AI, but not with AGI that can do whole jobs for lower wages than any human could work for at all.