Why comparative advantage does not help horses

post by Sherrinford · 2024-09-30T22:27:57.450Z · LW · GW · 10 comments


This post discusses what statements about comparative advantage say and what they do not say, and why comparative advantage does not save horses from getting sent to glue factories. It is only marginally about AI.

Eliezer Yudkowsky, in "The Sun is big, but superintelligences will not spare Earth a little sunlight" [LW · GW], explains Ricardo's Law of Comparative Advantage and then writes:

Ricardo's Law doesn't say, "Horses won't get sent to glue factories after cars roll out."

Ricardo's Law doesn't say (alas!) that -- when Europe encounters a new continent -- Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land.

Their labor wasn't necessarily more profitable than the land they lived on.

Comparative Advantage doesn't imply that Earth can produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight, in goods and services valued by superintelligences. It would actually be rather odd if this were the case!

These (negative) statements are true, but they may also create confusion. Why? Because they blur what Ricardo's insight does and does not apply to, i.e., what comparative advantage is actually about.

Eliezer presents a standard example of comparative advantage, two countries trading, and he says that "Ricardo's Law of Comparative Advantage, ... shows that even if the country of Freedonia is more productive in every way than the country of Sylvania, both countries still benefit from trading with each other." But what does this require?

It is useful to consider Ricardo's example. It starts from a situation in which, in "the absence of trade, England requires 220 hours of work to both produce and consume one unit each of cloth and wine while Portugal requires 170 hours of work to produce and consume the same quantities". Given their production technologies, their natural endowments, or economies of scale (depending on the aspect of reality you are focusing on, or on the trade model you are employing), the countries together can produce more if they specialize. Or, as Wikipedia puts it after presenting a “typical modern interpretation of the classical Ricardian model”:  "by trading and specializing in a good for which it has a comparative advantage, each country can expand its consumption possibilities. Consumers can choose from bundles of wine and cloth that they could not have produced themselves in closed economies."

So Ricardo's model, first of all, tells us that by specializing, the countries can produce more. 
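Ricardo's original numbers make this concrete (England: 100 hours per unit of cloth, 120 per unit of wine; Portugal: 90 and 80). The sketch below just redoes that arithmetic:

```python
# Hours of labor needed per unit of output (Ricardo's original numbers).
hours = {
    "England":  {"cloth": 100, "wine": 120},   # 220 hours for one unit of each
    "Portugal": {"cloth": 90,  "wine": 80},    # 170 hours for one unit of each
}

# Autarky: each country spends its total hours producing one unit of each good.
autarky_total = {"cloth": 2.0, "wine": 2.0}

# Specialization: England puts all 220 hours into cloth (its comparative
# advantage), Portugal puts all 170 hours into wine.
specialized_total = {
    "cloth": 220 / hours["England"]["cloth"],    # 2.2 units
    "wine":  170 / hours["Portugal"]["wine"],    # 2.125 units
}

print(specialized_total)  # more of both goods than under autarky
```

The two countries jointly end up with 2.2 units of cloth and 2.125 units of wine instead of 2 and 2, even though Portugal is absolutely more productive at both goods.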

The distribution of this surplus has to (weakly) benefit both countries. If wine is too expensive for the country that specializes in producing cloth, then it does not specialize in producing cloth. If there are two people, two countries, or two machines that are worse off by trading, then they don't trade. 

Worse off compared to what? Worse off than if they were on their own. For this comparison to make sense, the trading partners must be able to decide to be left alone. The trading partners must have ownership over themselves. 
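The "worse off compared to what" comparison can be written as a participation constraint. With Ricardo's numbers, England's opportunity cost of one unit of cloth is 100/120 ≈ 0.83 units of wine, while Portugal's is 90/80 = 1.125; both countries accept the trade only if the terms of trade lie between those two autarky opportunity costs. A minimal sketch:

```python
# Opportunity cost of one unit of cloth, measured in wine, for each country
# (Ricardo's labor costs: England 100h cloth / 120h wine,
#  Portugal 90h cloth / 80h wine).
cost_england = 100 / 120   # ~0.83 wine per cloth
cost_portugal = 90 / 80    # 1.125 wine per cloth

def both_benefit(price_of_cloth_in_wine: float) -> bool:
    """Both sides accept trade only if the price lies strictly between the
    two autarky opportunity costs; otherwise at least one country prefers
    its outside option of self-sufficiency."""
    return cost_england < price_of_cloth_in_wine < cost_portugal

print(both_benefit(1.0))   # within the band: both countries gain
print(both_benefit(1.5))   # Portugal would rather make its own cloth
print(both_benefit(0.5))   # England would rather make its own wine
```

The point of the sketch is the outside option: the constraint only binds because each country can walk away to self-sufficiency. Remove that option and nothing guarantees the price stays in the band.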

If, in the seminal economic model of Ricardo's evil doppelgänger in the Mirror Universe, the dictator of England simply conquered Portugal (thereby getting rid of trade barriers) and forced everybody there to produce wine, cloth, or whatever, then the dictator would still choose an efficient labor allocation between the two countries; but whether this is good or bad for Portugal depends on the dictator's preferences. The outside option in the original Ricardo model is self-sufficiency; the outside option in the economic model of Mirror-Universe Ricardo is death.

Noting that specialization and self-ownership are central to Ricardo's model, let us reconsider the claims quoted in the beginning:

"Horses won't get sent to glue factories after cars roll out." 

This may be true; however, the reason is not a failure of comparative advantage, but the fact that the theory of comparative advantage does not apply. Horses never chose to "trade" with their owners. They could not opt out.

"Ricardo's Law doesn't say (alas!) that -- when Europe encounters a new continent -- Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land.

Their labor wasn't necessarily more profitable than the land they lived on."

Ricardo's model does say that "Europe can become selfishly wealthier by peacefully trading with the Native Americans, and leaving them their land." However, it may be the case that Europeans can become even wealthier by taking the land. Relative strengths and their assessments change over time, and times in which group A respects the ownership rights of group B may not last forever.

Moreover, there may be no real need to choose between taking the labor and taking the land; sometimes a conqueror takes both. However, if a conqueror cannot simply command the labor of the conquered, then it is possible that the conquered people die.

"Comparative Advantage doesn't imply that Earth can produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight, in goods and services valued by superintelligences. It would actually be rather odd if this were the case!"

Applying the reasoning of the comparative-advantage model to this situation may be misleading. The assumed superintelligence can take what it wants to take, and if people could "produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight", then it could probably force people to produce it. 

10 comments

Comments sorted by top scores.

comment by Viliam · 2024-10-01T07:21:13.340Z · LW(p) · GW(p)

Yes, it is generally good to notice that some economic theorems are built upon certain assumptions, so we should not blindly extrapolate them to places where those assumptions do not apply.

"X and Y imply Z" is not the same as "Y implies Z; this universal law of nature was by historical coincidence first discovered in the situation of X, but we can safely extrapolate beyond that". It might be the case that Y implies Z even in the absence of X, but that needs to be proved separately, not merely assumed.

comment by avturchin · 2024-10-01T20:55:03.749Z · LW(p) · GW(p)

But horses were not sent to the glue factories. The US horse population has passed its minimum and is now around 6.65 million. The peak was around 20 million in 1912; the minimum was 4.5 million in 1959.

The Native American population declined roughly 20-fold after Columbus, but rebounded after that and has now almost returned to pre-Columbian levels (I don't have exact numbers).

Replies from: AnthonyC, Sherrinford
comment by AnthonyC · 2024-10-02T13:25:11.616Z · LW(p) · GW(p)

There was an episode of Stargate SG1 where, instead of killing humans, aliens subtly sterilized the whole population while greatly increasing the well-being of existing humans.

This is not a great outcome for the future of humanity, and non-violent population collapse and slow recovery is not necessarily a great outcome for horses, Native Americans, or anyone else. It still serves to illustrate the central point, just with less rhetorical flourish.

Replies from: avturchin
comment by avturchin · 2024-10-02T13:31:02.092Z · LW(p) · GW(p)

Maybe it is more interesting to ask: why do we preserve horses if we do not need them for transportation? I think the answer is that their previous function left an imprint on our value system, and now people take pleasure in horse-riding.

Replies from: AnthonyC
comment by AnthonyC · 2024-10-02T13:34:02.305Z · LW(p) · GW(p)

Yes, that is also an interesting question. 

comment by Sherrinford · 2024-10-01T22:36:07.891Z · LW(p) · GW(p)

With respect to the horses, I did not check Eliezer's claim. However, the exact numbers of the horse population do not really seem to matter for Eliezer's point or for mine. The same is true for the rebound of the Native American population.

comment by AnthonyC · 2024-10-02T13:53:58.827Z · LW(p) · GW(p)

The assumed superintelligence can take what it wants to take, and if people could "produce more with $77 of sunlight, than a superintelligence can produce with $77 of sunlight", then it could probably force people to produce it. 

I was with you until this sentence. This does not follow.

Let's suppose "$77 worth of sunlight" has a consistent, agreed upon meaning. Maybe "Enough sunlight to generate $77 worth of electricity (at current production cost of $0.04/kWh) with current human-made solar panels over their 25 yr lifespan." This is a little less than what falls on an average plot of land on the order of 20cmx20cm. The superintelligence could hire humans to build the solar panels, or use the electricity to run human-made equipment, or farm the plot to grow about 40g of corn.
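Spelling out that arithmetic (the insolation figure below is an assumption added here, not from the comment): if the raw sunlight energy is valued at the $0.04/kWh electricity price, the 20cm-scale plot roughly checks out.

```python
import math

price_per_kwh = 0.04      # assumed electricity production cost, $/kWh
budget = 77.0             # "$77 of sunlight"
energy_kwh = budget / price_per_kwh    # 1925 kWh of sunlight energy

# Illustrative assumption (not from the original comment):
insolation = 2000.0       # kWh of sunlight per m^2 per year, sunny location
lifetime_years = 25       # panel lifespan over which the sunlight is collected

area_m2 = energy_kwh / (insolation * lifetime_years)
side_cm = math.sqrt(area_m2) * 100
print(f"~{area_m2:.3f} m^2, i.e. a square roughly {side_cm:.0f} cm on a side")
```

Note that a real panel converts only a fraction of that sunlight into electricity, so a plot that actually *generates* $77 of electricity would be several times larger; the order of magnitude is what matters for the argument either way.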

What can a superintelligence do with that sunlight? Well, it can develop highly optimized 20-junction solar panels using advanced robotic facilities that can then generate 3x as much electricity. Maybe it has space-based manufacturing so it can use space-based solar and get 10-15x more electricity.  It can use the electricity to run superintelligently-designed equipment with greater efficiency and higher quality output than human minds and hands can invent, build, and operate. These options themselves include things like building indoor robot-operated farms that optimally distribute light/water/heat/nutrients to generate >10x more crops per unit area (>40x more with the aforementioned space-based production facilities), or directly chemically synthesize specific desired molecules.

In other words: to have humans produce what a superintelligence could produce from $77 of sunlight, it would cost the superintelligence many times more than $77, and the output quality would be much lower.

Replies from: Sherrinford
comment by Sherrinford · 2024-10-02T14:07:15.872Z · LW(p) · GW(p)

Right; my point was just that the hypothetical superintelligence does not need to trade with humans if it can force them; therefore trade-related arguments are not relevant. However, it is of course likely that such a superintelligence would neither want to trade nor care enough about the production of humans to force them to do anything.

Replies from: AnthonyC
comment by AnthonyC · 2024-10-02T14:14:22.538Z · LW(p) · GW(p)

Ok, then I agree. As written, it read to me like you were closing by suggesting the AI would want to go for the "conqueror takes both" option instead of the "give the natives smallpox and drive them from their ancestral homes while committing genocide" option.

comment by Seth Herd · 2024-10-01T17:17:46.321Z · LW(p) · GW(p)

Great reference! I found myself explaining this repeatedly but without the right terminology. The "but comparative advantage!" argument is quite common among economists trying to wrap their head around AI advances.

I think it applies to worlds with tool/narrow AI, but not to worlds with AGI that can do whole jobs for lower wages than any human would accept for doing anything.