(One reason) why capitalism is much maligned

post by multifoliaterose · 2010-07-19T03:48:43.524Z · LW · GW · Legacy · 119 comments

Related to: Fight zero-sum bias

Disclaimer: (added in response to comments by Nic_Smith, SilasBarta and Emile) - The point of this post is not to argue in favor of free markets. The point of this post is to discuss an apparent bias in people's thinking about free markets. Some of my own views about free markets are embedded within the post, but my reason for expressing them is to place my discussion of the bias that I hypothesize in context, not to marginalize the adherents of any political affiliation. When I refer to "capitalism" below, I mean "market systems with a degree of government regulation qualitatively similar to the degree present in the United States, Western Europe and Japan in the past 50 years."


The vast majority of the world's wealth has been produced under capitalism. There's a strong correlation between a country's GDP and its citizens' self-reported life satisfaction. Despite the evidence for these claims, there are some very smart people who are or have been critics of capitalism. There may be legitimate arguments against capitalism, but all too often, these critics selectively focus on the negative effects of capitalism, failing to adequately consider the counterfactual "how would things be under other economic systems?" and apparently oblivious to large-scale trends.

There are multiple sources of irrational bias against capitalism. Here I will argue that a major factor is zero-sum bias, or perhaps, as hegemonicon suggests, the "relativity heuristic." Within the post I quote Charles Wheelan's Naked Economics several times. I believe that Wheelan has done a great service to society by writing an accessible book which clears up common misconceptions about economics and hope that it becomes even more widely read than it has been so far. Though I largely agreed with the positions that he advocates before taking up the book, the clarity of his writing and focus on the most essential points of economics helped me sharpen my thinking, and I'm grateful to him for this.

A striking feature of the modern world is economic inequality. According to Forbes Magazine, Amazon founder Jeff Bezos made $7.3 billion in 2009. That's more than the (nominal) 2009 earnings of all of the 10 million residents of Chad combined. Many people find such stark inequality troubling. Since humans exhibit diminishing marginal utility, all else being equal, economic inequality is bad. The system that has given rise to severe economic inequality is capitalism. Does it follow that capitalism is bad? Of course not. It seems very likely that for each x between 1 and 100, the average world citizen at the xth percentile in wealth today finds life more fulfilling than the average world citizen at the xth percentile in wealth 50 years ago did. Increased inequality does not reflect decreased average quality of life if the whole pie is getting bigger. And the whole pie has been getting a lot bigger.
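
The diminishing-marginal-utility point admits a quick worked example. Below is a minimal sketch, assuming logarithmic utility of income (a standard but contestable modeling choice) and made-up dollar figures; none of the numbers come from the post.

```python
import math

def log_utility(income):
    """Log utility: each doubling of income adds the same amount of utility."""
    return math.log(income)

# Illustrative figures (assumptions for this sketch, not data from the post)
rich_income = 1_000_000   # $1M/year
poor_income = 1_000       # $1k/year
transfer = 500            # move $500 from the rich person to the poor person

delta_rich = log_utility(rich_income - transfer) - log_utility(rich_income)
delta_poor = log_utility(poor_income + transfer) - log_utility(poor_income)

print(f"Rich person's utility change: {delta_rich:+.4f}")  # about -0.0005
print(f"Poor person's utility change: {delta_poor:+.4f}")  # about +0.4055
```

Under log utility a marginal dollar is worth roughly a thousand times more to the poor person here, which is the sense in which inequality is bad "all else being equal." The post's claim is that all else is not equal, because the pie itself has been growing.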

While it's conceivable that we could be doing better under socialist governments or communist governments, a large majority of the available evidence points against this idea. Wheelan suggests that

a market economy is to economics what democracy is to government: a decent, if flawed, choice among many bad alternatives.

A compelling argument that it's possible to do better than capitalism (given human nature/limitations as they stand) would at the very least have to address the success of capitalism and the relative failure of alternative economic systems up until now. In light of the historical record, one should read the financial success of somebody like Jeff Bezos as an indication that he's making the world a lot better. Yet many idealistic left-wing people have a vague intuition that by making a lot of money, business people are somehow having a negative overall impact on the world.

There are undoubtedly several things going on here, but I'd hypothesize that one is that people have a gut intuition that the wealth of the world is fixed. In a world of fixed wealth like the world that our ancestors experienced, the only way to make things better for the average person is to redistribute wealth. But this is not the world that we live in. On page 115 of Naked Economics, Wheelan attempts to exorcise zero-sum thinking in his readers:

Will the poor always be with us, as Jesus once admonished? Does our free market system make poverty inevitable? Must there be losers if there are huge economic winners? No, no, and no. Economic development is not a zero-sum game; the world does not need poor countries in order to have rich countries, nor must some people be poor in order for others to be rich. Families who live in public housing on the South Side of Chicago are not poor because Bill Gates lives in a big house. They are poor despite the fact that Bill Gates lives in a big house. For a complex array of reasons, America's poor have not shared in the productivity gains spawned by DOS and Windows. Bill Gates did not take their pie away; he did not stand in the way of their success or benefit from their misfortunes. Rather, his vision and talent created an enormous amount of wealth that not everybody got to share. This is a crucial distinction between a world in which Bill Gates gets rich by stealing other people's crops and a world in which he gets rich by growing his own enormous food supply that he shares with some people and not others. The latter is a better representation of how a modern economy works.


It's a great irony that some people have rejected high-paying jobs on the grounds that the high pay must mean they're hurting someone, when they could have helped people more by taking those jobs. To be sure, some people do make their money by taking money away from other people (some of the behavior of the "too big to fail" banks comes to mind), but on average making money seems to be good for society, not bad for it.

The case of trade

Trade and globalization are key features of modern capitalism. These practices have received criticism both from (1) people who are concerned about the well being of foreigners and (2) people who are concerned about the well being of Americans. I'll discuss these two types of criticism in turn.

(1) I remember feeling guilty as an early adolescent about the fact that my shoes and clothes had been made by sweatshop laborers who had been paid very little to make them. When I heard about the 1999 WTO protests as a 14-year-old, I thought that the protesters were on the right side. It took me a couple of years to dispel the belief that by buying these things I was making things worse for the sweatshop laborers. I implicitly assumed that if they were being paid so little, it must be because somebody was forcing them to do something that they didn't want to do. In doing so, I was anchoring on my own experience. It didn't initially occur to me that by paying very little for clothes and shoes, I could be making life better for people in poor countries by giving them more opportunities. I eventually realized that if I restricted myself to buying domestic products I would probably make things better for American workers, but that in doing so, I would deny poor foreigners an opportunity.

And it was only much later that I understood that giving a country's citizens the opportunity to work in sweatshops could ultimately pave the way for the country to develop. It took me many years to internalize the epic quality of economic growth.

Wheelan says

The thrust of the antiglobalization protests has been that world trade is something imposed by rich countries on the developing world. If trade is mostly good for America, then it must be mostly bad for somewhere else. At this point in the book, we should recognize that zero-sum thinking is usually wrong when it comes to economics. So it is in this case.

[...]

Trade paves the way for poor countries to get richer. Export industries often pay higher wages than jobs elsewhere in the economy. But that is only the beginning. New export jobs create more competition for workers, which raises wages everywhere else. Even rural incomes can go up; as workers leave rural areas for better opportunities, there are fewer mouths to be fed from what can be grown on the land they leave behind. Other important things are going on, too. Foreign companies introduce capital, technology, and new skills. Not only does that make export workers more productive; it spills over into other areas of the economy. Workers "learn by doing" and then take their knowledge with them.

and quotes Paul Krugman saying

If you buy a product made in a third-world country, it was produced by workers who are paid incredibly little by Western standards and probably work under awful conditions. Anyone who is not bothered by those facts, at least some of the time, has no heart. But that doesn't mean the demonstrators are right. On the contrary, anyone who thinks that the answer to world poverty is simple outrage against global trade has no head - or chooses not to use it. The anti-globalization movement already has a remarkable track record of hurting the very people and causes it claims to champion.

Why does people's disinclination to use their heads on this point produce such disastrous results? Because (a) the relativity heuristic which hegemonicon mentioned is ill-suited to a world with so much inequality and perhaps because (b) the intuition that the pool of resources that people share is fixed is hardwired into the human brain.

(2) In an interesting article titled Why don't people believe that free trade is good?, economist Hans Melberg cites the following statistics:

89% of economists in the US think trade agreements between the U.S. and other countries is good for the economy, compared to 55% of the general public. Only 3% of economists think trade agreements are bad for the economy, while 28% of the general public think so (p. 111). 68% of the general public think that one reason why the economy is not doing as well as it could, is that "companies are sending jobs overseas". Only 6% of economists agree (p. 114)

Source: Blendon, Robert J. et. al., "Bridging the Gap Between the Public's and Economists' View of the Economy", Journal of Economic Perspectives, Summer 1997, vol. 11, no. 3, pp. 108-118.

My impression from casual conversations and from the news is that people in the general population have a belief of the type "there's a limited supply of jobs; if jobs are being sent to Southeast Asia then there will be fewer jobs for Americans." Of course there's no intrinsic barrier to people in Southeast Asia and people in America all having jobs. The idea that the supply of jobs is fixed seems to arise from fallacious zero-sum thinking, and Melberg lists zero-sum thinking as one of four factors relevant to why people believe that free trade is bad.

There is a genuine problem for America that arises from allowing free trade, namely the phenomenon of displaced US workers. But aside from being outweighed by the benefits of free trade to America, this phenomenon can and should be considered without the clouding influence of zero-sum thinking.

[1] See a remark by VijayKrishnan mentioning essays by Paul Graham which may overlap somewhat with the content of this post.

07/19/10 @ 1:30 AM CST - Post very slightly edited to accommodate JoshuaZ's suggestion.

07/19/10 @ 9:34 AM CST - Post very slightly edited to accommodate billswift's suggestion.

119 comments

comment by Nic_Smith · 2010-07-19T04:44:04.405Z · LW(p) · GW(p)

I agree with this article but am not entirely comfortable with it. I fear that it might act as applause lights for those of us who already agree with its premise. In order to fully appreciate many of the points made, I think a more abstract discussion of absolute and comparative advantage might have been useful first, or perhaps discussion of a historical example.

Replies from: SilasBarta
comment by SilasBarta · 2010-07-19T14:54:48.328Z · LW(p) · GW(p)

This, this, this, this, THIS!

"Capitalism" is a polarizing term, and it doesn't help when you say -- indeed, start the article off with -- such things as:

The vast majority of the world's wealth has been produced under capitalism.

I'm considered pro-capitalist, and even I have problems with a simplistic framing like this. Among my concerns:

  • Capitalism means different things to different people, especially its supporters vs. (nominal) opponents. A characteristic example might be government-business entanglement (GBE). Supporters would call that "not capitalism", while many opponents would say it's typical capitalism. Yet they agree on the substance of what policies should exist.

  • Then there are complicated intermediate conflicts about what counts as government-business entanglement: Does it count as GBE when the government sets liability caps on nuclear plants, given that it's the anti-nuclear social taboo in the first place that mostly accounts for why they're uninsurable? Is respect for (strong) property rights a moral obligation, or a concession people are expected to be rewarded for? (See also Kevin Carson's "free market anti-capitalism".)

  • If the world had, to date, persisted under a poor political / economic system with only a few similarities to the ideal one, you would be able to say the same thing. But that doesn't remotely support the conclusion you want: is the wealth because of its similarities to capitalism? Or to the ideal system? Or simply because of the current system in toto? Is the wealth even being tabulated correctly? Are you accounting for the destruction of informal sources of wealth that didn't perfectly align with capitalism? (like the enclosure of commons systems that actually had mechanisms to prevent "tragedy of the commons" situations)

And yes I read the rest of the article, and I don't think it recovered from this error. Also, a lot of the unwise reaction to free trade is because of (imho, legitimate) concern for the displaced workers. Since there's some kind of taboo against simply asking for assistance in re-adjustment (as it's viewed as too much of a handout), people typically push for the policy of "protecting jobs" ad infinitum.

Replies from: Emile, multifoliaterose
comment by Emile · 2010-07-19T15:40:57.735Z · LW(p) · GW(p)

Strongly seconded - political polarization on this site would suck, and adding vague and disputed terminology doesn't help.

comment by multifoliaterose · 2010-07-19T15:09:20.912Z · LW(p) · GW(p)

If the world had, to date, persisted under a poor political / economic system with only a few similarities to the ideal one, you would be able to say the same thing.

It's reasonable to imagine that there was some sort of "natural selection" of economic systems and that the one that emerged is relatively close to ideal.

The anthropic principle is relevant here.

And yes I read the rest of the article, and I don't think it recovered from this error.

I'm open to suggestions for how I might improve the introduction to the article to make the article more palatable.

Replies from: SilasBarta, cupholder
comment by SilasBarta · 2010-07-19T15:50:26.017Z · LW(p) · GW(p)

It's reasonable to imagine that there was some sort of "natural selection" of economic systems and that the one that emerged is relatively close to ideal.

That doesn't follow: the crucial thing about natural selection is that, given path-dependence and local optima, it doesn't matter which particular feature causes the "fitness"; only the fact of its total fitness matters, and any given feature could just be a "hanger-on".

Now, if you had numerous worlds (or merely societies) to compare, and a rigorous, well-accepted definition of what counts as capitalism, and strong selection pressures and "mutation", then the present content of a system would be strong evidence of the superiority of all of its parts. But that's not the case.

I'm open to suggestions for how I might improve the introduction to the article to make the article more palatable.

You should have dropped the whole pro-capitalist cheerleading and simply discussed the belief that jobs are zero-sum, the evidence that people generally hold this belief, and its errors. (And then discussed how people can be made to change this belief, given their general cognitive structure.)

comment by cupholder · 2010-07-20T04:33:37.191Z · LW(p) · GW(p)

I'm open to suggestions for how I might improve the introduction to the article to make the article more palatable.

I was going to suggest this, but I see you've already added it: thanks for editing in your definition of capitalism at the top of the post. When I first read the post, that was something I thought would improve it. Like SilasBarta I thought it was a bad idea to leave unclear what you were counting as capitalism.

comment by Roko · 2010-07-19T11:13:35.829Z · LW(p) · GW(p)

Amazon founder Jeff Bezos made $7.3 billion in 2009. That's more than the (nominal) 2009 earnings of all of the 10 million residents of Chad combined.

And Jeff Bezos spends his money on Blue Origin, which furthers the cause of the human race as a whole, whereas fathers in Africa spend their disposable income on "wine, cigarettes and prostitutes", neglecting even their own children, never mind future generations and the fate of the human race itself. As you increase the level of income of a country, people will simply spend more money on luxuries and better and safer necessities for themselves and their families, even beyond the point of vastly decreasing marginal returns (e.g. American families with 4 cars, a 72 inch flatscreen TV, etc etc). Only the super-rich have a demonstrated psychological capability to spend large amounts of their time and money on the greater good.

Note also that Peter Thiel has paid more money to SIAI than all other human beings combined, and that the Future of Humanity Institute is paid for almost entirely by British billionaire James Martin. (Preventative medicine for multifoliaterose's objection: people in the third world aren't going to put their money in a donor-advised existential risk fund, whereas if multifoliaterose or someone else comes up with a better x-risk charity, I'm sure Thiel would fund it. In fact, one way to see Thiel's wealth is as a big existential risk fund waiting for someone to do a better version of SIAI.)

Our hunter-gatherer intuitions about equality are based on assumptions of zero sum games and technological standstill, and are almost completely counterproductive in this modern, highly-positive-sum, highly-complex world.

Edit: Holden Karnofsky at GiveWell says:

“Is it really such a big surprise that the poor also want recreation? That the poor have a life? Including some of the same vices that the rich have?"

Bezos spends money on reducing extinction risks by furthering human spaceflight. Poor people (and, indeed medium wealth people) spend the money according to their very human needs for entertainment/recreation. Hell, if I were poor and deprived, I'd probably spend money on drugs and prostitutes.

The point is not whether it is forgivable that poor people want a little bit of fun. The point is that transferring wealth from Bezos to poor people in Chad would be, pragmatically, an act of genocide of cosmic proportions (because even a 0.1% reduction in human extinction risk amounts to a cosmically large increase in future humans).
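
The arithmetic behind "cosmic proportions" is simple enough to write down. A minimal sketch taking the thread's ~10^50 figure and an assumed 0.1% risk change at face value; the linear model and both inputs are assumptions, not established facts.

```python
# Figures asserted in the thread: ~10^50 potential future people and a
# 0.1% change in extinction risk; the linear model is a simplification.
potential_future_people = 10 ** 50
risk_reduction = 0.001  # 0.1%

expected_gain = potential_future_people * risk_reduction
print(f"Expected additional future people: {expected_gain:.1e}")  # 1.0e+47
```

Even at 10^47, the expected headcount swamps any present-day transfer, which is what the argument turns on; the contested premises are the 10^50 figure and whether a marginal donation buys any risk reduction at all.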

So we should rejoice that the wealth does, in fact, reside with Bezos.

I picked on Bezos because he was the example given. There are many rich people who don't use their wealth for the greater good, though those who do tend disproportionately to be rich white males, the most maligned demographic in the world. Bill Gates and Warren Buffett together have given an awesome amount, never mind Bezos, Thiel, James Martin, Elon Musk.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T14:13:27.737Z · LW(p) · GW(p)

And Jeff Bezos spends his money on Blue Origin, which furthers the cause of the human race as a whole

This seems good to me from the little that I know.

fathers in Africa spend their disposable income on "wine, cigarettes and prostitutes",

See point 2 of http://blog.givewell.org/2010/05/26/thoughts-on-moonshine-or-the-kids/

Only the super-rich have a demonstrated psychological capability to spend large amounts of their time and money on the greater good.

In my opinion the overall giving record of the super-rich is appalling and I strain to find a meaningful sense in which the above statement is true. I don't think that it's clear that the super-rich show more demonstrated psychological capability to spend time and money on the greater good than fathers in Africa do.

According to http://features.blogs.fortune.cnn.com/2010/06/16/gates-buffett-600-billion-dollar-philanthropy-challenge/

"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."

It should be kept in mind that (a) there are a few very big donors who drag the mean up and (b) much of the money donated by the super-rich is donated for signaling reasons without a view toward maximizing positive impact.

Note also that Peter Thiel has paid more money to SIAI than all other human beings combined, and that the Future of Humanity Institute is paid for almost entirely by British billionaire James Martin.

It's not clear that funding SIAI and FHI has positive expected value.

At http://blog.givewell.org/2009/05/07/small-unproven-charities/ Holden Karnofsky points out that

"[Funding a small charity carries a risk that] it succeeds financially but not programmatically – that with your help, it builds a community of donors that connect with it emotionally but don’t hold it accountable for impact. It then goes on to exist for years, even decades, without either making a difference or truly investigating whether it’s making a difference. It eats up money and human capital that could have saved lives in another organization’s hands.

"As a donor, you have to consider this a disaster that has no true analogue in the for-profit world. I believe that such a disaster is a very common outcome, judging simply by the large number of charities that go for years without ever even appearing to investigate their impact. I believe you should consider such a disaster to be the default outcome for a new, untested charity, unless you have very strong reasons to believe that this one will be exceptional."

The "saving lives" reference may not be relevant, but the fact remains that by funding SIAI and FHI when these organizations have not demonstrated high levels of accountability, donors to these organizations may systematically increase rather than decrease existential risk.

See Holden's remarks on SIAI at the comment linked under http://blog.givewell.org/2010/06/29/singularity-summit/

Our hunter-gatherer intuitions about equality are based on assumptions of zero sum games and technological standstill, and are almost completely counterproductive in this modern, highly-positive-sum, highly-complex world.

Agree with this.

At the same time, I would say that too much inequality may be bad for economic growth. In practice, too much inequality seems to give rise to political instability and interferes with the ability of very bright children born to poor parents to make the most of their talents.

Replies from: xamdam, Roko, Roko
comment by xamdam · 2010-07-19T16:35:35.446Z · LW(p) · GW(p)

And Jeff Bezos spends his money on Blue Origin, which furthers the cause of the human race as a whole

This seems good to me from the little that I know.

No need to reply to this red herring about spending habits of super-rich; they are largely irrelevant to your argument (that capitalism is still the better system).

But once we go down that road...

"The IRS facts for 2007 show that the 400 biggest taxpayers had a total adjusted income of $138 billion, and just over $11 billion was taken as a charitable deduction, a proportion of about 8%...Is it possible that annual giving misses the bigger picture? One could imagine that the very rich build their net worth during their lifetimes and then put large charitable bequests into their wills. Estate tax data, unfortunately, make hash of that scenario, as 2008 statistics show."

It's a good counter-point to Roko's fantasy about the kindness of billionaires. I suspect he fell for availability bias with his space program idea and Bezos. BTW, the Blue Origin investment is not even close to closing his income gap with the average Joe. Buffett, who is giving away all of his money to be managed by tech-smart Gates, would have made a better example (which again supports the availability bias ;).

Still, the real economics of it is that the super-rich by and large do not take away much from society, with some exceptions. This is because they either buy goods and services for themselves or invest. This per se adds a single cycle to the money circulation rate, which is not huge. The exceptions come when they spend obscene amounts on essentially single-use goods that need to be produced anew, such as building palaces for them and their whole f*g royal extended family (sorry for the emotion, but this is what I feel for the oil sheiks). Somewhat counter-intuitively, if they compete on existing luxuries, such as Michelangelo paintings, and spend a billion dollars on them, the harm is pretty minimal, since few societal resources need to be wasted on these goods.

All in all I have heard of very few billion-dollar self-indulgent spenders. The rest of the money gets invested, often in new startups/technologies that your mutual fund will not invest in, which in fact is a very valuable service.

Replies from: Roko, multifoliaterose
comment by Roko · 2010-07-19T17:44:55.138Z · LW(p) · GW(p)

it's a good counter-point to Roko's fantasy about the kindness of billionaires

$11 billion was taken as a charitable deduction, a proportion of about 8%

Note that SpaceX, Blue Origin, and Virgin Galactic wouldn't be counted in that, as they are not technically charity.

In 2002, the average British person gave 147 pounds to charity, and the average net income is roughly 15,000 pounds per person, making an average donation level of about 1%.

So the super rich are 8 times as charitable as the average person.
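
The ratio can be reconstructed from figures quoted in this thread; note that it mixes 2007 US data with 2002 UK data, a rough comparison at best (a point taken up below).

```python
# UK figures quoted above (2002)
uk_avg_donation = 147        # pounds per person per year
uk_avg_net_income = 15_000   # pounds per person per year

# US figures from the IRS data quoted upthread (2007)
top400_income = 138e9        # total adjusted income of the 400 biggest taxpayers
top400_deduction = 11e9      # total charitable deduction taken

ordinary_rate = uk_avg_donation / uk_avg_net_income  # ~0.0098
super_rich_rate = top400_deduction / top400_income   # ~0.0797

print(f"Ordinary (UK): {ordinary_rate:.1%}")             # 1.0%
print(f"Super-rich (US): {super_rich_rate:.1%}")         # 8.0%
print(f"Ratio: {super_rich_rate / ordinary_rate:.1f}x")  # 8.1x
```

xamdam's replies below question both the mixed populations and whether percentage of income is the right altruism measure once marginal utility is considered.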

In the USA, the average person is waaaay more charitable. Unfortunately, that's all money going to churches, so not high-impact money.

Perhaps I should have qualified myself: ordinary Americans do spend money on what they think is important; they overwhelmingly give to churches.

But again, intention is not what matters to a consequentialist. Results matter.

Replies from: xamdam
comment by xamdam · 2010-07-19T18:02:02.779Z · LW(p) · GW(p)

I venture that if you put this data next to the marginal utility of money, the 1% donation of ordinary people will look way more charitable than the 8% of the super-rich.

Buffett said, rather honestly (after declaring his intention of giving away 99%) something along the lines of "don't look at me for charity advice, I never gave away a dollar I actually needed". You have to discount super-rich giving quite steeply on altruism scale.

Additional accounting note: the 8% comes from American data, so 8x is not the true ratio; your 1% figure is from the UK, and according to you Americans' giving is waaay more charitable. Also not known is how much of the super-rich giving goes to churches, as you point out.

Just to point out, we are arguing about the altruism of the super-rich, not their usefulness in a capitalist society; they are not only a necessary evil but are actually useful because of their investment profile.

Replies from: Roko
comment by Roko · 2010-07-19T18:06:29.121Z · LW(p) · GW(p)

It is the results that matter, though. I should have worded my original comment to make this clear. Ordinary people in the USA give bucketloads of money to churches, but that one set of donations by Thiel to SIAI matters almost infinitely more.

Replies from: xamdam
comment by xamdam · 2010-07-19T18:44:30.202Z · LW(p) · GW(p)

Agreed, but this still does not indicate any general altruism of the super-rich. Pragmatically, you're better off hitting them up for $10M than me for $100, even if I am giving up more utils in the process. Individually, Thiel deserves credit for far-sightedness, of course.

comment by multifoliaterose · 2010-07-19T19:22:36.058Z · LW(p) · GW(p)

Thanks for your interesting response. If you have any relevant references concerning what billionaires do with their money, I would appreciate them.

If super-rich people really do reinvest most of their money in startups/technologies, then their disinclination toward charitable spending may not be problematic at all. It's occurred to me that investment in startups/technologies may be more cost-effective than donations to virtually all presently existing charities (even the ones that GiveWell recommends, which I presently donate to).

At the same time, if the situation is as you describe, then why don't billionaires make this point more often to increase their public adulation?

Replies from: xamdam, Roko
comment by xamdam · 2010-07-19T20:42:57.141Z · LW(p) · GW(p)

Most of my data is just plain logic and some reading of biographies/news.

I imagine it's actually pretty hard to spend a billion dollars on yourself, because each thing that you acquire, if it is of any value above rubbish, carries management overhead. These people have teams managing their staff; owning too much stuff can get pretty annoying.

I do not know what they invest in in general; I suspect hedge funds and VC firms, if not their own business expansion, since these can provide greater returns with small risk if you are rich enough to diversify. What I can say is that I and other ordinary folk do not invest in startups, as I cannot diversify that risk enough and cannot afford the time for due diligence, etc. It's up to the rich to provide Angel/VC funding.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T21:00:59.235Z · LW(p) · GW(p)

Thanks for your response.

•I have the same impression that rich people can't spend too much money on themselves. But I remain concerned that they may split their fortune many ways among their children, grandchildren, great-grandchildren etc. who all use a lot of money on luxury goods. It would be good to have some data on this point.

•Hedge funds may skew wealth on account of picking up "quarters on the sidewalk" that otherwise would have been distributed randomly among members of the population. Wealth skewing seems to be bad for (economic growth)/(political stability)/(average quality of life). On the other hand, hedge funds may stabilize the economy by suppressing bubbles. Then again, they may destabilize the economy by leveraging a lot of funds and occasionally messing up. These things are complicated.

•Angel/VC funding is probably good.

•I would like to see super-rich people systematically using their money to achieve maximum positive social impact. Angel/VC funding should have some positive social impact, but since the market system does not take into account externalities & because there are tragedy of the commons issues in the market system, I think that super-rich people could be benefiting the world much more than they are now if they were actively trying to benefit the world rather than just trying to make more money.

comment by Roko · 2010-07-19T20:27:18.764Z · LW(p) · GW(p)

It's occurred to me that investment in startups/technologies may be more cost-effective than donations to virtually all presently existing charities

Good point.

comment by Roko · 2010-07-19T16:15:45.174Z · LW(p) · GW(p)

It's not clear that funding SIAI and FHI has positive expected value.

Perhaps I should ask for clarification of what you mean by "not clear that X has positive expected value." Relative to what alternative?

Also, you shouldn't say "not clear whether the expected value of X is positive, negative, or zero" -- relative to a particular probability distribution, the expected value of something has a definite value.

You might express what you want to say better by saying "I currently think that X has positive expected value, but I expect my beliefs to change very rapidly with incoming evidence".

This is an example of mistaking instability of a subjective probability for uncertainty about a subjective probability (there is no uncertainty about your subjective probabilities).
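
A small illustration of this distinction, with purely hypothetical outcomes and probabilities: relative to a fixed subjective distribution the expectation is a definite number, and evidence replaces the distribution rather than making that number "uncertain".

```python
# A subjective probability distribution over the value of funding X,
# mapping each outcome (in dollars of value) to its probability.
# All numbers are hypothetical.
beliefs = {-1e9: 0.3, 0.0: 0.4, 2e9: 0.3}

expected_value = sum(v * p for v, p in beliefs.items())
print(f"E[value] = {expected_value:+.2e}")  # +3.00e+08, a definite number

# Incoming evidence replaces the distribution; the old expectation was
# never "uncertain", and the new one is just as definite.
updated_beliefs = {-1e9: 0.5, 0.0: 0.4, 2e9: 0.1}
updated_value = sum(v * p for v, p in updated_beliefs.items())
print(f"E[value | evidence] = {updated_value:+.2e}")  # -3.00e+08
```

This is the "instability" framing: report the current expectation plus how fast you expect it to move, rather than calling the expectation itself unclear.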

Replies from: multifoliaterose, FAWS
comment by multifoliaterose · 2010-07-19T16:26:31.228Z · LW(p) · GW(p)

Okay, fine: I currently believe that funding SIAI and FHI has expected value near zero but my belief on this matter is unstable and subject to rapid change with incoming evidence.

Replies from: Vladimir_Nesov, Soki, Roko
comment by Vladimir_Nesov · 2010-07-19T16:52:23.556Z · LW(p) · GW(p)

As I see it, most of the current worth of SIAI is in focusing attention on the problem of FAI, and it doesn't need to produce any actual research on AI to make progress on that goal. The mere presence of this organization allows people like me to (1) recognize the problem of FAI, something you are unlikely to figure out or see as important on your own and (2) see the level of support for the cause, and as a result be more comfortable about seriously devoting time to studying the problem (in particular, extensive discussion by many smart people on Less Wrong and elsewhere gives more confidence that the idea is not a mirage).

Initially, most of the progress in this direction was produced personally by Eliezer, but now SIAI is strong enough to carry on. Publicity causes more people to seriously think about the problem, which will eventually lead to technical progress, if it's possible at all, regardless of whether current SIAI is capable of making that progress.

This makes current SIAI clearly valuable, because whatever is the truth about possible paths towards FAI, it takes a significant effort to explore them, and SIAI calls attention to that task. If SIAI can make progress on the technical problem as well, more power to them. If other people begin to make technical progress, they now have the option of affiliating with SIAI, which might be a significant improvement over personally trying to fight for funding on FAI research.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T17:14:55.971Z · LW(p) · GW(p)

Not all publicity is good publicity. The majority of people I've met off of Less Wrong who have heard of SIAI think that the organization is full of crazy people. A lot of these people are smart. Some of them have Ph.D.s from top-tier universities in the sciences.

I think that SIAI should be putting way more emphasis on PR, networking within academia, etc. This is in consonance with a comment by Holden Karnofsky here:

To the extent that your activities will require “beating” other organizations (in advocacy, in speed of innovation, etc.), what are the skills and backgrounds of your staffers that are relevant to their ability to do this?

I'm worried that SIAI's poor ability to make a good public impression may poison the cause of existential risk in the mind of the public and dissuade good researchers from studying existential risk. There are some very smart people who it would be good to have working on Friendly AI who, despite their capabilities, care a lot about their status in broader society. I think that it's very important that an organization that works toward Friendly AI at least be well regarded by a sizable minority of people in the scientific community.

Replies from: andreas, Vladimir_Nesov
comment by andreas · 2010-07-19T18:50:58.243Z · LW(p) · GW(p)

In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as the Singularity University. With its $25k tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.

We need to make it easier to distinguish the preference and decision theory research program as an attempt to solve a hard problem from the larger cluster of singularity ideas, which, even in the intelligence explosion variety, are not essential.

Replies from: Utilitarian, Roko
comment by Utilitarian · 2010-07-25T04:46:30.723Z · LW(p) · GW(p)

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

Replies from: ata
comment by ata · 2010-07-25T07:10:44.900Z · LW(p) · GW(p)

Agreed. I'm often somewhat embarrassed to mention SIAI's full name, or the Singularity Summit, because of the term "singularity" which, in many people's minds -- to some extent including my own -- is a red flag for "crazy".

Agreed; I've had similar thoughts. Given recent popular coverage of the various things called "the Singularity", I think we need to accept that it's pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction that anyone can think of, centered primarily around Kurzweil's predictions.

Honestly, even the "Artificial Intelligence" part of the name can misrepresent what SIAI is about. I would describe the organization as just "a philosophy institute researching hugely important fundamental questions."

I disagree somewhat there. Its ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, being carried out in service of that goal. Its day-to-day activities may not look like what people might imagine when they think of an AI research institute, but that's because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it's fair to describe SIAI as still being fundamentally about FAI (at least to anyone who's adequately prepared to think about FAI).

Describing it as "a philosophy institute researching hugely important fundamental questions" may give people the wrong impressions, if it's not quickly followed by more specific explanation. When people think of "philosophy" + "hugely important fundamental questions", their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. ("Philosophy" is another term I'm inclined toward avoiding these days.) When I've had to describe SIAI in one phrase to people who have never heard of it, I've been calling it an "artificial intelligence think-tank". Meanwhile, Michael Vassar's Twitter describes SIAI as a "decision theory think-tank". That's probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where "decision theory" already refers to an interesting established field that's relevant to AI but doesn't share with "artificial intelligence" the connotations of missed goals, science fiction geekery, anthropomorphism, etc.

comment by Roko · 2010-07-19T20:30:30.238Z · LW(p) · GW(p)

Ah, I think I can guess who you are. You work under a professor called Josh and have an umlaut in your surname. Shame that the others in that great research group don't take you seriously.

comment by Vladimir_Nesov · 2010-07-19T17:17:49.504Z · LW(p) · GW(p)

I'm pretty sure usable suggestions for improvement are welcome. About ten years ago there was only the irrational version of Eliezer, who had just recently understood that the problem existed, while right now we have some non-crazy introductory and scholarly papers, and a community that understands the problem. The progress seems to be in the right direction.

If you asked the same people about the idea of FAI fifteen years ago, say, they'd label it crazy just the same. SIAI gets labeled automatically, by association with the idea. Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you'd need to at least point out specific actions to attempt this argument).

Replies from: multifoliaterose, whpearson
comment by multifoliaterose · 2010-07-19T18:28:24.760Z · LW(p) · GW(p)

Good point - I will write to SIAI about this matter.

I actually agree that up until this point progress has been in the right direction. I guess my thinking is that SIAI has attracted a community consisting of a very particular kind of person, may have achieved near-saturation within this population, and that consequently SIAI as presently constituted may have outlived the function that you mention. This is the question of room for more funding.

Agree with

Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you'd need to at least point out specific actions to attempt this argument).

There are things that I have in mind but I prefer to contact SIAI about them directly before discussing them in public.

comment by whpearson · 2010-07-19T18:14:45.334Z · LW(p) · GW(p)

I think there are many people who worry about AI in one form or another. They may not do very informed worrying and they may be anthropomorphising, but they still worry, and that might be harnessable. See Stephen Hawking on AI.

SIAI's emphasis on the singularity aspect of the possible dangers of AI is unfortunate, as it requires people to get their heads around this. So it alienates the people who just worry about the robot uprising, or about their jobs being stolen and being outcompeted evolutionarily.

So let's say instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them. In the process of doing that it would have to develop theories of FAI in order to say whether a system was going to have human-like values stably.

The emphasis placed upon very hard takeoff just makes it less approachable and more wacky-looking to the casual observer.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T19:33:13.511Z · LW(p) · GW(p)

Safe robots have nothing whatsoever to do with FAI. Saying otherwise would be incompetent, or a lie. I believe that there need not be an emphasis on hard takeoff, but likely for reasons not related to yours.

Replies from: thomblake, whpearson, FAWS
comment by thomblake · 2010-07-19T19:37:41.945Z · LW(p) · GW(p)

Agreed. My dissertation is on moral robots, and one of the early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).

comment by whpearson · 2010-07-19T19:41:10.293Z · LW(p) · GW(p)

Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?

Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist/technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.

Replies from: Vladimir_Nesov, cupholder
comment by Vladimir_Nesov · 2010-07-19T19:57:29.694Z · LW(p) · GW(p)

Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?

I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.

Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist/technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.

A truly devious plan.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-20T13:06:17.195Z · LW(p) · GW(p)

I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.

That's interesting. What's your line of thought?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-20T15:48:47.580Z · LW(p) · GW(p)

In FAI, the central question is what a program wants (which is a certain kind of question about what the program means), and not what a program does.

Computer science will tell you a lot about which programs can do what and how, and about how to construct a program that does what you need, but less about what a program means (the sort of computer science that does is already a fair distance towards mathematics). This is also a problem with statistics/machine learning, and the reason they are not particularly useful for FAI: they teach certain tools, and how these tools work, but the understanding they provide isn't portable enough.

Mathematical logic, on the other hand, contains lots of wisdom in the right direction: what kinds of mathematical structures can be defined how, which structures a given definition defines, what concepts are definable, and so on. And to understand the concepts themselves one needs to go further.

Unfortunately, I can't give a good positive argument for the importance of math; that would require a useful insight (arrived at through use of mathematical tools). At the least, I can attest to finding a lot of confusion in my past thinking about FAI as a result of each "level up" in understanding of mathematics, and that counts for something.

comment by cupholder · 2010-07-20T02:41:00.217Z · LW(p) · GW(p)

I think that's a clever idea that deserves more eyeballs.

comment by FAWS · 2010-07-19T19:51:02.870Z · LW(p) · GW(p)

Nothing whatsoever is a bit strong. About as much as preventing tiger attacks and fighting malaria, perhaps?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T19:54:37.533Z · LW(p) · GW(p)

Saving tigers from killer robots.

comment by Soki · 2010-07-19T17:10:37.289Z · LW(p) · GW(p)

This video addresses this question: Anna Salamon's 2nd Talk at Singularity Summit 2009 -- How Much it Matters to Know What Matters: A Back of the Envelope Calculation.
It is 15 minutes long, but you can skip to 11m37s.

Edit: added the name of the video; thanks for the remark, Vladimir.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T17:15:23.176Z · LW(p) · GW(p)

The link above is Anna Salamon's 2nd Talk at Singularity Summit 2009 "How Much it Matters to Know What Matters: A Back of the Envelope Calculation."

(You should give some hint of the content of a link you give, at least the title of the talk.)

comment by Roko · 2010-07-19T16:48:34.927Z · LW(p) · GW(p)

Still not good enough. $10000000000 is near zero, for some definition of "near". Why not just give a dollar value? I think people have a strong fear of being accused of spurious precision, but really, giving a precise number and then saying "unstable with a standard deviation of X per hour of debate" is the only mathematically consistent way of saying what you want to say.

Plus, I can't disagree with your statement.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T17:04:13.741Z · LW(p) · GW(p)

Okay, let's try again. My current belief is that at present, donations to SIAI are a less cost-effective way of accomplishing good than donating to a charity like VillageReach or StopTB that improves health in the developing world.

My internal reasoning is as follows:

Roughly speaking, the potential upside of donating to SIAI (whatever research SIAI would get done) is outweighed by the potential downside (the fact that SIAI could divert funding away from future existential risk organizations). By way of contrast, I'm reasonably confident that there's some upside to improving health in the developing world (keep in mind that historically, development has been associated with political stability and with getting more smart people into the pool of people thinking about worthwhile things), and giving to accountable, effectiveness-oriented organizations will raise the standard for accountability across the philanthropic world (including existential risk charities).

I wish that there were better donation opportunities than VillageReach and StopTB and I'm moderately optimistic that some will emerge in the near future (e.g. over the next ten years) but I don't see any at the moment.

Replies from: Roko
comment by Roko · 2010-07-19T17:27:50.539Z · LW(p) · GW(p)

What about the comparison of a donor-advised existential risk fund versus StopTB?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T18:07:28.900Z · LW(p) · GW(p)

Good question. I haven't considered this point - thanks for bringing it to my consideration!

Replies from: Roko
comment by Roko · 2010-07-19T18:28:05.437Z · LW(p) · GW(p)

So we both agree that a more-accountable set of existential risk organizations would (all else equal) be the best way to spend money, better than third world charity certainly.

The disagreement is about this idea of current existential risk organizations diverting money away from future organizations that are better.

My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.

However, something like Friendly AI theory requires a virtually complete re-education of a person (that is if they are very smart to start with. If not, they'll just never understand it). If it were easy to understand, it would be something for which charity was not required: governments would be doing it, not out of charity, but out of self-interest.

Given this difference, your idea of demanding high levels of accountability might itself need some scrutiny. My personal position is to require nothing in terms of accountability, competence or performance unless and until it is demonstrated that there are, in fact, other groups who want to start an existential risk charity, and to begin the process of competition by funding those other groups, should they in fact arise.

I am currently working for the Lifeboat Foundation, by the way, which is such an "other group", and is, in fact, funded to the tune of $200k. But three organizations is still a pretty darn small number, and the number of people involved is tiny.

Replies from: multifoliaterose, Vladimir_Nesov
comment by multifoliaterose · 2010-07-19T19:09:27.270Z · LW(p) · GW(p)

•I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost effective to send money to the developing world.

•I'm very familiar with pure mathematics. My belief is that in pure mathematics the variability in productivity of researchers stretches over many orders of magnitude. By analogy, I would guess that the productivity of Friendly AI researchers will also differ by many orders of magnitude. I suspect that the current SIAI researchers are not at the high end of this range (since the most talented researchers are very rare, very few people are currently thinking about these things, and I believe the correlation between currently thinking about these things and having talent is weak).

Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.

For these reasons, I think that the expected value of the research that SIAI is doing is negligible in comparison with the expected value of the publicity that SIAI generates. At the margin, I'm not convinced that SIAI is generating good publicity for the cause of existential risk. I think that SIAI may be generating bad publicity for the cause of existential risk. See my exchange with Vladimir Nesov. Aside from the general issue of it being good to encourage accountability, this is why I don't think that funding SIAI is a good idea right now. But as I said to Vladimir Nesov, I will write to SIAI about this and see what happens.

•I think that the reason that governments are not researching existential risk and artificial intelligence is because (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.

•Thanks for mentioning the Lifeboat foundation.

Replies from: Roko, Roko, Roko
comment by Roko · 2010-07-19T20:59:49.433Z · LW(p) · GW(p)

I think that the reason that governments are not researching existential risk and artificial intelligence is because (a) the actors involved in governments are shortsighted and (b) the public doesn't demand that governments research these things. It seems quite possible to me that in the future governments will put large amounts of funding into these things.

Maybe, but more likely rich individuals will see the benefits long before the public does, and then the "establishment" will organize a secret AGI project. Though this doesn't even seem remotely close to happening: the whole thing pattern-matches to some kind of craziness/scam.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T21:10:40.487Z · LW(p) · GW(p)

•I agree that there's a gap between when rich individuals see the benefits of existential risk research and when the general public sees the benefits of existential risk research.

•The gap may nevertheless be inconsequential relative to the time that it will take to build a general AI.

•I presently believe that it's not desirable for general AI research to be done in secret. Secret research proceeds slower than open research, and we may be "on the clock" because of existential risks unrelated to general AI. In my mind this factor outweighs the arguments that Eliezer has advanced for general AI research being done in secret.

Replies from: Roko, CronoDAS
comment by Roko · 2010-07-19T22:17:58.053Z · LW(p) · GW(p)

I presently believe that it's not desirable for general AI research to be done in secret.

There are shades between complete secrecy and blurting it out on the radio. Right now, human-universal cognitive biases keep it effectively secret, but in the future we may find that the military closes in on it like knowledge of how to build nuclear weapons.

comment by CronoDAS · 2010-07-19T21:26:06.215Z · LW(p) · GW(p)

That, and secrets are damn hard to keep. In all of history, there has only been one military secret that has never been exposed, and that's the composition of Greek fire. Someone is going to leak.

comment by Roko · 2010-07-19T20:34:48.919Z · LW(p) · GW(p)

Moreover, I think that if a large community of people who value Friendly AI research emerges, there will be positive network effects that heighten the productivity of the researchers.

Note that if uFAI is >> easier than FAI, then the size of the research community must be kept small, otherwise FAI research may acquire a Klaus Fuchs who goes and builds a uFAI for fun and vengeance.

This makes it all a lot harder.

comment by Roko · 2010-07-19T20:22:17.529Z · LW(p) · GW(p)

I think that at the margin a highly accountable existential risk charity would definitely be better than a third world charity. I could imagine that if a huge amount of money were being flooded into the study of existential risk, it would be more cost-effective to send money to the developing world.

Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?

If so, then it is hard to see how anything other than existential risks matters, i.e. all money devoted to the third world, animal welfare, poor people, diseases, etc., would ideally be redirected to the goal of ensuring a positive (rather than negative) singularity.

Of course this point is completely academic, because the vast majority of people won't ever believe it, but I'd be interested to hear if you buy it.
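
As a rough illustration of the arithmetic behind this argument (the numbers below are made up for the sketch; only the ~10^50 figure comes from the discussion):

```latex
% Illustrative expected-value sketch; p and the direct-aid figure are
% assumptions, only N ~ 10^50 comes from the discussion above.
\[
\mathbb{E}[\text{lives saved per dollar}] \;=\; p \cdot N
  \;\approx\; 10^{-30} \times 10^{50} \;=\; 10^{20},
\]
% where N is the number of future people and p is a deliberately tiny
% per-dollar reduction in existential risk. Even at p = 10^{-30}, this
% dwarfs the roughly 10^{-4} lives per dollar sometimes attributed to
% the most effective direct-aid charities.
```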

Replies from: multifoliaterose, rhollerith_dot_com
comment by multifoliaterose · 2010-07-19T20:46:14.868Z · LW(p) · GW(p)

Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?

Yes, I buy this argument.

If so, then it is hard to see how anything other than existential risks matters.

The question is just whether donating to an existential risk charity is the best way to avert existential risk.

•I believe that political instability is conducive to certain groups desperately racing to produce and utilize powerful technologies. This suggests that promoting political stability reduces existential risk.

•I believe that when people are leading lives that they find more fulfilling, they make better decisions, so that improving quality of life reduces existential risk.

•I believe that (all else being equal), economic growth reduces "existential risk in the broad sense." By this I mean that economic growth may prevent astronomical waste.

Of course, as a heuristic it's more important that technologies develop safely than that they develop quickly, but one could still imagine that at some point, the marginal value of an extra dollar spent on existential risk research drops so low that speeding up economic growth is a better use of money.

•Of the above three points, the first two are more compelling than the third, but the third could still play a role, and I believe that political stability, quality of life, and economic growth are pairwise correlated, so that it's possible to address all three simultaneously.

•As I said above, at the margin I think that a good charity devoted to studying existential risk should be getting more funding, but at present I do not believe that such a charity could cost-effectively absorb arbitrarily many dollars.

comment by RHollerith (rhollerith_dot_com) · 2010-07-19T22:18:20.220Z · LW(p) · GW(p)

Do you buy the argument that we should take the ~10^50 future people the universe could support into account in our expected utility calculations?

I do. In fact, I assign a person certain to be born a million years from now about the same intrinsic value as a person who exists today, though there are a lot of ways in which doing good for a person who exists today has significant instrumental value that doing good for a person certain to be born a million years from now does not.

comment by Vladimir_Nesov · 2010-07-19T19:38:48.528Z · LW(p) · GW(p)

My impression is that existential risk charity is very much unlike third-world aid charity, in that how to deliver third world aid is not a philosophically challenging problem. Everyone has a good intuitive understanding of people, of food and the lack thereof, and at least some understanding of things like incentive problems.

I suspect helping failed states efficiently and sustainably is very difficult, possibly more so than developing FAI as a shortcut. Of course, it's a completely different kind of challenge.

Replies from: Roko
comment by Roko · 2010-07-19T20:09:23.169Z · LW(p) · GW(p)

I disagree strongly. You can repeatedly get it wrong with failed states, and learn from your mistakes. The utility cost for each failure is additive, whereas the first FAI failure is fatal. Also, third world development is a process that might spontaneously solve itself via economic development and cultural change. Much to the chagrin of many charities, that might even be the optimal way to solve the problem given our resource constraints. In fact, the development of the West is a particular example of this; we started out as medieval third world nations.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T21:10:00.927Z · LW(p) · GW(p)

I disagree strongly. You can repeatedly get it wrong with failed states, and learn from your mistakes. The utility cost for each failure is additive, whereas the first FAI failure is fatal.

Distinguish the difficulty of developing an adequate theory from the difficulty of verifying that a theory is adequate. It's failure at the latter that might lead to disaster, and not failing requires a lot of informed rational caution. On the other hand, not inventing an adequate theory doesn't directly lead to a disaster, and failure to invent an adequate theory of FAI is something you can learn from (the story of my life for the last three years).

comment by FAWS · 2010-07-19T16:27:56.909Z · LW(p) · GW(p)

I read "not clear that X has positive expected value" as something like "I'm not sure an observer with perfect knowledge of all relevant information, but not of future outcomes would assign X a positive expected value."

Replies from: Roko
comment by Roko · 2010-07-19T16:56:57.991Z · LW(p) · GW(p)

observer with perfect knowledge of all relevant information, but not of future outcomes

Nonsense!

In any case, trying to guess what variously omniscient yet handicapped ideal observers would say is a dumb way to do decision theory; just be a Bayesian with a subjective probability.
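
(A minimal formalization of this point, with illustrative symbols that are not from the thread: the expected value of an action is always computed relative to some information state, and "just be a Bayesian" means using your own credences rather than those of a hypothetical ideal observer.)

```latex
% Illustrative; X is an action, o ranges over outcomes, U is a utility
% function, and I is the agent's own information state (prior + evidence).
\[
\mathbb{E}[U \mid X, I] \;=\; \sum_{o} P(o \mid X, I)\, U(o)
\]
% Different information states I yield different expected values, which
% is why "not clear that X has positive expected value" is coherent: it
% reports uncertainty within one's own credences, not a property of an
% idealized observer.
```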

Replies from: FAWS
comment by FAWS · 2010-07-19T17:04:12.263Z · LW(p) · GW(p)

To clarify: no knowledge of things like the state of individual electrons or photons, and therefore no knowledge of future "random" (chaos theory) outcomes. This was one of the possible objections I had considered but decided against addressing in advance; it turns out I should have.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T17:06:55.144Z · LW(p) · GW(p)

Logical uncertainty is also something you must fight on your own. Just as you can't know what's actually in the world if you haven't seen it, you can't know what logically follows from what you know if you haven't performed the computation.

Replies from: FAWS
comment by FAWS · 2010-07-19T17:16:34.749Z · LW(p) · GW(p)

And that was the other possible objection I had thought of!

I had meant to include that sort of thing in "relevant knowledge", but couldn't think of any good way to phrase it in the 5 seconds I thought about it. I wasn't trying to make any important argument; it was just a throwaway comment.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T17:23:33.145Z · LW(p) · GW(p)

And that was the other possible objection I had thought of!

I don't understand what this refers to. (Objection to what? What objection? In what context did you think of it?)

Replies from: FAWS
comment by FAWS · 2010-07-19T18:20:25.688Z · LW(p) · GW(p)

In response to the objection that being unsure whether the expected value of something is positive conflicts with the definition of expected value, I commented:

I read "not clear that X has positive expected value" as something like "I'm not sure an observer with perfect knowledge of all relevant information, but not of future outcomes would assign X a positive expected value."

When writing this I thought of two possible objections/comments/requests for clarification/whatever:

  1. That perfect knowledge implies knowledge of future outcomes.

  2. Your logical uncertainty point (though I had no good way to phrase this).

I briefly considered addressing them in advance, but decided against it. Both whatevers were made in fairly rapid succession (though yours apparently not with that comment in mind?), so I definitely should have.

There is no way that short throwaway comment deserved a seven-post comment thread.

comment by Roko · 2010-07-19T14:24:41.736Z · LW(p) · GW(p)

It's not clear that funding SIAI and FHI has positive expected value.

If we disagree on this, then we are not even on the same page; never mind the other counter-points you bring up.

I can't imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value.

If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an AGI research community, and they're not super safety-oriented, and there's an AI research community that isn't taking the risk seriously.

Not to mention the work that FHI does on a host of issues other than AI.

Replies from: Vladimir_Nesov, multifoliaterose
comment by Vladimir_Nesov · 2010-07-19T14:36:42.874Z · LW(p) · GW(p)

I can't imagine how you could come to the conclusion that SIAI/FHI have zero or negative expected value.

SIAI has a higher risk of producing uFAI than your average charity.

Replies from: Roko
comment by Roko · 2010-07-19T15:47:55.549Z · LW(p) · GW(p)

If you acknowledge the possibility of uFAI, then it makes even less sense to want to remove the only people whose aim is to prevent it. There is already an AGI research community, and they're not super safety-oriented, and there's an AI research community that isn't taking the risk seriously.

Replies from: Vladimir_Nesov, FAWS
comment by Vladimir_Nesov · 2010-07-19T15:59:35.017Z · LW(p) · GW(p)

They could be dangerously deluded, for example, even if their aim is right. Currently, I don't believe they are, but I gave an example of how you could possibly come to the conclusion that SIAI has negative expected value.

comment by FAWS · 2010-07-19T15:59:02.526Z · LW(p) · GW(p)

Maybe FAI is impossible, humanity's only hope is to avoid the emergence of any super-human AIs, fooming is difficult and slow enough for that to be a somewhat realistic prospect, and almost-friendly AI is a lot more dangerous because it is less likely to be destroyed in time?

Replies from: Vladimir_Nesov, Roko
comment by Vladimir_Nesov · 2010-07-19T16:05:03.849Z · LW(p) · GW(p)

Then a sane variant of SIAI should figure that out, produce documents that argue the case, and try to promote a ban on AI. (Of course, FAI is possible in principle, by its very problem statement, but might be more difficult than for humanity to grow up for itself.)

Replies from: FAWS
comment by FAWS · 2010-07-19T16:10:17.108Z · LW(p) · GW(p)

(Of course, FAI is possible in principle, by its very problem statement, but might be more difficult than for humanity to grow up for itself.)

Could you rephrase that? I have no idea what you are saying here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T16:14:34.934Z · LW(p) · GW(p)

FAI is a device for producing good outcomes. Humanity itself is such a device, to some extent. FAI as AI is an attempt to make that process more efficient, to understand the nature of good and design a process for producing more of it. If it's in practice impossible to develop such a device significantly more efficient than humanity, then we just let the future play out, guarding it against known failure modes, such as AGI with arbitrary goals.

Replies from: FAWS
comment by FAWS · 2010-07-19T16:20:41.125Z · LW(p) · GW(p)

Thank you; now I see how the short version says the same thing, even though it sounded like gibberish to me before. I think I agree.

comment by Roko · 2010-07-19T16:04:13.963Z · LW(p) · GW(p)

Maybe God will strike us down just for thinking about building a Friendly AI.

When you argue that the expected utility of action X is negative, you won't make much headway by proposing an unlikely and gerrymandered set of circumstances such that, conditional on them being true, the conditional expectation is negative.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T17:11:54.949Z · LW(p) · GW(p)

Now what kind of civilized rational conversation is that?

comment by multifoliaterose · 2010-07-19T14:31:52.842Z · LW(p) · GW(p)

What SIAI/FHI are trying to do has very high expected value. But unaccountable charities often exhibit gross inefficiency at accomplishing their stated goals, so donating to organizations with low levels of accountability may hurt the causes those charities work toward: the charities balloon, making it harder for more promising organizations working on the same causes to emerge.

Replies from: Roko
comment by Roko · 2010-07-19T15:33:54.344Z · LW(p) · GW(p)

What makes you say that SIAI and FHI are less-than-averagely accountable?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T15:36:31.148Z · LW(p) · GW(p)

I don't think that SIAI and FHI are less-than-averagely accountable. I think that the standard for accountability in the philanthropic world is in general very low and that there's an opportunity for rationalists to raise it by insisting that the organizations that they donate to demonstrate high levels of accountability.

Replies from: Roko
comment by Roko · 2010-07-19T15:51:53.658Z · LW(p) · GW(p)

You want to shut down SIAI/FHI in the hope that some other organization will spring up that otherwise wouldn't have, and cite lack of accountability as the justification, whilst admitting that most charities are very unaccountable? Why should a new organization be more accountable? Where is your evidence that SIAI/FHI are preventing such other organizations from coming into existence?

Replies from: multifoliaterose, Vladimir_Nesov
comment by multifoliaterose · 2010-07-19T16:23:57.682Z · LW(p) · GW(p)

I'm saying that things can change. In recent times there's been much more availability of information than there was in the past. As such, interested donors have means of holding charities more accountable than they did in the past. The reason that the standard for accountability in the philanthropic world is so low is that donors do not demand high accountability. If we start demanding high accountability, then charities will become more accountable.

Last year GiveWell leveraged $1 million toward charities demonstrating unusually high accountability. Since GiveWell is a young organization (founded in 2007), I expect the amount leveraged to grow rapidly over the next few years.

(Disclaimer: The point of my above remark is not to promote GiveWell in particular; GiveWell itself may need improvement. I'm just pointing to GiveWell as an example showing that incentivizing charities based on accountability is possible.)

Since SIAI/FHI are fairly new, it's reasonable to suppose that they just happened to be the first organizations on the ground and that over time there will be more and more people interested in funding/creating/(working at) organizations with goals similar to theirs. I believe that it's best for most donors interested in the causes that SIAI and FHI are working toward to place money in donor-advised funds, commit to giving the money to an organization devoted to existential risk that demonstrates high accountability, and hold out for such an organization to emerge.

(Disclaimer: This post is not anti-SIAI/FHI. Quite possibly SIAI and FHI are capable of demonstrating high levels of accountability, and if/when they do so they will be worthy of funding; the point is just that they are not presently doing so.)

Replies from: Roko
comment by Roko · 2010-07-19T17:05:58.325Z · LW(p) · GW(p)

I must say that this is a remarkably high-quality suggestion.

However, going back to the original point of the debate, the discussion was about whether money in the hands of Peter Thiel was better than money in the hands of poor Africans.

The counterfactual was not

(money in a donor-advised fund to reduce existential risks) versus (money in SIAI's account)

The counterfactual was

(money in SIAI's account) versus (money spent on alcohol, prostitutes, festivals, and other entertainment in the third world)

There's probably a name for this fallacy but I can't find it.

comment by Vladimir_Nesov · 2010-07-19T16:07:29.233Z · LW(p) · GW(p)

How is this a reply to the grandparent?

Replies from: Roko
comment by Roko · 2010-07-19T16:13:35.613Z · LW(p) · GW(p)

multifoliaterose is claiming that SIAI/FHI have zero or negative expected value. I claim that his justification for this claim is very flimsy.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T16:23:32.501Z · LW(p) · GW(p)

He is claiming uncertainty about that, but in this particular thread he is discussing accountability in particular, and you attack the overall conclusion instead of focusing on the particular argument. To fight rationalization, you must resist the temptation to lump different considerations together, and consider each on its own merits, no matter what it argues for.

You must support a good argument, even if it's used as an argument for destroying the world and torturing everyone for eternity, and you must oppose a bad argument for saving the future. That's the price you pay for epistemic rationality.

comment by nerzhin · 2010-07-19T15:01:58.816Z · LW(p) · GW(p)

I agree with most of this post, but think you need to work a little harder to provide evidence. Below I've quoted some lines with emphasis added to try to show this.

It seems very likely that for each positive integer x, the average world citizen at the xth percentile in wealth today finds life more fulfilling than the average world citizen at the xth percentile in wealth 50 years ago did.

Surely there's social science research on exactly this that you could cite?

I'd hypothesize that one is that people have a gut intuition that the wealth of the world is fixed.

This is harder to measure, but still, evidence would be nice.

On average making money seems to be good for society, not bad for society.

How would we test this?

My impressions from casual conversations and from the news are that people in the general population have a belief of the type "there's a limited supply of jobs, if jobs are being sent to Southeast Asia then there will be fewer jobs for Americans."

This is very likely true, and you cite statistics about trade more generally to back it up. But "your impressions from casual conversations"? Really?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T15:29:53.214Z · LW(p) · GW(p)

I'm happy to add to my post examples of empirical evidence for or against the claims in my post as people point them out.

In the absence of empirical evidence on a given subject matter, it's still reasonable to talk about the subject matter. For example, if there's no hard data supporting or opposing my claims then my post may prompt empirical investigation of the ostensible phenomena in question by a social scientist reader of Less Wrong.

When I say things like "it seems very likely," "I'd hypothesize that," "seems," and "my impressions," I'm indicating that (a) I don't know of hard data to back up my claims and (b) my subjective impressions point to the said conclusions.

Subjective impressions play an important role in rational decision making about complicated matters where it's impossible to get really robust hard data.

Also, one doesn't always need hard data to have a justified high degree of confidence in something. If you haven't done so already, read some of Eliezer's posts about science vs. Bayesian reasoning:

http://lesswrong.com/lw/jo/einsteins_arrogance/

http://lesswrong.com/lw/qb/science_doesnt_trust_your_rationality/

http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/

comment by James_K · 2010-07-19T10:27:30.010Z · LW(p) · GW(p)

89% of economists in the US think trade agreements between the U.S. and other countries are good for the economy, compared to 55% of the general public. Only 3% of economists think trade agreements are bad for the economy, while 28% of the general public think so.

Just to add a little colour to this quote, I would suspect some of the economists aren't answering the question in quite the same way as the non-economists. For instance, I would venture a guess that a significant fraction of the 11% of economists not in favour of trade agreements are opposing them for technical reasons like "bilateral agreements can cause trade diversion, so it's better to just unilaterally lower your trade barriers instead". There may be some of that among non-economists, but much less, I should think.

My estimate is that if one could adjust for this effect it would make the difference of opinion between economists and non-economists more stark.

comment by Mass_Driver · 2010-07-20T04:23:35.639Z · LW(p) · GW(p)

I appreciate that you have gone to some effort to avoid being needlessly controversial. Your article doesn't read like strident propaganda; it reads like a thoughtful defense of capitalism that has been composed with an eye toward teaching people about rationality.

Nevertheless, I don't think this should have been a top-level article. I have trouble seeing what it adds to the general post on fighting zero-sum bias, and I think that, despite your best efforts, your article does tend to marginalize people who are skeptical of the virtues of capitalism by implicitly portraying such skepticism as primarily caused by irrational thinking.

If you take a look at the other comments here, they seem like pretty good evidence that Politics is Still the Mindkiller. Although we've (so far) avoided trading insults over your post, many LWers are being distracted by the post's political content to the point of staging debates about ordinary political issues that have almost nothing to do with rationality.

comment by JoshuaZ · 2010-07-19T04:00:43.892Z · LW(p) · GW(p)

The system that has given rise to economic inequality is capitalism.

Minor nitpick: economic inequality exists in other systems as well, it just often isn't as severe.

In any event, a lot of what you discuss also seems to stem from a failure to understand comparative advantage. If people got that a lot of this might go away. There's a TED Talk by Matt Ridley that touches on some of these issues.

Replies from: Thomas, multifoliaterose
comment by Thomas · 2010-07-19T07:29:57.689Z · LW(p) · GW(p)

economic inequality exists in other systems as well, it just often isn't as severe.

One dies of hunger; another has a few kilos of grain and survives. Had the second had a few tons of grain, the inequality would be even greater. But the first one could also survive, working for the second guy and being paid with food.

comment by multifoliaterose · 2010-07-19T04:22:07.678Z · LW(p) · GW(p)

Minor nitpick: economic inequality exists in other systems as well, it just often isn't as severe.

Yes, I agree that there's always economic inequality; what I was trying to say was something like "without capitalism, we wouldn't be living in a world where some people make 10 million times as much money as other people." (Maybe I should edit my post accordingly?)

There's a TED Talk by Matt Ridley that touches on some of these issues.

Thanks, I'll check it out.

Replies from: JoshuaZ, Larks
comment by JoshuaZ · 2010-07-19T04:28:33.541Z · LW(p) · GW(p)

Yes, I agree that there's always economic inequality; what I was trying to say was something like "without capitalism, we wouldn't be living in a world where some people make 10 million times as much money as other people." (Maybe I should edit my post accordingly?)

Yes. But there's also another reason for this that you don't touch on: large-scale inequality requires large-scale success. If the only thing to go around is a thousand chickens, then no one is ever going to have millions more of anything than anyone else does. You can only have things where people have such stark contrasts in wealth and resources when large quantities of wealth exist.

Replies from: Vladimir_Nesov, multifoliaterose
comment by Vladimir_Nesov · 2010-07-19T07:26:07.360Z · LW(p) · GW(p)

You can only have things where people have such stark contrasts in wealth and resources when large quantities of wealth exist.

Or a large number of people to steal from. This argument doesn't quite work.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-19T13:35:39.159Z · LW(p) · GW(p)

Or a large number of people to steal from. This argument doesn't quite work.

In order for that to occur, one needs large numbers of people surviving. The current human population would not be remotely sustainable without capitalism. Even many very poor countries are only able to keep their near-starving populations from completely dying out because of it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T14:27:19.524Z · LW(p) · GW(p)

What's your point? Your argument has a problem, not necessarily your conclusion. You argue that your conclusion is right, but I don't disagree; my comment was about the specific argument you used.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-07-19T14:30:01.990Z · LW(p) · GW(p)

My point is that you can't have large numbers of people to steal from unless there's already a lot of wealth in the system, so your criticism of the argument doesn't work.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-07-19T15:10:50.032Z · LW(p) · GW(p)

It's a coincidental and not obviously correct consideration. There could be enough people on the whole Earth, however they live. There certainly can be many people living under a non-capitalistic system (China at the appropriate epoch). If there weren't enough people, there could be if Earth were bigger.

Replies from: JoshuaZ, Strange7
comment by JoshuaZ · 2010-07-19T15:22:28.232Z · LW(p) · GW(p)

It's a coincidental and not obviously correct consideration. There could be enough people on the whole Earth, however they live. There certainly can be many people living under a non-capitalistic system (China at the appropriate epoch).

That's a good point. I withdraw my argument.

comment by Strange7 · 2010-07-26T00:15:46.084Z · LW(p) · GW(p)

If Earth were bigger, those additional people would be farther away and thus unavailable to steal from.

comment by multifoliaterose · 2010-07-19T06:39:35.833Z · LW(p) · GW(p)

I edited my post to add "severe" but feel that there's no need to add a qualifier about how large-scale inequality requires large-scale success. It seems fairly likely that large-scale success by 2010 AD would not have been possible if governments had intervened in economic affairs to a much greater degree than they did.

comment by Larks · 2010-07-20T15:43:59.474Z · LW(p) · GW(p)

There is some evidence (p. 25) that economic inequality isn't correlated with economic system; specifically, that the share of income going to the poorest 10% is uncorrelated with economic freedom. As Friedman put it, the pay multiple between boss and worker in the USA and the USSR was the same; the difference was that the American boss could only fire you, whereas under communism you could be shot.

Secondly, comparing the wealth of Americans with that of the residents of Chad is a bit tangential, unless all live under one economic system. If Chad doesn't operate in the same system as America, the same comparison could simply be evidence for the superiority of the American system. The inequality within America is a more relevant piece of information here.

comment by Douglas_Knight · 2010-07-20T19:25:44.870Z · LW(p) · GW(p)

I think that it is largely meaningless to talk about the income of people like Bezos. At least, it is a mistake to identify it with changes to their net worth. By that standard, the great inequality is not between Bezos and the poor, but between Bezos this year and Bezos last year, when he lost billions of dollars!

I can't object much if you do ten-year averaging, so this is just quibbling over an order of magnitude.

comment by CronoDAS · 2010-07-19T18:58:58.193Z · LW(p) · GW(p)

A very informative article about the impact of outsourcing.

The tl;dr version:

Between 1980 and 2000, India, China, and the former Soviet bloc joined the rest of the global economy. They had enough potential workers to basically double the global labor supply, but relatively little useful capital (because of poverty and technological obsolescence). We now have twice as many workers competing to sell their labor to the same supply of capital, which means that wages are going to fall.

Globalization is good for employers and Indian/Chinese/Russian workers. It sucks for American and Western European workers - even skilled, highly educated workers, because Indian/Chinese/Russian workers are becoming skilled and highly educated very quickly.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-07-20T08:01:58.635Z · LW(p) · GW(p)

Any theories about when the amount of capital will start catching up?

Replies from: CronoDAS
comment by CronoDAS · 2010-07-20T17:29:53.250Z · LW(p) · GW(p)

From the article:

Even considering the high savings rate in the new entrants -- the World Bank estimates that China has a savings rate of 40% of GDP -- it will take 30 or so years for the world to re-attain the capital/labor ratio among the countries that had previously made up the global economy.
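
As a rough sanity check on that figure (my own back-of-the-envelope sketch; the growth rates are assumptions, not from the article): if the labor force has roughly doubled, the capital stock must roughly double to restore the old capital/labor ratio, and the required time follows from the net rate of capital accumulation.

```python
import math

# Hypothetical illustration: restoring the pre-1980 capital/labor ratio
# after the labor force doubles requires the capital stock to double.
# With net capital growth of g per year, doubling takes ln(2)/ln(1+g) years.
def years_to_double(g: float) -> float:
    return math.log(2) / math.log(1 + g)

for g in (0.02, 0.023, 0.03):
    print(f"net capital growth {g:.1%}: ~{years_to_double(g):.0f} years")

# At ~2.3% net growth this comes out to roughly 30 years, in line with
# the estimate quoted above.
```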

comment by Unknowns · 2010-07-19T06:21:36.529Z · LW(p) · GW(p)

"Many people find such stark inequality troubling." Not only for the reason stated. It is also a fact, perhaps because of the bias under consideration, that great economic inequality causes envy in humans (given their present constitution), and envy is bad. So economic inequality is also bad for this reason.

So whether a system that produces such inequality is better than one that doesn't does not depend only on how rich or poor the other society would be. It also depends on exactly how much envy is caused by the inequality, and how bad envy is.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T06:25:17.530Z · LW(p) · GW(p)

I agree with this. But if people could recognize that their envy born of inequality proper will not help them get what they want, then their envy might vanish.

Replies from: Unknowns
comment by Unknowns · 2010-07-19T06:29:48.906Z · LW(p) · GW(p)

Yes, that's true. But it might be difficult for you (or for anyone) to get all 10 million of those people in Chad to recognize that, since there will be a whole lot of bias that you will have to overcome, multiplied by 10 million.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T06:49:08.126Z · LW(p) · GW(p)

Agreed.

The amount of inequality in the world may not be very salient to the citizens of Chad on account of their geographic distance from wealthy people and low access to information. I don't know what the situation is there - I guess this is an empirical matter.

In the United States, the factor that you mention pushes in the direction of redistribution of wealth being desirable; other factors push against it. I'm presently agnostic on the subject of exactly how much the US government should redistribute wealth.

comment by [deleted] · 2015-08-16T09:22:10.463Z · LW(p) · GW(p)

Today I heard a politician on the radio. He said he moved from being a clerk under Justice Kirby, having studied law, to being an economics professor, because he thought the economic system of incentives was more powerful than the legal system of rights. I thought this was very interesting, and related to the tension between democracy and capitalism.

That's probably not what a lot of mainstream libertarians want to hear. People don't want to hear that, but it's the goddamned truth. Or it's just a clever soundbite. I'm a sucker for people who sound like real-world versions of clickbait, like BuzzFeed. At least when I replicate it for myself, it makes it easier to accept conscious positive self-talk.

Note, I don't actually know if what he says is true. I simply like things for which identifying the problem space is too difficult to search, yet someone has a stance anyway. Those people know how to let their problem-space search grow faster than their solution-space search.

comment by billswift · 2010-07-19T12:38:08.667Z · LW(p) · GW(p)

It is not possible to do better than free markets, not because of human "nature", but because of human limitations. It may someday be possible for post-humans (for example, the Vile Offspring in Stross's Accelerando) to develop superior algorithms for economic exchanges and planning; but for humans there is no possible "solution" to the coordination problem as good as prices set by a free market (a set of open exchanges). As an aside about regulation - the less regulation, the more accurate the information encoded in prices will be; but there are other aspects of society to take into account than accurate prices - like everything in real-world societies, it is a trade-off between multiple, sometimes conflicting values.

ADDED: I strongly recommend that anyone interested in the value of markets read Thomas Sowell's "Knowledge and Decisions".

Replies from: JoshuaZ, NancyLebovitz, multifoliaterose
comment by JoshuaZ · 2010-07-19T14:33:42.866Z · LW(p) · GW(p)

As an aside about regulation - the less regulation, the more accurate the information encoded in prices will be;

Not necessarily. Free markets with no regulation do a really bad job at encoding information about externalities such as pollutants.

Replies from: SilasBarta, CronoDAS
comment by SilasBarta · 2010-07-19T18:33:06.125Z · LW(p) · GW(p)

Unless rights with respect to those externalities are as well defined (and reasonable) as the other property rights that are enforced.

Of course, the typical libertarian, in my experience, incorrectly classifies such tradeable pollution rights (i.e. where the total permitted right to pollute is kept low and can be traded between polluters) as evil evil evil terrorist regulation.

Actually, that disagreement pretty much describes 18 months of my interaction with Bob Murphy, starting a month before this.

comment by CronoDAS · 2010-07-19T18:21:33.611Z · LW(p) · GW(p)

And there are also adverse selection / information asymmetry effects, as in "The Market for Lemons". (There's a big difference between the market price of a new car and that of a used car that was purchased new from a dealer one day ago.)

comment by NancyLebovitz · 2010-07-19T13:50:45.240Z · LW(p) · GW(p)

It's not possible to use detailed centralized control to do better than free markets, but this doesn't prove that a strong safety net is worse than a free market.

comment by multifoliaterose · 2010-07-19T14:21:05.954Z · LW(p) · GW(p)

I agree with NancyLebovitz's comment. Agree with you that computation issues associated with the coordination problem play a big role in why free markets are valuable (and added the word "limitations" to my post to accommodate your suggestion).

comment by CarlShulman · 2010-07-19T06:12:26.166Z · LW(p) · GW(p)

"Capitalism" is something of an ideological term, and really too vague. What are you contrasting it against? All existing states have extensive government intervention in and regulation of their economies. I agree with others that the post would be improved by a more clinical and clearly non-ideological presentation.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-07-19T06:21:01.272Z · LW(p) · GW(p)

My post was not about what the right balance between free markets and government regulation is; it was about an irrational bias that (some) people have against free markets.

When I say "The vast majority of the world's wealth has been produced under capitalism" I'm contrasting the level of economic regulation present in America and similar states with the level of economic regulation present in the China and USSR of the 1950s-1970s (for example).