[SEQ RERUN] Traditional Capitalist Values

post by MinibearRex · 2012-09-25T03:39:20.395Z · LW · GW · Legacy · 24 comments


Today's post, Traditional Capitalist Values, was originally published on 17 October 2008. A summary (taken from the LW wiki):


Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Entangled Truths, Contagious Lies, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

24 comments

Comments sorted by top scores.

comment by torekp · 2012-09-25T22:33:43.557Z · LW(p) · GW(p)

Before you start talking about a system of values, try to actually understand the values of that system as believed by its practitioners.

Dubious advice. Better advice: try to actually understand the values of that system as practiced, and look at the behaviors that are rewarded or punished by practitioners.

Replies from: ewbrownv, Viliam_Bur
comment by ewbrownv · 2012-09-25T22:47:35.537Z · LW(p) · GW(p)

I think Eli has the better starting point. If you want to understand a strange philosophical system, you have to start by figuring out what its members believe, how they think about the world, and what they profess they ought to do. Once you understand that, you'll want to go on to look at how their actual actions differ from their beliefs and how the predictions of their theories differ from reality. But if you don't start with their beliefs, you'll never be able to predict how an adherent would actually respond to any novel situation.

comment by Viliam_Bur · 2012-09-26T11:17:39.311Z · LW(p) · GW(p)

This creates some reference-class problems. You say "X is bad, look how Y is practicing it", and then I say "what Y does is not really X; in fact nobody has tried X yet; I am the first person who will try it, so please don't compare me with people who did something else".

In other words, if you can successfully claim that your system is original, you automatically receive a "get out of jail free" card. On the other hand, some systems really are original... or at least modified enough to possibly work differently from their old versions.

But it certainly is legitimate to look at how the (arguably not the same) system is practiced, note its failures, and ask proponents: "How exactly are you planning to fix this? Please be very, very specific."

Replies from: torekp
comment by torekp · 2012-09-28T00:03:04.070Z · LW(p) · GW(p)

EY says that real value systems "are phrased to generate warm fuzzies in their users". If we move from phrasings to beliefs, as you and ewbrownv suggest, that's a step in the right direction in my view. And that step requires looking at actions.

Classification is challenging, sure, especially in social matters. If you want to make predictions, you are generally well advised to do clustering and pattern matching and to pay attention to base rates. If there are many who have said A, B, C, and D, and done W, X, Y, and Z, it's a good bet that the next ABCD advocate will do all or most of WXYZ too. And usually, what we're most interested in predicting is actions other than utterances. For that, past data on actions constitutes the most vital information.
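The base-rate reasoning above can be sketched in a few lines of Python. This is purely illustrative: the advocates, claims (A-D), and actions (W-Z) are hypothetical placeholders, and `predict_actions` is an invented helper, not anything from the original discussion.

```python
# A minimal sketch of predicting actions from past advocates who
# professed similar claims (clustering by claim overlap + majority vote).
from collections import Counter

# Hypothetical history: (claims professed, actions observed) per advocate.
past_advocates = [
    ({"A", "B", "C", "D"}, {"W", "X", "Y"}),
    ({"A", "B", "C", "D"}, {"W", "X", "Z"}),
    ({"A", "B", "E"},      {"V"}),
]

def predict_actions(claims, history, min_overlap=3):
    """Pool the actions of past advocates whose professed claims overlap
    strongly with `claims`, and predict the actions a majority of them took."""
    matches = [acts for c, acts in history if len(c & claims) >= min_overlap]
    counts = Counter(a for acts in matches for a in acts)
    return {a for a, n in counts.items() if n > len(matches) / 2}

print(sorted(predict_actions({"A", "B", "C", "D"}, past_advocates)))
# -> ['W', 'X']: both ABCD advocates did W and X, so the next one likely will too.
```

The point of the sketch matches the comment: the prediction comes entirely from past actions of the reference class, not from what the new advocate says about themselves.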

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-28T07:13:04.213Z · LW(p) · GW(p)

For that, past data on actions constitutes the most vital information.

Exactly. Which is why manipulating past data is essential in politics: exaggerating the differences between ABCD and A'B'C'D', and/or claiming that WXYZ never happened and was just enemy propaganda. You can teach people to cluster incorrectly, if you make your clustering criteria well known.

(As an example, you can teach people to classify "national socialism" as something completely opposite and unrelated to "socialism"; and you can also convince them that whatever experiences people had with "socialism" in the past are completely unrelated to the experiences we would have with "socialism" in the future. This works even on people who had first-hand experience, and works better on people who didn't.)

comment by Viliam_Bur · 2012-09-26T11:30:44.411Z · LW(p) · GW(p)

Perhaps it could be useful to look at all reasonable value systems, try to extract the good parts, and put them all together.

Moving from values to applied rationality, it could also be useful to create a collection of biases by asking the rational supporters of given value systems: "which bias or lack of understanding do you find most frustrating when dealing with otherwise rational people from the opposite camp". Filter the rational answers, and make a social rationality textbook.

For example a capitalism proponent may be frustrated by people not understanding the relation between "what is seen and what is not seen". A socialism proponent may be frustrated by people not understanding that "the market can stay irrational longer than you can stay solvent". -- Perhaps not the best examples, but I hope you get the idea.

Replies from: evand
comment by evand · 2012-09-26T19:45:41.012Z · LW(p) · GW(p)

I'm frustrated by naive capitalists' failure to understand externalities.

On the other side, I'm frustrated by people failing to understand that technological progress creates new jobs in the process of destroying old ones, and that this is a net good, even though the people losing their jobs are the most visible. I suppose that's closely related to your "seen vs unseen" idea. Related: the Jevons paradox.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-09-29T03:50:12.517Z · LW(p) · GW(p)

On the other side, I'm frustrated by people failing to understand that technological progress creates new jobs in the process of destroying old ones, and that this is a net good, even though the people losing their jobs are the most visible.

Is this still considered obvious these days? The problem is that the new jobs that still need people to do them are getting more difficult. We now seem to have actually viable self-driving cars, which hints that being able to do hand-eye coordination in diverse environments no longer guarantees that a job needs a human to do it.

If we ever get automated natural language interfaces to be actually good, that's another massive sector of human labor, customer service, that just became replaceable with a bunch of $10 microprocessors and a software license. So, do we now assure everyone that good natural language interfaces will never happen, just as self-driving cars were obviously never going to work in the real world, except that now they appear to?

At least the people in high-abstraction knowledge work can be at peace knowing that if automation ever gets around to doing their jobs better than they do, they probably won't need to worry about unemployment for long, on account of everybody probably ending up dead.

Replies from: TimS
comment by TimS · 2012-09-29T04:50:35.073Z · LW(p) · GW(p)

There's a lot of status quo bias here. Once upon a time, elevators and telephones had operators, but no longer.

The problem with it is that the new jobs that still need people to do them are getting more difficult.

This is an important fact, if true. There are obvious lock-in effects. For example, unemployed auto workers have skills that are no longer valued in the market because of automation. But the claim that replacement jobs are systematically more difficult, so that newly unemployed lack the capacity to learn the new jobs, is a much stronger claim.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-09-29T06:10:59.312Z · LW(p) · GW(p)

But the claim that replacement jobs are systematically more difficult, so that newly unemployed lack the capacity to learn the new jobs, is a much stronger claim.

Yes. It's obviously true that useful things that are easier to automate will get automated more, so job loss should grow from the easily automated end. The open question is how well human skill distributions and the human notion of "difficulty" match up with what is easier to automate. It's obviously not a complete match: as a human job, bookkeeping is considered to require more skill than warehouse work, but bookkeeping is much more easily automated than warehouse work.

Human labor in basic production (farming, mining, manufacturing) basically relies on humans coming with built-in hand-eye coordination and situation awareness that has been impossible to automate satisfactorily so far. Human labor in these areas mostly consists of following instructions, though, so once we get good enough machine solutions for hand-eye coordination and situation awareness in the real world, most just-following-orders, dealing-with-dumb-matter human labor is toast.

Then there's the simpler service labor, where you deal with other humans and need to model them successfully. This is probably more difficult, AI-wise. Then again, these jobs are also less essential; people don't seem to miss the telephone and elevator operators much. Human service personnel are an obvious status signal, but if the automated solution is 100x cheaper, actual human service personnel are going to end up a luxury good, and the nearby grocery store and fast food restaurant probably won't be hiring human servers if they can make do with a clunky automated order-and-billing system. In addition to being more scarce, high-grade customer service jobs at status-conscious organizations are going to require more skill than a random grocery store cashier job.

This leaves us mostly with various types of abstract knowledge work, generally considered the kind of job that requires the most skill. Also, one dealing-with-people job sector where the above argument about replacing humans with automated systems that aren't full AIs won't work is the various security professions. There you can't do away with modeling other humans very well and being very good at social situation awareness.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-29T06:42:21.078Z · LW(p) · GW(p)

Human service personnel are an obvious status signal, but if the automated solution is 100x cheaper, actual human service personnel is going to end up a luxury good

On the other hand, the wealth said automated solutions will generate means that luxury goods are a lot more affordable.

Edit: The downside is that this means most jobs will essentially consist of playing status games; I believe the common word for this is decadence.

comment by Manfred · 2012-09-25T04:22:12.040Z · LW(p) · GW(p)

Unearned gains destroy people, nations, and teenagers. If you want to help, go into the dark places of the world and open stores that offer lower prices

AIIIIIEEEEEEEEE pschbschchch*explosion noises*

Replies from: Multiheaded, Decius
comment by Multiheaded · 2012-09-25T18:36:27.296Z · LW(p) · GW(p)

I happen to agree with your poorly phrased objection - I think that "personal responsibility" is mostly bullshit, and often a dangerous cover word for collective indifference and the just-world fallacy, much like "equality of opportunity", "self-reliance", and other applause lights - but please elaborate!

As far as I know, the conservative/right-libertarian argument against giving people in desperate need (e.g. Africans) "unearned" aid is that it doesn't just offer perverse incentives and degrade their society's mindset - it doesn't even lead to improved material conditions, as birth rates, structures of supply and demand, etc. would make things as bad or worse down the road.
However, I'm very unsure whether that criticism is true for many aid programs like basic education, healthcare, and monitoring (or experiments like OLPC and the proposed wind/solar energy systems for Africa) - it sounds plausible for agricultural "aid" and other charities that replace the local economy - so it might be a case of taking true evidence from a few specific cases and applying it indiscriminately to all "interventionist" social projects, yielding a fully general "libertarian" counterargument.
It's a commonly seen political dynamic, IMO: left-liberals follow some naive do-gooder sentiment into doing something stupid/counter-productive (and often go into denial if it blows up, like decolonization); then their cynical/unscrupulous opponents cite it as "evidence" that doing nice things for altruistic reasons is always Worse Than Hitler, and develop it into a general indictment of the altruistic/"socialist" mindset, a la Ayn Rand or... certain other writers.

Is that your take as well?

Replies from: Manfred, Viliam_Bur
comment by Manfred · 2012-09-25T20:36:28.127Z · LW(p) · GW(p)

Largely yes (I would also note efforts like GiveWell that try to document positive effects of charity).

But I quoted the second sentence for a reason: it could have strong negative impacts (outcompeting local businesses, leading to economic instability at the least and prolonged poverty at worst), but that's not considered at all. So we have charity in the first sentence, eyed with skepticism and doubt, and then we have business in the second sentence, jumped into uncritically.

comment by Viliam_Bur · 2012-09-26T11:08:00.493Z · LW(p) · GW(p)

Seems like a good heuristic could be: "don't do for Africans what Africans can do for themselves; instead do the things they can't do for themselves". Though in reality it gets messy: what if they can do X, but can't do enough X? By providing additional X you compete with those who already provided X.

Even better, all aid provided should employ local people where possible. For example, you bring the food aid to one place, and then pay local people to distribute it to other places. But pay them only a market rate. (You are already increasing local wages by increasing local demand for labor.)

Similarly you could improve healthcare by providing free education to local doctors, and then let them sell their services to their customers. (The more doctors you teach, the greater competition will be between them, and their services will be cheaper.) The same for teachers.

All these suggestions seem to me fully compatible with the "traditional capitalist values". And if this happens to be sponsored by voluntary donors, even Ayn Rand wouldn't object. -- So if someone objects to this, citing "libertarian" arguments, either they are mistaken, or that is not their true rejection.

comment by Decius · 2012-09-25T18:09:02.911Z · LW(p) · GW(p)

Surprised that 'unearned gains' and 'stealing' have many of the same effects? Or do you somehow think that those two statements are intended as a justification/imperative pair?

Replies from: Manfred
comment by Manfred · 2012-09-25T20:32:36.494Z · LW(p) · GW(p)

That first one is bad mindreading - not bad because it's unsuccessful, although it is (mindreading is dang hard, though, so like I said, I don't care), but because it's written from your point of view. If you're trying to mind-read, you should end up with something written from the point of view of your model of the other person.

The second guess is closer. Here's my take:

The consequences of charity are examined with skepticism and generalization, and then outcompeting local businesses is immediately proposed as a solution with no thought to the consequences. Each of these parts has their own problems, but writing them one after the other like that is bad out of proportion to the sum of the parts. Thus, *explosion noises*.

Replies from: Decius, ewbrownv
comment by Decius · 2012-09-25T23:50:03.086Z · LW(p) · GW(p)

I was drawing on my model of another person- but my model was based on so little information that it was very likely to be wrong.

Building on ewbrownv's response: The starving man is best off if you open a bakery which can profitably sell him bread at a price he can afford. If there are starving people, it is because food is not available at a price they can afford.

It isn't about driving other companies out of business; it's about meeting an unmet need. If you can make a greater profit than the free-market competition, it must be because you are filling needs more effectively - everyone who would have their personal needs filled more valuably by the competition patronizes the competition, driving their profits instead of yours.

Replies from: Manfred
comment by Manfred · 2012-09-26T03:47:46.246Z · LW(p) · GW(p)

Building on ewbrownv's response: The starving man is best off if you open a bakery which can profitably sell him bread at a price he can afford. If there are starving people, it is because food is not available at a price they can afford.

Right. So I am saying that businesses can affect both the price, and the affording.

Replies from: Decius
comment by Decius · 2012-09-26T16:34:28.441Z · LW(p) · GW(p)

In a trade balance sense, where imports are perceived to reduce the economic viability of an area and exports are perceived to increase it? Or are you saying that businesses that pay low wages are harmful compared to businesses which pay high wages, sell the same amount of the same product (at a higher price), make the same profit, and distribute that profit the same way?

Replies from: Manfred
comment by Manfred · 2012-09-26T18:28:50.303Z · LW(p) · GW(p)

The trade balance perspective is a pretty interesting one :P If the locals spend money at your store, how does it get back to the locals so they can buy more stuff? Is the entire cycle good for the locals (typical import/export), or is there a tragedy of the commons problem (the people hurt aren't necessarily the same ones who shop at your store)?

But what I was mainly thinking of was labor distribution. The economy is (from the pragmatist perspective) a system for distributing goods and labor to where they'll benefit people, better than a central planner could do it. If goods were previously produced locally, and now you import them, that's not making the distribution of goods worse, but it can make the distribution of labor worse, imposing costs on the local economy. Moving people around is a pain.

Replies from: Decius
comment by Decius · 2012-09-27T18:10:54.172Z · LW(p) · GW(p)

The store hires locals to man it, and pays taxes at the local rates. The local currency (henceforth: lc, symbol #) can only be exchanged for foreign currency (henceforth: dollars, $) if there are things which can be purchased using the local currency that people who have foreign currency want. If the lc isn't worth anything, nobody can make a profit buying bread for dollars and selling the bread for lc.

I think it is overwhelmingly likely that the areas with the highest unemployment will have the lowest labor costs; if that is the case, the businessman imports wheat into such an area, bakes it using local labor, and sells it in lc; he has now changed some amount of dollars into some amount of lc. He then invests that lc in a labor-intensive product and ships that product back home to sell for dollars.

Exactly the same thing can and does happen within a single currency: raw materials are shipped from where resources are plentiful to where labor is plentiful, and finished goods are shipped back out. The major hurdle is scarce raw materials limiting the number of factories that it is possible to operate, followed by political control of the economic system in order to 'keep jobs' within the privileged group.

comment by ewbrownv · 2012-09-25T23:03:54.565Z · LW(p) · GW(p)

On the contrary, a capitalist believes there is abundant evidence that increased competition in any market almost always generates positive externalities and is a net benefit to the community. Endless volumes have been written on this topic, and while there have been some attempts to argue the opposite case, none of the ones I've seen are especially convincing. So "open a business instead" isn't a thoughtless reflex; it's a carefully considered opinion about what course of action has the highest likelihood of generating positive results in an uncertain world.

The first half, of course, is a lot less firmly supported. While unearned wealth certainly has a tendency to produce negative side effects, there isn't a lot of hard data on how strong this trend is. Certainly, it seems hard to argue that a starving man is worse off if you give him a free loaf of bread.

Of course, in reality it generally isn't possible to give large amounts of resources to the needy, especially in poverty-stricken nations. Whenever you try it, the local kleptocrats will quickly arrange to steal the majority of your donations and use them to maintain their own power, so the real effect of most international charity is simply to prop up corrupt regimes and allow them to get away with oppressing their own people.

Replies from: Manfred
comment by Manfred · 2012-09-26T05:04:13.141Z · LW(p) · GW(p)

On the contrary, a capitalist believes there is abundant evidence that increased competition in any market almost always generates positive externalities, and is a net benefit to the community.

Hm. Well, this is reasonable - competition is pretty nice. But suppose a society is just plain old less efficient than ours - it uses lots of human labor rather than industrialization, doesn't have good economies of scale, etc. If I go in and open up some store that simply offers lower prices on widgets than the local stores can offer, this doesn't particularly increase competition. The other stores aren't going to build economies of scale overnight; they simply go out of business, and now I have all the local widget sales.

So 'open a business instead' isn't a thoughtless reflex, it's a carefully considered opinion about what course of action has the highest likelihood of generating positive results in an uncertain world.

It happens to be a claim that meshes with an identity as a "capitalist", and shows signs of being protected from details by the first good argument.