To what extent does improved rationality lead to effective altruism?

post by JonahS (JonahSinick) · 2014-03-20T07:08:38.997Z · LW · GW · Legacy · 156 comments

Contents

  Claim 1: "When people are more rational, they're more likely to pick their altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."
  Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."
  Claim 3: "Being more rational strengthens people's altruistic motivation."
  Putting it all together
  My own experience

It's been claimed that increasing rationality increases effective altruism. I think that this is true, but the effect size is unclear to me, so it seems worth exploring how strong the evidence for it is. I've offered some general considerations below, followed by a description of my own experience. I'd very much welcome thoughts on the effect that rationality has had on your own altruistic activities (and any other relevant thoughts).

The 2013 LW Survey found that 28.6% of respondents identified as effective altruists. This rate is much higher than the rate in the general population (even after controlling for intelligence), and because LW is distinguished by virtue of being a community focused on rationality, one might be led to the conclusion that increasing rationality increases effective altruism. But there are a number of possible confounding factors: 

  1. It's ambiguous what the respondents meant when they said that they're "effective altruists." (They could have used the term the way Wikipedia does, or they could have meant it in a more colloquial sense.)
  2. Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.
  3. Effective altruists may be more likely than members of the general population to seek to improve their epistemic rationality.
  4. The rationalist community and the effective altruist community may have become intertwined by historical accident, by virtue of having some early members in common.

So it's helpful to look beyond the observed correlation and think about the hypothetical causal pathways between increased rationality and increased effective altruism.

The above claim can be broken into several subclaims (any or all of which may be intended):

Claim 1: When people are more rational, they're more likely to pick the altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value.

Claim 2: When people are more rational, they're more likely to succeed in their altruistic endeavors.

Claim 3: Being more rational strengthens people's altruistic motivation.


Claim 1: "When people are more rational, they're more likely to pick their altruistic endeavors that they engage in with a view toward maximizing utilitarian expected value."

Some elements of effective altruism thinking are:

Claim 2: "When people are more rational, they're more likely to succeed in their altruistic endeavors."

If "rationality" is taken to be "instrumental rationality" then this is tautologically true, so the relevant sense of "rationality" here is "epistemic." 

Claim 3: "Being more rational strengthens people's altruistic motivation."

Putting it all together

The considerations above suggest that increasing the rationality of a population would only slightly (if at all) increase effective altruism at the 50th percentile of the population, but would increase effective altruism more at higher percentiles, with the skew becoming more and more extreme the further up one goes. This parallels, e.g., the effect of height on income.

My own experience

In A personal history of involvement with effective altruism I give some relevant autobiographical information. Summarizing and elaborating a bit:

How about you?


156 comments

Comments sorted by top scores.

comment by Ixiel · 2014-03-20T09:43:43.964Z · LW(p) · GW(p)

Sorry if this is obviously covered somewhere but every time I think I answer it in either direction I immediately have doubts.

Does EA come packaged with "we SHOULD maximize our altruism" or does it just assert that IF we are giving, well, anything worth doing is worth doing right?

For example, I have no interest in giving materially more than I already do, but getting more bang for my buck in my existing donations sounds awesome. Do I count? I currently think not but I've changed my mind enough to just ask.

Replies from: JonahSinick, ESRogs, Lukas_Gloor
comment by JonahS (JonahSinick) · 2014-03-20T20:20:53.558Z · LW(p) · GW(p)

It's a semantic distinction, but I would count yourself – every bit counts. There is some concern that the EA movement will become "watered down," but the concern is that epistemic standards will fall, not that the average percentage donated by members of the movement will fall.

Replies from: diegocaleiro
comment by diegocaleiro · 2014-03-21T08:25:57.508Z · LW(p) · GW(p)

Well, distortion of ideas and concepts within EA can go a long way. It doesn't hurt to be prepared for some meaning shift as well.

comment by ESRogs · 2014-03-21T08:07:25.710Z · LW(p) · GW(p)

I dunno, does Holden Karnofsky count as an EA? See: http://blog.givewell.org/2007/01/06/limits-of-generosity/.

You count in my book.

comment by Lukas_Gloor · 2014-03-20T16:40:03.120Z · LW(p) · GW(p)

I think it is viewed as something in between by the EA community. Dedicating 10% of your time and resources in order to most effectively help others would definitely count as EA according to most self-identifying EAs, while 1% probably wouldn't, but it's a spectrum anyway.

EA does not necessarily include any claims about moral realism / universally binding "shoulds", at least not as I understand it. It comes down to what you want to do.

comment by Sniffnoy · 2014-03-20T09:04:39.680Z · LW(p) · GW(p)
  • Consequentialism. In Yvain's Consequentialism FAQ, he argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value" upon reflection. Rationality seems useful for recognizing that there's a tension between these principles and other common moral intuitions, but this doesn't necessarily translate into a desire to resolve the tension nor a choice to resolve the tension in favor of these principles over others. So it seems that increased rationality does increase the likelihood that one will be a consequentialist, but that it's also not sufficient.

  • Expected value maximization. In Circular Altruism and elsewhere, Eliezer describes cognitive biases that people employ in scenarios with a probabilistic element, and how reflection can lead one to the notion that one should organize one's altruistic efforts to maximize expected value (in the technical sense), rather than making decisions based on these biases. Here too, rationality seems useful for recognizing that one's intuitions are in conflict because of cognitive biases, without necessarily entailing an inclination to resolve the tension. However, in this case, if one does seek to resolve the tension, the choice of expected value maximization over other alternatives is canonical, so rationality seems to take one further toward expected value maximization than to consequentialism.

This part seems a bit mixed up to me. This is partly because Yvain's Consequentialism FAQ is itself a bit mixed up, often conflating consequentialism with utilitarianism. "Others have nonzero value" really has nothing to do with consequentialism; one can be a consequentialist and be purely selfish, one can be non-consequentialist and be altruistic. "Morality lives in the world" is a pretty good argument for consequentialism all by itself; "others have nonzero value" is just about what type of consequences you should favor.

What's really mixed up here though is the end. When one talks about expected value maximization, one is always talking about the expected value over consequences; if you accept expected value maximization (for moral matters, anyway), you're already a consequentialist. Basically, what you've written is kind of backwards. If, on the other hand, we assume that by "consequentialism" you really meant "utilitarianism" (which, for those who have forgotten, does not mean maximizing expected utility in the sense discussed here but rather something else entirely[0]), then it would make sense; it takes you further towards maximizing expected value (consequentialism) than utilitarianism.

[0]Though it still is a flavor of consequentialism.

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2014-03-20T20:23:24.315Z · LW(p) · GW(p)

Good points. Is my intended meaning clear?

Replies from: Sniffnoy
comment by Sniffnoy · 2014-03-20T21:14:57.534Z · LW(p) · GW(p)

I mean, kind of? It's still all pretty mixed-up though. Enough people get consequentialism, expected utility maximization, and utilitarianism mixed up that I really don't think it's a good thing to further confuse them.

comment by Lukas_Gloor · 2014-03-20T16:06:49.277Z · LW(p) · GW(p)

Being more rational makes rationalization harder. When confronted with thought experiments such as Peter Singer's drowning child example, it makes it harder to come up with reasons for not changing one's actions while still maintaining a self-image of being caring. While non-rationalists often object to EA by bringing up bad arguments (e.g. by not understanding expected utility theory or decision-making under uncertainty), rationalists are more likely to draw more radical conclusions. This means they might either accept the extreme conclusion that they want to be more effectively altruistic, or accept the extreme conclusion that they don't share the premise that the thought experiment relies on, namely that they care significantly about others for their own sake. Increased rationality weeds out the "middle-ground positions" that are kept in place by rationalizations or a simple lack of further thinking (which isn't to say that all such positions are like this, of course).

It would be interesting to figure out the factors that determine which way the bullet will be bitten. I would predict that the vast majority of EA-rationalists have other EAs in their close social environment.

Replies from: Jiro, JonahSinick
comment by Jiro · 2014-03-20T18:53:49.101Z · LW(p) · GW(p)

I wouldn't suggest that people's response to dilemmas like Singer's is rationalization. Rather, I'd say that people have principles but are not very good at articulating them. If they say they should save a dying child because of some principle, that "principle" is just their best attempt to approximate the actual principle that they can't articulate.

If the principle doesn't fit when applied to another case, fixing up the principle isn't rationalization; it's recognizing that the stated principle was only ever an approximation, and trying to find a better approximation. (And if the fix up is based on bad reasoning, that's just "trying to find a better approximation, and making a mistake doing so".)

It may be easier to see when not talking about saving children. If you tell me you don't like winter days, and I point out that Christmas is a winter day and you like Christmas, and you then respond "well, I meant a typical winter day, not a special one like Christmas", that's not a rationalization, that's just revising what was never a 100% accurate statement and should not have been expected to be.

comment by JonahS (JonahSinick) · 2014-03-20T20:34:33.082Z · LW(p) · GW(p)

Yes, this is a good point.

comment by RomeoStevens · 2014-03-20T09:09:38.475Z · LW(p) · GW(p)

Interest in rationality and interest in effective altruism might both stem from an underlying dispositional variable.

My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.

Replies from: Richard_Kennaway, JonahSinick
comment by Richard_Kennaway · 2014-03-20T11:42:35.973Z · LW(p) · GW(p)

My impression is that a lack of compartmentalization is a risk factor for both LW and EA group membership.

My impression is also that it is a risk factor for religious mania.

Lack of compartmentalization, also called taking ideas seriously, when applied to religious ideas, gives you religious mania. Applied to various types of collective utilitarianism, it can produce anything from EA to antinatalism, from tithing to giving away all that you have. Applied to what it actually takes to find out how the world works, it gives you Science.

Whether it's a good thing or a bad thing depends on what's in the compartments.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-03-20T15:44:10.603Z · LW(p) · GW(p)

Whether it's a good thing or a bad thing depends on what's in the compartments.

Also on how conflicts are resolved.

comment by JonahS (JonahSinick) · 2014-03-20T20:33:25.734Z · LW(p) · GW(p)

Yes, this is a good point that I was semi-conscious of, but it wasn't salient enough to occur to me explicitly while writing my post.

comment by Said Achmiz (SaidAchmiz) · 2014-03-20T07:55:13.606Z · LW(p) · GW(p)

My interest in rationality was more driven by my interest in effective altruism than the other way around.

This comment actually makes aspects of your writings here make sense, that did not make sense to me before.

Your post, overall, seems to have the assumption underlying it, that effective altruism is rational, and obviously so. I am not convinced this is the case (at the very least, not the "and obviously so" part).

To the extent that effective altruism is anything like a "movement", a "philosophy", a "community", or really, anything less trivial than "well, altruism seems like the way to go, and we should be effective at things", it seems to me to need some justification, some arguing-for. I've not seen a whole lot of that. (Perhaps I have missed it.) I've not even seen a whole lot of really clear definitions, statements of purpose, or laying out of views.

So, do you happen to have a link handy, to something like a "this is what effective altruism is, and here's why it's a good idea, and obviously so"? (If not, then you might consider writing such a thing.)

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2014-03-20T08:18:57.557Z · LW(p) · GW(p)

My post does carry the connotation "whether or not people engage in effective altruism is significant," but I didn't mean for it to carry the connotation that effective altruism is rational – on the contrary, that's the very question that I'm exploring :-) (albeit from the opposite end of the telescope).

For an introduction to effective altruism, you could check out:

Are you familiar with them?

Thanks also, for the feedback.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-20T08:47:25.287Z · LW(p) · GW(p)

I've read Yvain's article, and reread it just now. It has the same underlying problem, which is: to the extent that it's obviously true, it's trivial[1]; to the extent that it's nontrivial, it's not obviously true.

Yvain talks about how we should be effective in the charity we choose to engage in (no big revelation here), then seems almost imperceptibly to slide into an assumed worldview where we're all utilitarians, where saving children is, of course, what we care about most, where the best charity is the one that saves the most children, etc.

To what extent are all of these things part of what "effective altruism" is? For instance (and this is just one possible example), let's say I really care about paintings more than dead children, and think that £550,000 paid to keep one mediocre painting in a UK museum is money quite well spent, even when the matter of sanitation in African villages is put to me as bluntly as you like; but I aspire to rationality, and want to purchase my artwork-retention-by-local-museums as cost-effectively as I can. Am I an effective altruist?

To put this another way: if "effective altruism" is really just "we should be effective in our altruistic actions", then it seems frankly ridiculous that less than one-third of Less Wrong readers should identify as EA-ers. What do the other 71.4% think? That we should be ineffective altruists?? That altruism in general is just a bad idea? Do those two views really account for over seventy percent of the LW readership, do you think? Surely, in this case, the effective altruist movement just really needs to get better at explaining itself, and its obvious and uncontroversial nature, to the Less Wrong audience.

But effective altruism isn't just about that, yes? As a movement, as a philosophy, it's got all sorts of baggage, in the form of fairly specific values and ethical systems (that are assumed, and never really argued for, by EA-ers), like (a specific form of) utilitarianism, belief in things like the moral value of animals, and certain other things. Or, at least — such is the perception of people around here (myself included); and that, I think, is what's behind that 28.6% statistic.

[1] Well, trivial given the background that we, as Lesswrongians who have read and understood the Sequences, are assumed to have.

I haven't watched that TED talk (though I've read some of Peter Singer's writings); I will do that tomorrow.

comment by Said Achmiz (SaidAchmiz) · 2014-03-20T19:39:04.796Z · LW(p) · GW(p)

Ok, I've watched Singer's TED talk now, thank you for linking it. It does work as a statement of purpose, certainly. On the other hand it fails as an attempt to justify or argue for the movement's core values; at the same time, it makes it quite clear that effective altruism is not just about "let's be altruists effectively". It's got some specific values attached, more specific than can justifiably be called simply "altruism".

I want to see, at least, some acknowledgment of that fact, and preferably, some attempt to defend those values. Singer doesn't do this; he merely handwaves in the general direction of "empathy" and "a rational understanding of our situation" (note that he doesn't explain what makes this particular set of values — valuing all lives equally — "rational").

Edit: My apologies! I just looked over your post again, and noticed this line, which my brain somehow ignored at first:

I'd venture the guess its [the principle of indifference's] popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality.

That (in fact, that whole paragraph) does go far toward addressing my concerns. Consider the objections in this comment at least partially withdrawn!

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2014-03-20T20:24:46.894Z · LW(p) · GW(p)

Apology accepted :-). (Don't worry, I know that my post was long and that catching everything can require a lot of energy.)

comment by Lumifer · 2014-03-20T14:46:38.939Z · LW(p) · GW(p)

In my understanding of things rationality does not involve values and altruism is all about values. They are orthogonal.

LW as a community (for various historical reasons) has a mix of rationalists and effective altruists. That's a characteristic of this particular community, not a feature of either rationalism or EA.

Replies from: Nornagest, Viliam_Bur, tom_cr
comment by Nornagest · 2014-03-20T20:38:53.609Z · LW(p) · GW(p)

rationality does not involve values and altruism is all about values. They are orthogonal.

Effective altruism isn't just being extra super altruistic, though. EA as currently practiced presupposes certain values, but its main insight isn't value-driven: it's that you can apply certain quantification techniques toward figuring out how to optimally implement your value system. For example, if an animal rights meta-charity used GiveWell-inspired methods to recommend groups advocating for veganism or protesting factory farming or rescuing kittens or something, I'd call that a form of EA despite the differences between its conception of utility and GiveWell's.

Seen through that lens, effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself. We don't dictate values, and (social pressure aside) we probably can't talk people into EA if their value structure isn't compatible with it, but we might easily make readers with the right value assumptions more open to quantified or counterintuitive methods.
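
(Not part of the original comment -- a minimal sketch, with hypothetical charities and made-up figures, of the kind of value-agnostic quantification being described: the donor's values supply the unit of "good", and the ranking method itself is indifferent to what that unit is.)

```python
# Hypothetical cost-effectiveness ranking. The method carries no values of
# its own: whatever counts as a "unit of good" (lives saved, animals spared,
# kittens rescued) is supplied by the donor. All names and figures are made up.

charities = {
    "Charity A": {"cost_usd": 100_000, "units_of_good": 40},
    "Charity B": {"cost_usd": 100_000, "units_of_good": 25},
    "Charity C": {"cost_usd": 50_000,  "units_of_good": 30},
}

def cost_per_unit(record):
    """Dollars required to produce one unit of whatever the donor values."""
    return record["cost_usd"] / record["units_of_good"]

# Rank by cost-effectiveness: cheapest unit of good first.
for name, record in sorted(charities.items(), key=lambda kv: cost_per_unit(kv[1])):
    print(f"{name}: ${cost_per_unit(record):,.0f} per unit of good")
```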

Replies from: Lumifer
comment by Lumifer · 2014-03-20T21:05:27.325Z · LW(p) · GW(p)

Effective altruism isn't just being extra super altruistic

Yes, of course.

effective altruism seems to have a lot more in common with LW-style rationality than, say, a preference for malaria eradication does by itself.

So does effective proselytizing, for example. Or effective political propaganda.

Take away the "presupposed values" and all you are left with is effectiveness.

Replies from: Nornagest
comment by Nornagest · 2014-03-20T21:14:07.253Z · LW(p) · GW(p)

Yes, LW exposure might easily dispose readers with those inclinations toward EA-like forms of political or religious advocacy, and if that's all they're doing then I wouldn't call them effective altruists (though only because politics and religion are not generally considered forms of altruism). That doesn't seem terribly relevant, though. Politics and religion are usually compatible with altruism, and nothing about effective altruism requires devotion solely to GiveWell-approved causes.

I'm really not sure what you're trying to demonstrate here. Some people have values incompatible with EA's assumptions? That's true, but it only establishes the orthogonality of LW ideas with EA if everyone with compatible values was already an effective altruist, and that almost certainly isn't the case. As far as I can tell there's plenty of room for optimization.

(It does establish an upper bound, but EA's market penetration, even after any possible LW influence, is nowhere near it.)

Replies from: Lumifer
comment by Lumifer · 2014-03-21T00:46:16.225Z · LW(p) · GW(p)

I'm really not sure what you're trying to demonstrate here.

That rationality and altruism are orthogonal. That effective altruism is predominantly altruism and "effective" plays a second fiddle to it. That rationality does not imply altruism (in case you think it's a strawman, tom_cr seems to claim exactly that).

Replies from: Nornagest
comment by Nornagest · 2014-03-21T01:20:32.443Z · LW(p) · GW(p)

If effective altruism was predominantly just altruism, we wouldn't be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been. I see this as strong evidence that it's something distinct, and therefore that it makes sense to talk about something like LW rationality methods bolstering it despite rationality's silence on pure questions of values.

Yes, it's just [a method of quantifying] effectiveness. But effectiveness in this context, approached in this particular manner, is more significant -- and, perhaps more importantly, a lot less intuitive -- than I think you're giving it credit for.

Replies from: Lumifer
comment by Lumifer · 2014-03-21T14:22:11.919Z · LW(p) · GW(p)

we wouldn't be seeing the kind of criticism of it from a traditionally philanthropic perspective that we have been

I don't know about that. First, EA is competition for a limited resource, the donors' money, and even worse, EA keeps on telling others that they are doing it wrong. Second, the idea that charity money should be spent in effective ways is pretty uncontroversial. I suspect (that's my prior, adjustable by evidence) that most of the criticism is aimed at specific recommendations of GiveWell and others, not at the concept of getting more bang for your buck.

Take a look at Bill Gates. He is explicitly concerned with the effectiveness and impact of his charity spending -- to the degree that he decided to bypass most established nonprofits and set up his own operation. Is he a "traditional" or an "effective" altruist? I don't know.

and, perhaps more importantly, a lot less intuitive

Yes, I grant you that. Traditional charity tends to rely on purely emotional appeals. But I don't know if that's enough to push EA into a separate category of its own.

comment by Viliam_Bur · 2014-03-20T17:13:07.832Z · LW(p) · GW(p)

rationality does not involve values and altruism is all about values

Rationality itself does not involve values. But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.

Replies from: Lumifer
comment by Lumifer · 2014-03-20T17:31:54.717Z · LW(p) · GW(p)

But a human who learns rationality already has those values, and rationality can help them understand those values better, decompartmentalize, and optimize more efficiently.

So? Let's say I value cleansing the Earth of untermenschen. Rationality can indeed help me achieve my goals and "optimize more efficiently". Once you start associating rationality with sets of values, I don't see how you can associate it with only "nice" values like altruism, but not "bad" ones like genocide.

Replies from: Armok_GoB, Oscar_Cunningham
comment by Armok_GoB · 2014-03-20T19:15:25.917Z · LW(p) · GW(p)

Maybe, but at least they'll be campaigning for mandatory genetic screening for genetic disorders rather than killing people of some arbitrary ethnicity they happened to fixate on.

comment by Oscar_Cunningham · 2014-03-20T18:58:14.735Z · LW(p) · GW(p)

Because there's a large set of "nice" values that most of humanity shares.

Replies from: Lumifer
comment by Lumifer · 2014-03-20T19:54:04.713Z · LW(p) · GW(p)

Along with a large set of "not so nice" values that most of humanity shares as well. A glance at history should suffice to demonstrate that.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2014-03-20T20:44:24.364Z · LW(p) · GW(p)

I think one of the lessons from history is that we can still massacre each other even when everyone is acting in good faith.

comment by tom_cr · 2014-03-20T19:08:55.834Z · LW(p) · GW(p)

rationality does not involve values

Yikes!

May I ask you, what is it you are trying to achieve by being rational? Where does the motivation come from?

Or to put it another way, if it is rational to do something one way but not another, where does the difference derive from?

In my view, rationality is use of soundly reliable procedures for achieving one's goals. Rationality is 100% about values. Altruism (depending how you define it) is a subset of rationality as long as the social contract is useful (ie nearly all the time).

Replies from: SaidAchmiz, Lumifer
comment by Said Achmiz (SaidAchmiz) · 2014-03-20T19:26:11.083Z · LW(p) · GW(p)

Altruism (depending how you define it) is a subset of rationality as long as the social contract is useful (ie nearly all the time).

Basing altruism on contractarianism is very different from basing altruism on empathy. For one thing, the results may be different (one might reasonably conclude that we, here in the United States or wherever, have no implicit social contract with the residents of e.g. Nigeria). For another, it's one level removed from terminal values, whereas empathy is not, so it's a different sort of reasoning, and not easily comparable.

(btw, I also think there's a basic misunderstanding happening here, but I'll let Lumifer address it, if he likes.)

Replies from: tom_cr
comment by tom_cr · 2014-03-20T19:55:27.719Z · LW(p) · GW(p)

Yes, non-rational (perhaps empathy-based) altruism is possible. This is connected to the point I made elsewhere that consequentialism does not axiomatically depend on others having value.

empathy is not [one level removed from terminal values]

Not sure what you mean here. Empathy may be a gazillion levels removed from the terminal level. Experiencing an emotion does not guarantee that that emotion is a faithful representation of a true value held. Otherwise "do exactly as you feel immediately inclined, at all times," would be all we needed to know about morality.

comment by Lumifer · 2014-03-20T19:52:10.358Z · LW(p) · GW(p)

rationality is use of soundly reliable procedures for achieving one's goals.

Yes. Any goals.

Rationality is 100% about values.

No. Rationality is about implementing your values, whatever they happen to be.

Altruism (depending how you define it) is a subset of rationality as long as the social contract is useful (ie nearly all the time).

An interesting claim :-) Want to unroll it?

Replies from: tom_cr
comment by tom_cr · 2014-03-20T20:13:25.379Z · LW(p) · GW(p)

Rationality is about implementing your goals

That's what I meant.

An interesting claim :-) Want to unroll it?

Altruism is also about implementing your goals (via the agency of the social contract), so rationality and altruism (depending how you define it) are not orthogonal.

Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society. If this belief is useful, then being nice to other people is useful, i.e. furthers one's goals, i.e. it is rational. I know this is simplistic, but it should be more than enough to make my point.

Perhaps you interpret altruism to be being nice in a way that is not self serving. But then, there can be no sense in which altruism could be effective or non-effective. (And also your initial reasoning that "rationality does not involve values and altruism is all about values" would be doubly wrong.)

Replies from: Lumifer
comment by Lumifer · 2014-03-20T20:34:01.628Z · LW(p) · GW(p)

Let's define altruism as being nice to other people. Let's describe the social contract as a mutually held belief that being nice to other people improves society.

Let's define things the way they are generally understood or at least close to it. You didn't make your point.

I understand altruism, generally speaking, as valuing the welfare of strangers so that you're willing to attempt to increase it at some cost to yourself. I understand social contract as a contract, a set of mutual obligations (in particular, it's not a belief).

Replies from: tom_cr
comment by tom_cr · 2014-03-20T23:03:09.135Z · LW(p) · GW(p)

Apologies if my point wasn't clear.

If altruism entails a cost to the self, then your claim that altruism is all about values seems false. I assumed we are using similar enough definitions of altruism to understand each other.

We can treat the social contract as a belief, a fact, an obligation, or goodness knows what, but it won't affect my argument. If the social contract requires being nice to people, and if the social contract is useful, then there are often cases when being nice is rational.

Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.

Thus, if I implement the belief / obligation / fact of the social contract, and that is useful, then being nice is rational.

Replies from: Lumifer
comment by Lumifer · 2014-03-21T00:51:23.840Z · LW(p) · GW(p)

If altruism entails a cost to the self, then your claim that altruism is all about values seems false

Why does it seem false? It is about values, in particular the relationship between the value "welfare of strangers" and the value "resources I have".

If the social contract requires being nice to people

It does not. The social contract requires you not to infringe upon the rights of other people and that's a different thing. Maybe you can treat it as requiring being polite to people. I don't see it as requiring being nice to people.

Furthermore, being nice in a way that exposes me to undue risk is bad for society (the social contract entails shared values, so such behaviour would also expose others to risk), so under the social contract, cases where being nice is not rational do not really exist.

I think we have a pretty major disagreement about that :-/

Replies from: tom_cr
comment by tom_cr · 2014-03-21T15:50:03.077Z · LW(p) · GW(p)

Why does it seem false?

If welfare of strangers is something you value, then it is not a net cost.

Yes, there is an old-fashioned definition of altruism that assumes the action must be non-self-serving, but this doesn't match common contemporary usage (terms like effective altruism and reciprocal altruism would be meaningless), doesn't match your usage, and is based on a gross misunderstanding of how morality comes about (I've written about this misunderstanding here - see section 4, "Honesty as meta-virtue," for the most relevant part).

Under that old, confused definition, yes, altruism cannot be rational (but it's not orthogonal to rationality - we could still try to measure how irrational any given altruistic act is; each act still sits somewhere on the scale of rationality).

It does not.

You seem very confident of that. Utterly bizarre, though, that you claim that not infringing on people's rights is not part of being nice to people.

But the social contract demands much more than just not infringing on people's rights. (By the way, where do those rights come from?) We must actively seek each other out, trade (even if it's only trade in ideas, like now), and cooperate (this discussion wouldn't be possible without certain adopted codes of conduct).

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down. You lose income, I lose the screws I need for my factory employing 500 people, we all go bust. Your knowledge of how to make screws and my expertise in making screwdrivers now count for nothing, and everybody is screwed.

We help maintain trust by being nice to each other outside our direct trading. Furthermore, by being nice to people in trouble who we have never before met, we enhance a culture of trust that people in trouble will be helped out. We therefore increase the chances that people will help us out next time we end up in the shit. Much more importantly, we reduce a major source of people's fears. Social cohesion goes up, cooperation increases, and people are more free to take risks in new technologies and / or economic ventures: society gets better, and we derive personal benefit from that.

I think we have a pretty major disagreement about that :-/

The social contract is a technology that entangles the values of different people (there are biological mechanisms that do that as well). Generally, my life is better when the lives of people around me are better. If your screw factory goes bust, then I'm negatively affected. If my neighbour lives in terror, then who knows what he might do out of fear - I am at risk. If everybody was scared about where their next meal was coming from, then I would never leave the house for fear that what food I have would be stolen in my absence - the economy collapses. Because we have this entangled utility function, what's bad for others is bad for me (in expectation), and what's bad for me is bad for everybody else. For the most part, then, any self-defeating behaviour (e.g. irrational attempts to be nice to others) is bad for society, and, in the long run, doesn't help anybody.

I hope this helps.

Replies from: Lumifer, Nornagest
comment by Lumifer · 2014-03-21T18:57:17.826Z · LW(p) · GW(p)

If welfare of strangers is something you value, then it is not a net cost.

Having a particular value cannot have a cost. Values start to have costs only when they are realized or implemented.

Costlessly increasing the welfare of strangers doesn't sound like altruism to me. Let's say we start telling people "Say yes and magically a hundred lives will be saved in Chad. Nothing is required of you but to say 'yes'." How many people will say "yes"? I bet almost everyone. And we will be suspicious of those who do not -- they would look like sociopaths to us. That doesn't mean that we should call everyone but sociopaths an altruist -- you can, of course, define altruism that way, but at this point the concept becomes diluted into meaninglessness.

We continue to have major disagreements about the social contract, but that's a big discussion that should probably go off into a separate thread if you want to pursue it.

Replies from: tom_cr
comment by tom_cr · 2014-03-21T21:58:06.739Z · LW(p) · GW(p)

Values start to have costs only when they are realized or implemented.

How? Are you saying that I might hold legitimate value in something, but be worse off if I get it?

Costlessly increasing the welfare of strangers doesn't sound like altruism to me.

OK, so we are having a dictionary writers' dispute - one I don't especially care to continue. So every place I used 'altruism,' substitute 'being decent' or 'being a good egg,' or whatever. (Please check, though, that your usage is somewhat consistent.)

But your initial claim (the one that I initially challenged) was that rationality has nothing to do with value, and that claim is manifestly false.

Replies from: Lumifer
comment by Lumifer · 2014-03-22T00:44:40.299Z · LW(p) · GW(p)

I don't think we understand each other. We start from different points, ascribe different meaning to the same words, and think in different frameworks. I think you're much confused and no doubt you think the same of me.

comment by Nornagest · 2014-03-21T20:36:46.174Z · LW(p) · GW(p)

The social contract enables specialization in society, and therefore complex technology. This works through our ability to make and maintain agreements and cooperation. If you know how to make screws, and I want screws, the social contract enables you to convincingly promise to hand over screws if I give you some special bits of paper. If I don't trust you for some reason, then the agreement breaks down.

Either you're using a broader definition of the social contract than I'm familiar with, or you're giving it too much credit. The model I know of provides (one mechanism for) the legitimacy of a government or legal system, and therefore of the legal rights it establishes, including an expectation of enforcement; but you don't need it to have media of exchange, nor cooperation between individuals, nor specialization. At most it might make these more scalable.

And of course there are models that deny the existence of a social contract entirely, but that's a little off topic.

Replies from: tom_cr
comment by tom_cr · 2014-03-21T21:38:53.036Z · LW(p) · GW(p)

If you look closely, I think you should find that legitimacy of government & legal systems comes from the same mechanism as everything I talked about.

You don't need it to have media of exchange, nor cooperation between individuals, nor specialization

Actually, the whole point of governments and legal systems (legitimate ones) is to encourage cooperation between individuals, so that's a bit of a weird comment. (Where do you think the legitimacy comes from?) And specialization trivially depends upon cooperation.

Yes, these things can exist to a small degree in a post-apocalyptic chaos, but they will not exactly flourish. (That's why we call it post-apocalyptic chaos.) But the extent to which these things can exist is a measure of how well the social contract flourishes. Don't get too hung up on exactly, precisely what 'social contract' means, it's only a crude metaphor. (There is no actual bit of paper anywhere.)

I may not be blameless, in terms of clearly explaining my position, but I'm sensing that a lot of people on this forum just plain dislike my views, without bothering to take the time to consider them honestly.

Replies from: Nornagest
comment by Nornagest · 2014-03-21T22:23:13.193Z · LW(p) · GW(p)

Actually, the whole point of governments and legal systems [...] is to encourage cooperation between individuals [...] And specialization trivially depends upon cooperation.

I have my quibbles with the social contract theory of government, but my main objection here isn't to the theory itself, but that you're attributing features to it that it clearly isn't responsible for. You don't need post-apocalyptic chaos to find situations that social contracts don't cover: for example, there is no social contract on the international stage (pre-superpower, if you'd prefer), but nations still specialize and make alliances and transfer value.

The point of government (and therefore the social contract, if you buy that theory of legitimacy) is to facilitate cooperation. You seem to be suggesting that it enables it, which is a different and much stronger claim.

Replies from: tom_cr
comment by tom_cr · 2014-03-21T23:14:59.716Z · LW(p) · GW(p)

I think that international relations is a simple extension of social-contract-like considerations.

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.) "Clearly isn't responsible for," is a phrase you should be careful before using.

You seem to be suggesting that [government] enables [cooperation]

I guess you mean that I'm saying cooperation is impossible without government. I didn't say that. Government is a form of cooperation. Albeit a highly sophisticated one, and a very powerful facilitator.

I have my quibbles with the social contract theory of government

I appreciate your frankness. I'm curious, do you have an alternative view of how government derives legitimacy? What is it that makes the rules and structure of society useful? Or do you think that government has no legitimacy?

Replies from: Nornagest
comment by Nornagest · 2014-03-21T23:31:43.017Z · LW(p) · GW(p)

If nations cooperate, it is because it is believed to be in their interest to do so. Social-contract-like considerations form the basis for that belief. (The social contract is simply that which makes it useful to cooperate.)

The social contract, according to Hobbes and its later proponents, is the implicit deal that citizens (and, at a logical extension, other subordinate entities) make with their governments, trading off some of their freedom of action for greater security and potentially the maintenance of certain rights. That implies some higher authority with compelling powers of enforcement, and there's no such thing in international relations; it's been described (indeed, by Hobbes himself) as a formalized anarchy. Using the phrase to describe the motives for cooperation in such a state extends it far beyond its original sense, and IMO beyond usefulness.

There are however other reasons to cooperate: status, self-enforced codes of ethics, enlightened self-interest. It's these that dominate in international relations, which is why I brought that up.

comment by tom_cr · 2014-03-20T18:57:08.599Z · LW(p) · GW(p)

A couple of points:

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant. For example you say

[Yvain] argues that consequentialism follows from the intuitively obvious principles "Morality Lives In The World" and "Others Have Non Zero Value"

Actually, consequentialism follows independently of "others have non zero value." Hence, classic utilitarianism's axiomatic call to maximize the good for the greatest number is dubious. Obviously, this principle is a damn fine heuristic, but it follows from consequentialism (as long as the social contract can be inferred to be useful), and isn't a foundation for it. The paper-clipping robot is still a consequentialist.

(2) Your described principle of indifference seems to me to be manifestly false.

When we talk of the value of any thing, we are not talking of an intrinsic property of the thing, but a property of the relationship between the thing and the entity holding the value. (People are also things.) If an entity holds any value in some object, the object must exhibit some causal effect on the entity. The nature and magnitude of the value held must be consequences of that causality. Thus, we must expect value to scale (in an order-reversing way) with some generalized measure of proximity, or causal connectedness. It is not rational for me to care as much about somebody outside my observable universe as I do about a member of my family.

Replies from: JonahSinick, SaidAchmiz
comment by JonahS (JonahSinick) · 2014-03-20T20:31:41.753Z · LW(p) · GW(p)

(1) You (and possibly others you refer to) seem to use the word 'consequentialism' to point to something more specific, e.g. classic utilitarianism, or some other variant.

I didn't quite have classical utilitarianism in mind. I had in mind principles like

  • Not helping somebody is equivalent to hurting the person
  • An action that doesn't help or hurt someone doesn't have moral value.

(2) Your described principle of indifference seems to me to be manifestly false.

I did mean after controlling for ability to have an impact.

Replies from: tom_cr
comment by tom_cr · 2014-03-20T20:44:36.931Z · LW(p) · GW(p)

I did mean after controlling for an ability to have impact

Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?

Don't get me wrong, I can see that quasi-general principles of equality are worth establishing and defending, but here we are usually talking about something like equality in the eyes of the state, ie equality of all people, in the collective eyes of all people, which has a (different) sound basis.

Replies from: nshepperd, JonahSinick
comment by nshepperd · 2014-03-21T05:32:26.450Z · LW(p) · GW(p)

Strikes me as a bit like saying "once we forget about all the differences, everything is the same." Is there a valid purpose to this indifference principle?

If you actually did some kind of expected value calculation, with your utility function set to something like U(thing) = u(thing) / causal-distance(thing), you would end up double-counting "ability to have an impact", because there is already a 1/causal-distance sort of factor in E(U|action) = sum { U(thing') P(thing' | action) } built into how much each action affects the probabilities of the different outcomes (which is basically what "ability to have an impact" is).
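
(Not part of nshepperd's comment -- a minimal numerical sketch of the double-counting point, with made-up outcomes, distances, and probabilities: "ability to have an impact" already enters through P(outcome | action), so also dividing utility by causal distance penalizes distant beneficiaries twice.)

```python
# Sketch of the double-counting worry, with invented numbers.

def expected_utility(action_effects, utility):
    """E(U | action) = sum over outcomes of U(outcome) * P(outcome | action)."""
    return sum(p * utility(outcome, dist) for outcome, dist, p in action_effects)

# Each tuple: (outcome, causal distance, probability the action brings it about).
# The probabilities already fall off with distance -- that is "ability to have an impact".
effects = [("help nearby person", 1.0, 0.9),
           ("help distant person", 10.0, 0.09)]

plain = lambda outcome, dist: 1.0              # value the outcomes equally
discounted = lambda outcome, dist: 1.0 / dist  # also discount utility by causal distance

# With `plain`, the distant outcome is already down-weighted once (p = 0.09 vs 0.9).
# With `discounted`, it is down-weighted a second time (0.09 * 1/10).
print(expected_utility(effects, plain))       # 0.99
print(expected_utility(effects, discounted))  # 0.909
```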

That's assuming that what JonahSinick meant by "ability to have an impact" was the impact of the agent upon the thing being valued. But it sounds like you might have been talking about the effect of the thing upon the agent? As if all you can value about something is any observable effect that thing can have on yourself (which is not an uncontroversial opinion)?

Replies from: tom_cr
comment by tom_cr · 2014-03-21T16:06:02.681Z · LW(p) · GW(p)

Value is something that exists in a decision-making mind. Real value (as opposed to fictional value) can only derive from the causal influences of the thing being valued on the valuing agent. This is just a fact, I can't think of a way to make it clearer.

Maybe ponder this:

How could my quality of life be affected by something with no causal influence on me?

comment by JonahS (JonahSinick) · 2014-03-20T21:13:07.175Z · LW(p) · GW(p)

Note that I wasn't arguing that it's rational. See the quotation in this comment. Rather, I was describing an input into effective altruist thinking.

comment by Said Achmiz (SaidAchmiz) · 2014-03-20T19:38:55.216Z · LW(p) · GW(p)

Thank you for bringing this up. I've found myself having to point out this distinction (between consequentialism and utilitarianism) a number of times; it seems a commonplace confusion around here.

Replies from: tom_cr
comment by tom_cr · 2014-03-20T19:46:52.680Z · LW(p) · GW(p)

I see Sniffnoy also raised the same point.

comment by Gunnar_Zarncke · 2014-03-20T11:41:39.873Z · LW(p) · GW(p)

Cognitive biases were developed for survival and evolutionary fitness, and these things correlate more strongly with personal well-being than with the well-being of others.

I think this needs to be differentiated further or partly corrected:

  • Cognitive biases which improve individual fitness by needing fewer resources, i.e. heuristics which arrive at the same or an almost equally good result but with fewer resources. Reducing time and energy thus benefits the individual. Example:

  • Cognitive biases which improve individual fitness by avoiding dangerous parts of life space. Examples: Risk aversion, status-quo bias (in a way this is a more abstract form of the basic fears like fear of heights or spiders, which also avoid dangerous situations (or help getting out of them quickly)).

  • Cognitive biases which improve individual fitness by increasing likelihood of reproductive success. These are probably the most complex and intricately connected to emotions. In a way emotions are comparable to biases or at least trigger specific biases. For example infatuation does activate powerful biases regarding the object of the infatuation and the situation at large: Positive thinking, confirmation bias, ...

  • Cognitive biases which improve collective fitness (i.e. benefitting other carriers of the same gene). My first examples are all not really biases but emotions: Love toward children (your own, but also others'), initial friendliness toward strangers (tit-for-tat strategy), altruism in general. An example of a real bias is the positive thinking related to children: disregard of their faults, confirmation bias. But these are I think mostly used to rationalize one's behavior in the absence of the real explanation: You love your children and expend significant energy never to be paid back, because those who do have more successful offspring.

In general I wonder how to disentangle biases from emotions. You wouldn't want to rationalize against your emotions. That will not work. And if emotions trigger/strengthen biases then suppressing biases essentially means suppressing emotion.

I think the expression of the relationship between emotions and biases is at least partly learned. It could be possible to unlearn the triggering effect of the emotions. Kind of hacking your terminal goals. The question is: if you tricked your emotions in this way, what would it still mean to have them, except providing internal sensation?

Replies from: JonahSinick, Lumifer, V_V
comment by JonahS (JonahSinick) · 2014-03-20T20:32:46.653Z · LW(p) · GW(p)

Thanks for the thoughts. These points all strike me as reasonable.

comment by Lumifer · 2014-03-20T14:59:17.372Z · LW(p) · GW(p)

You wouldn't want to rationalize against your emotions. That will not work.

Why not? Rationalizing against (unreasonable) fear seems fine to me. Rationalizing against anger looks useful. Etc., etc.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-20T18:20:15.989Z · LW(p) · GW(p)

Yes. I didn't think this through to all its consequences.

It is a well-known psychological fact that humans have a quite diverse set of basic fears that appear, develop and are normally overcome (understood, limited, suppressed,...) during childhood. Dealing with your fears, coming to terms with them, is indeed a normal process.

Quite a good read about this is Helping Children Overcome Fears.

Indeed, having them initially is in most cases adaptive (I wonder whether it would be a globally net positive if we could remove fear of spiders, weighing up the cost of lost time and energy due to spider fear versus the remaining dangerous cases).

The key point is that a very unspecific fear like fear of darkness is moderated into a form where it doesn't control you and where it only applies to cases that you didn't adapt to earlier (many people still freak out if put into extremely unusual situations which add (multiply?) multiple such fears). And whether having them in these cases is positive I can at best speculate on.

Nonetheless this argument that many fears are less adaptive than they used to be (because civilization weeded them out) is independent of the other emotions, esp. the 'positive' ones like love, empathy, happiness and curiosity, which it appears also do put you into a biased state. Would you want to get rid of these too? Which?

Replies from: Lumifer
comment by Lumifer · 2014-03-20T18:46:50.988Z · LW(p) · GW(p)

do put you into a biased state

Humans exist in permanent "biased state". The unbiased state is the province of Mr. Spock and Mr. Data, Vulcans and androids.

I think that rationality does not get rid of biases, but rather allows you to recognize them and compensate for them. Just like with e.g. fear -- you rarely lose a particular fear altogether, you just learn to control and manage it.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-21T08:46:13.961Z · LW(p) · GW(p)

You seem to mean that biases are the brain's way of perceiving the world in a way that focuses on the 'important' parts. Besides terminal goals, which just evaluate the perception with respect to utility, this acts as a filter but thereby also implies goals (namely the reduction of the importance of the filtered-out parts).

Replies from: Lumifer
comment by Lumifer · 2014-03-21T14:33:09.133Z · LW(p) · GW(p)

but thereby also implies goals

Yes, but note that a lot of biases are universal to all humans. This means they are biological (as opposed to cultural) in nature. And this implies that the goals they developed to further are biological in nature as well. Which means that you are stuck with these goals whether your conscious mind likes it or not.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-21T15:26:57.208Z · LW(p) · GW(p)

Yes. That's what I meant when I said: "You wouldn't want to rationalize against your emotions. That will not work."

If your conscious mind has goals incompatible with the effects of bioneuropsychological processes then frustration seems the mildest likely result.

Replies from: Lumifer
comment by Lumifer · 2014-03-21T15:35:47.042Z · LW(p) · GW(p)

If your conscious mind has goals incompatible with the effects of bioneuropsychological processes then frustration seems the mildest likely result.

I still don't know about that. A collection of such "incompatible goals" has been described as civilization :-)

For example, things like "kill or drive away those-not-like-us" look like biologically hardwired goals to me. Having a conscious mind have its own goals incompatible with that one is probably a good thing.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-21T16:35:09.192Z · LW(p) · GW(p)

Sure, we have to deal with some of these inconsistencies. And for some of us this is a continuous source of frustration. But we do not have to add more to these than absolutely necessary, do we?

comment by V_V · 2014-03-20T13:57:34.601Z · LW(p) · GW(p)

risk aversion is not a bias.

Replies from: Lumifer, tom_cr
comment by Lumifer · 2014-03-20T14:56:54.955Z · LW(p) · GW(p)

risk aversion is not a bias

It might or might not be. If it is coming from your utility function, it's not. If it is "extra" to the utility function it can be a bias.
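
(Not part of the original exchange -- a toy example with made-up payoffs of the first case: when risk aversion over a resource like money comes from a concave utility function, preferring the sure thing maximizes expected utility and so is not "extra" to the utility function.)

```python
import math

# A concave utility function over money: each extra dollar is worth a bit less.
def utility(dollars):
    return math.sqrt(dollars)

sure_thing = [(1.0, 50)]              # $50 with certainty
gamble     = [(0.5, 110), (0.5, 0)]   # higher expected money ($55), wider spread

def expected(lottery, f=lambda x: x):
    """Expected value of f(payoff) under a list of (probability, payoff) pairs."""
    return sum(p * f(x) for p, x in lottery)

print(expected(sure_thing), expected(gamble))                    # 50.0 vs 55.0 in money
print(expected(sure_thing, utility), expected(gamble, utility))  # ~7.07 vs ~5.24 in utility
```

Preferring the sure $50 here is straightforward expected-utility maximization, even though the gamble has the higher expected dollar amount.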

comment by tom_cr · 2014-03-20T19:37:20.019Z · LW(p) · GW(p)

I understood risk aversion to be a tendency to prefer a relatively certain payoff to one that comes with a wider probability distribution but has a higher expectation. In which case, I would call it a bias.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-20T19:46:13.795Z · LW(p) · GW(p)

It's not a bias, it's a preference. Insofar as we reserve the term bias for irrational "preferences" or tendencies or behaviors, risk aversion does not qualify.

Replies from: tom_cr
comment by tom_cr · 2014-03-20T20:28:15.786Z · LW(p) · GW(p)

I would call it a bias because it is irrational.

It (as I described it - my understanding of the terminology might not be standard) involves choosing an option that is not the one most likely to lead to one's goals being fulfilled (this is the definition of 'payoff', right?).

Or, as I understand it, risk aversion may amount to consistently identifying one alternative as better when there is no rational difference between them. This is also an irrational bias.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-20T20:48:17.462Z · LW(p) · GW(p)

Problems with your position:

1. "goals being fulfilled" is a qualitative criterion, or perhaps a binary one. The payoffs at stake in scenarios where we talk about risk aversion are quantitative and continuous.

Given two options, of which I prefer the one with lower risk but a lower expected value, my goals may be fulfilled to some degree in both cases. The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

2. The alternatives at stake are probabilistic scenarios, i.e. each alternative is some probability distribution over some set of outcomes. The expectation of a distribution is not the only feature that differentiates distributions from each other; the form of the distribution may also be relevant.

Taking risk aversion to be irrational means that you think the form of a probability distribution is irrelevant. This is not an obviously correct claim. In fact, in Rational Choice in an Uncertain World [1], Robyn Dawes argues that the form of a probability distribution over outcomes is not irrelevant, and that it's not inherently irrational to prefer some distributions over others with the same expectation. It stands to reason (although Dawes doesn't seem to come out and say this outright, he heavily implies it) that it may also be rational to prefer one distribution to another with a lower (Edit: of course I meant "higher", whoops) expectation.

[1] pp. 159-161 in the 1988 edition, if anyone's curious enough to look this up. Extra bonus: This section of the book (chapter 8, "Subjective Expected Utility Theory", where Dawes explains VNM utility) doubles as an explanation of why my preferences do not adhere to the von Neumann-Morgenstern axioms.

Replies from: tom_cr
comment by tom_cr · 2014-03-20T22:06:17.612Z · LW(p) · GW(p)

Point 1:

my goals may be fulfilled to some degree

If option 1 leads only to a goal being 50% fulfilled, and option 2 leads only to the same goal being 51% fulfilled, then there is a sub-goal that option 2 satisfies (i.e. 51% fulfillment) but option 1 doesn't, and not vice versa. Thus option 2 is better under any reasonable attitude. The payoff is the goal, by definition. The greater the payoff, the more goals are fulfilled.

The question then is one of balancing my preferences regarding risks with my preferences regarding my values or goals.

But risk is integral to the calculation of utility. 'Risk avoidance' and 'value' are synonyms.

Point 2:

Thanks for the reference.

But, if we are really talking about a payoff as an increased amount of utility (and not some surrogate, e.g. money), then I find it hard to see how choosing an option that is less likely to provide the payoff can be better.

If it is really safer (i.e. better, in expectation) to choose option 1, despite having a lower expected payoff than option 2, then is our distribution really over utility?

Perhaps you could outline Dawes' argument? I'm open to the possibility that I'm missing something.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-20T22:40:21.788Z · LW(p) · GW(p)

Re: your response to point 1: again, the options in question are probability distributions over outcomes. The question is not one of your goals being 50% fulfilled or 51% fulfilled, but, e.g., a 51% probability of your goals being 100% fulfilled vs. a 95% probability of your goals being 50% fulfilled. (Numbers not significant; only intended for illustrative purposes.)
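(If it helps to see both features at once: assuming, purely for the sake of the example, that the leftover probability mass in each case means 0% fulfillment, the expected degree of fulfillment is 0.51 × 100% = 51% for the first option and 0.95 × 50% = 47.5% for the second -- similar on average, yet the two distributions are shaped very differently.)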

"Risk avoidance" and "value" are not synonyms. I don't know why you would say that. I suspect one or both of us is seriously misunderstanding the other.

Re: point #2: I don't have the time right now, but sometime over the next couple of days I should have some time and then I'll gladly outline Dawes' argument for you. (I'll post a sibling comment.)

Replies from: tom_cr
comment by tom_cr · 2014-03-20T23:24:25.605Z · LW(p) · GW(p)

The question is not one of your goals being 50% fulfilled

If I'm talking about a goal actually being 50% fulfilled, then it is.

"Risk avoidance" and "value" are not synonyms.

Really?

I consider risk to be the possibility of losing or not gaining (essentially the same) something of value. I don't know much about economics, but if somebody could help avoid that, would people be willing to pay for such a service?

If I'm terrified of spiders, then that is something that must be reflected in my utility function, right? My payoff from being close to a spider is less than otherwise.

I'll post a sibling comment.

That would be very kind :) No need to hurry.

comment by Said Achmiz (SaidAchmiz) · 2014-03-23T21:57:11.426Z · LW(p) · GW(p)

Dawes' argument, as promised.

The context is: Dawes is explaining von Neumann and Morgenstern's axioms.


Aside: I don't know how familiar you are with the VNM utility theorem, but just in case, here's a brief primer.

The VNM utility theorem presents a set of axioms, and then says that if an agent's preferences satisfy these axioms, then we can assign any outcome a number, called its utility, written as U(x); and it will then be the case that given any two alternatives X and Y, the agent will prefer X to Y if and only if E(U(X)) > E(U(Y)). (The notation E(x) is read as "the expected value of x".) That is to say, the agent's preferences can be understood as assigning utility values to outcomes, and then preferring to have more (expected) utility rather than less (that is, preferring those alternatives which are expected to result in greater utility).

In other words, if you are an agent whose preferences adhere to the VNM axioms, then maximizing your utility will always, without exception, result in satisfying your preferences. And in yet other words, if you are such an agent, then your preferences can be understood to boil down to wanting more utility; you assign various utility values to various outcomes, and your goal is to have as much utility as possible. (Of course this need not be anything like a conscious goal; the theorem only says that a VNM-satisfying agent's preferences are equivalent to, or able to be represented as, such a utility formulation, not that the agent consciously thinks of things in terms of utility.)

(Dawes presents the axioms in terms of alternatives or gambles; a formulation of the axioms directly in terms of the consequences is exactly equivalent, but not quite as elegant.)

N.B.: "Alternatives" in this usage are gambles, of the form ApB: you receive outcome A with probability p, and otherwise (i.e. with probability 1–p) you receive outcome B. (For example, your choice might be between two alternatives X and Y, where in X, with p = 0.3 you get consequence A and with p = 0.7 you get consequence B, and in Y, with p = 0.4 you get consequence A and with p = 0.6 you get consequence B.) Alternatives, by the way, can also be thought of as actions; if you take action X, the probability distribution over the outcomes is so-and-so; but if you take action Y, the probability distribution over the outcomes is different.

(If all of this is old hat to you, apologies; I didn't want to assume.)


The question is: do our preferences satisfy VNM? And: should our preferences satisfy VNM?

It is commonly said (although this is in no way entailed by the theorem!) that if your preferences don't adhere to the axioms, then they are irrational. Dawes examines each axiom, with an eye toward determining whether it's mandatory for a rational agent to satisfy that axiom.

Dawes presents seven axioms (which, as I understand it, are equivalent to the set of four listed in the wikipedia article, just with a difference in emphasis), of which the fifth is Independence.

The independence axiom says that A ≻ B (i.e., A is preferred to B) if and only if ApC ≻ BpC. In other words, if you prefer receiving cake to receiving pie, you also prefer receiving (cake with probability p and death with probability 1–p) to receiving (pie with probability p and death with probability 1–p).
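As a sanity check on what the axiom rules in and out, here is a small sketch with invented utilities. The point is just that the common outcome C contributes the same (1 - p) * U(C) term to both sides, so it can never flip the ordering:

```python
# Sketch: for any utilities and any p > 0, E[U(ApC)] > E[U(BpC)] iff U(A) > U(B).
# Utility values are invented; "death" stands in for the common outcome C.

U = {"cake": 10.0, "pie": 7.0, "death": -100.0}

def mixed_eu(outcome, common, p):
    """Expected utility of the gamble: `outcome` with probability p, else `common`."""
    return p * U[outcome] + (1 - p) * U[common]

for p in (0.9, 0.5, 0.1, 0.01):
    assert (mixed_eu("cake", "death", p) > mixed_eu("pie", "death", p)) == (U["cake"] > U["pie"])
# The (1 - p) * U[common] term is identical on both sides, so it cancels;
# that cancellation is exactly what the independence axiom encodes.
```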

Dawes examines one possible justification for violating this axiom — framing effects, or pseudocertainty — and concludes that it is irrational. (Framing is the usual explanation given for why the expressed or revealed preferences of actual humans often violate the independence axiom.) Dawes then suggests another possibility:

Is such irrationality the only reason for violating the independence axiom? I believe there is another reason. Axiom 5 [Independence] implies that the decision maker cannot be affected by the skewness of the consequences, which can be conceptualized as a probability distribution over personal values. Figure 8.1 shows (Note: This is my reproduction of the figure. I've tried to make it as exact as possible.) the skewed distributions of two different alternatives. Both distributions have the same average, hence the same expected personal value, which is a criterion of choice implied by the axioms. These distributions also have the same variance.

If the distributions in Figure 8.1 were those of wealth in a society, I have a definite preference for distribution a; its positive skewness means that income can be increased from any point — an incentive for productive work. Moreover, those people lowest in the distribution are not as distant from the average as in distribution b. In contrast, in distribution b, a large number of people are already earning a maximal amount of money, and there is a "tail" of people in the negatively skewed part of this distribution who are quite distant from the average income.[5] If I have such concerns about the distribution of outcomes in society, why not of the consequences for choosing alternatives in my own life? In fact, I believe that I do. Counter to the implications of prospect theory, I do not like alternatives with large negative skews, especially when the consequences in the negatively skewed part of the distribution have negative personal value.

[5] This is Dawes' footnote; it talks about an objection to "Reaganomics" on similar grounds.

Essentially, Dawes is asking us to imagine two possible actions. Both have the same expected utility; that is, the "degree of goal satisfaction" which will result from each action, averaged appropriately across all possible outcomes of that action (weighted by probability of each outcome), is exactly equal.

But the actual probability distribution over outcomes (the form of the distribution) is different. If you do action A, then you're quite likely to do all right, there's a reasonable chance of doing pretty well, and a small chance of doing really great. If you do action B, then you're quite likely to do pretty well, there's a reasonable chance of doing OK, and a small chance of doing disastrously, ruinously badly. On average, you'll do equally well either way.
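To put made-up numbers on that picture (this is only a sketch of the idea; it is not Dawes' Figure 8.1):

```python
# Sketch: two actions with the same expected utility but opposite skews.
# All numbers are invented for illustration.

A = [(40, 0.5), (60, 0.4), (120, 0.1)]   # decent / pretty good / rare windfall
B = [(80, 0.7), (40, 0.2), (-80, 0.1)]   # pretty good / OK / rare disaster

def mean(dist):
    return sum(u * p for u, p in dist)

def third_central_moment(dist):
    m = mean(dist)
    return sum(((u - m) ** 3) * p for u, p in dist)

print(mean(A), mean(B))                                  # 56.0 56.0 -- same expectation
print(third_central_moment(A), third_central_moment(B))  # positive skew vs negative skew
```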

The Independence axiom dictates that we have no preference between those two actions. To prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom. Is this irrational? Dawes says no. I agree with him. Why shouldn't I prefer to avoid the chance of disaster and ruin? Consider what happens when the choice is repeated, over the course of a lifetime. Should I really not care whether I occasionally suffer horrible tragedy or not, as long as it all averages out?

But if it's really a preference — if I'm not totally indifferent — then I should also prefer less "risky" (i.e. less negatively skewed) distributions even when the expectation is lower than that of distributions with more risk (i.e. more negative skew) — so long as the difference in expectation is not too large, of course. And indeed we see such a preference not only expressed and revealed in actual humans, but enshrined in our society: it's called insurance. Purchasing insurance is an expression of exactly the preference to reduce the negative skew in the probability distribution over outcomes (and thus in the distributions of outcomes over your lifetime), at the cost of a lower expectation.

Replies from: nshepperd, tom_cr
comment by nshepperd · 2014-03-25T01:56:30.352Z · LW(p) · GW(p)

This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function. How do you show that there's an actual violation of the independence axiom from this example? Note that the axioms require that there exist a utility function u :: outcome -> real such that you maximise expected utility, not that some particular function (such as the two graphs you've drawn) actually represents your utility.

In other words, you haven't really shown that "to prefer action A, with its attendant distribution of outcomes, to action B with its distribution, is to violate the axiom" since the two distributions don't have the form ApC, BpC with A≅B. Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.
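To illustrate what I mean by "regular risk aversion", here is a minimal sketch under an assumed log-of-wealth utility function; the function and the numbers are inventions for the example, not anything you or Dawes proposed. The agent is risk averse with respect to wealth, yet maximizes expected utility and so satisfies the axioms by construction:

```python
import math

# Sketch: risk aversion with respect to money arising from a concave utility function.
# The agent maximizes E[U] with U(wealth) = log(wealth) -- an assumption for illustration.

def expected_log_utility(dist):
    """dist: list of (wealth, probability) pairs."""
    return sum(math.log(w) * p for w, p in dist)

sure_thing = [(100.0, 1.0)]
gamble = [(50.0, 0.5), (160.0, 0.5)]     # expected wealth = 105 > 100

print(expected_log_utility(sure_thing))  # log(100) ~= 4.605
print(expected_log_utility(gamble))      # 0.5*log(50) + 0.5*log(160) ~= 4.494
# The agent prefers the sure 100 despite its lower expected *money*:
# risk aversion w.r.t. wealth, with no violation of the axioms w.r.t. utility.
```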

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-25T04:57:53.371Z · LW(p) · GW(p)

This sounds like regular risk aversion, which is normally easy to model by transforming utility by some concave function.

Assuming you privilege some reference point as your x-axis origin, sure. But there's no good reason to do that. It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person. This is clearly irrational; this kind of "regular risk aversion" is what Dawes refers to when he talks about independence axiom violation due to framing effects, or "pseudocertainty".

Note that the axioms require that there exist a utility function u :: outcome -> real such that you maximise expected utility, not that some particular function (such as the two graphs you've drawn) actually represents your utility.

The graphs are not graphs of utility functions. See the first paragraph of my post here.

How do you show that there's an actual violation of the independence axiom from this example? ... the two distributions don't have the form ApC, BpC with A≅B.

Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative; i.e., alternatives may be constructed as probability mixtures of other alternatives, which may themselves be... etc. If it's the apparent continuity of the graphs that bothers you, then you have but to zoom in on the image, and you may pretend that the pixelation you see represents discreteness of the distribution. The point stands unchanged.

Simply postulating that the expected utilities are the same only shows that that particular utility function is not correct, not that no valid utility function exists.

The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of "personal value") will lead to this sort of preference between two such distributions. In fact, it is difficult to imagine any utility function that both corresponds to a person's preferences and doesn't lead to preferences for less negatively skewed probability distributions over outcomes, over more negatively skewed ones.

Replies from: Jiro, nshepperd
comment by Jiro · 2014-03-25T14:35:49.497Z · LW(p) · GW(p)

It turns out that people are risk averse no matter what origin point you select (this is one of the major findings of prospect theory), and thus the concave function you apply will be different (will be in different places along the x-axis) depending on which reference point you present to a person.

Couldn't this still be rational in general if the fact that a particular reference point is presented provides information under normal circumstances (though perhaps not rational in a laboratory setting)?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-25T17:33:35.578Z · LW(p) · GW(p)

I think you'll have to give an example of such a scenario before I could comment on whether it's plausible.

comment by nshepperd · 2014-03-25T09:05:17.754Z · LW(p) · GW(p)

Assuming you privilege some reference point as your x-axis origin, sure.

What? This has nothing to do with "privileged reference points". If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn't mean I am irrational, it means you don't have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).

That is what I mean by "regular risk aversion".

The graphs are not graphs of utility functions.

I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.

Indeed they do; because, as one of the other axioms states, each outcome in an alternative may itself be an alternative;

Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?

The point Dawes is making, I think, is that any utility function (or at least, any utility function where the calculated utilities more or less track an intuitive notion of "personal value") will lead to this sort of preference between two such distributions.

And I say that is assuming the conclusion. And, if it is only established for some set of utility functions that "more or less track an intuitive notion of 'personal value'", it fails to imply the conclusion that the independence axiom is violated for a rational human.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-25T17:32:33.675Z · LW(p) · GW(p)

What? This has nothing to do with "privileged reference points". If I am [VNM-]rational, with utility function U, and you consider an alternative function $ = exp(U) (or an affine transformation thereof), I will appear to be risk averse with respect to $. This doesn't mean I am irrational, it means you don't have the correct utility function. And in this case, you can turn the wrong utility function into the right one by taking log($).

That is what I mean by "regular risk aversion".

It actually doesn't matter what the values are, because we know from prospect theory that people's preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can't have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.

I know, they are graphs of P(U). Which is implicitly a graph of the composition of a probability function over outcomes with (the inverse of) a utility function.

True enough. I rounded your objection to the nearest misunderstanding, I think.

And I say that is assuming the conclusion.

Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!

The core of Dawes' argument is not a mathematical one, to be sure (and it would be difficult to make it into a mathematical argument, without some sort of rigorous account of what sorts of outcome distribution shapes humans prefer, which in turn would presumably require substantial field data, at the very least). It's an argument from intuition: Dawes is saying, "Look, I prefer this sort of distribution of outcomes. [Implied: 'And so do other people.'] However, such a preference is irrational, according to the VNM axioms..." Your objection seems to be: "No, in fact, you have no such preference. You only think you do, because you are envisioning your utility function incorrectly." Is that a fair characterization?

Your talk of the utility function possibly being wrong makes me vaguely suspect a misunderstanding. It's likely I'm just misunderstanding you, however, so if you already know this, I apologize, but just in case:

If you have some set of preferences, then (assuming your preferences satisfy the axioms), we can construct a utility function (up to positive affine transformation). But having constructed this function — which is the only function you could possibly construct from that set of preferences (up to positive affine transformation) — you are not then free to say "oh, well, maybe this is the wrong utility function; maybe the right function is something else".

Of course you might instead be saying "well, we haven't actually constructed any actual utility function from any actual set of preferences; we're only imagining some vague, hypothetical utility function, and a vague hypothetical utility function certainly can be the wrong function". Fair enough, if so. However, I once again invite you to exhibit a utility function — or even a preference ordering — which does not give rise to a preference for less-negatively-skewed distributions.

Okay, which parts, specifically, are A, B and C, and how is it established that the agent is indifferent between A and B?

I'm afraid an answer to this part will have to wait until I have some free time to do some math.

Replies from: nshepperd
comment by nshepperd · 2014-03-26T00:12:53.498Z · LW(p) · GW(p)

It actually doesn't matter what the values are, because we know from prospect theory that people's preferences about risks can be reversed merely by framing gains as losses, or vice versa. No matter what shape the function has, it has to have some shape — it can't have one shape if you frame alternatives as gains but a different, opposite shape if you frame them as losses.

Yes, framing effects are irrational, I agree. I'm saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).

"No, in fact, you have no such preference. You only think you do, because your are envisioning your utility function incorrectly."

That would be one way of describing my objection. The argument Dawes is making is simply not valid. He says "Suppose my utility function is X. Then my intuition says that I prefer certain distributions over X that have the same expected value. Therefore my utility function is not X, and in fact I have no utility function." There are two complementary ways this argument may break:

If you take as a premise that the function X is actually your utility function (i.e. "assuming I have a utility function, let X be that function"), then you have no license to apply your intuition to derive preferences over various distributions over the values of X. Your intuition has no facilities for judging meaningless numbers that have only abstract mathematical reasoning tying them to your actual preferences. If you try to shoehorn the abstract constructed utility function X into your intuition by imagining that X represents "money" or "lives saved" or "amount of something nice", you are making a logical error.

On the other hand, if you start by applying your intuition to something it understands (such as "money" or "amount of nice things") you can certainly say "I am risk averse with respect to X", but you have not shown that X is your utility function, so there's no license to conclude "I (it is rational for me to) violate the VNM axioms".

Are you able to conceive of a utility function, or even a preference ordering, that does not give rise to this sort of preference over distributions? Even in rough terms? If so, I would like to hear it!

No, but that doesn't mean such a thing does not exist!

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-26T01:21:47.884Z · LW(p) · GW(p)

Yes, framing effects are irrational, I agree. I'm saying that the mere existence of risk aversion with respect to something does not demonstrate the presence of framing effects or any other kind of irrationality (departure from the VNM axioms).

Well, now, hold on. Dawes is not actually saying that (and neither am I)! The claim is not "risk aversion demonstrates that there's a framing effect going on (which is clearly irrational, and not just in the 'violates VNM axioms' sense)". The point is that risk aversion (at least, risk aversion construed as "preferring less negatively skewed distributions") constitutes departure from the VNM axioms. The independence axiom strictly precludes such risk aversion.

Whether risk aversion is actually irrational upon consideration — rather than merely irrational by technical definition, i.e. irrational by virtue of VNM axiom violation — is what Dawes is questioning.

The argument Dawes is making is simply not valid. He says ...

That is not a good way to characterize Dawes' argument.

I don't know if you've read Rational Choice in an Uncertain World. Earlier in the same chapter, Dawes, introducing von Neumann and Morgenstern's work, comments that utilities are intended to represent personal values. This makes sense, as utilities by definition have to track personal values, at least insofar as something with more utility is going to be preferred (by a VNM-satisfying agent) to something with less utility. Given that our notion of personal value is so vague, there's little else we can expect from a measure that purports to represent personal value (it's not like we've got some intuitive notion of what mathematical operations are appropriate to perform on estimates of personal value, which utilities then might or might not satisfy...). So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.

So the only real assumption behind those graphs is that this agent's utility function tracks, in some vague sense, an intuitive notion of personal value — meaning what? Nothing more than that this person places greater value on things he prefers, than on things he doesn't prefer (relatively speaking). And that (by definition!) will be true of the utility function derived from his preferences.

It seems impossible that we can have a utility function that doesn't give rise to such preferences over distributions. Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all). But such a preference constitutes independence axiom violation, as mentioned...

Replies from: nshepperd
comment by nshepperd · 2014-03-26T02:13:29.994Z · LW(p) · GW(p)

The point is that risk aversion (at least, risk aversion construed as "preferring less negatively skewed distributions") constitutes departure from the VNM axioms.

No, it doesn't. Not unless it's literally risk aversion with respect to utility.

So any VNM utility values, it would seem, will necessarily match up to our intuitive notions of personal value.

That seems to me a completely unfounded assumption.

Whatever your utility function is, we can construct a pair of graphs exactly like the ones pictured (the x-axis is not numerically labeled, after all).

The fact that the x-axis is not labeled is exactly why it's unreasonable to think that just asking your intuition which graph "looks better" is a good way of determining whether you have an actual preference between the graphs. The shape of the graph is meaningless.

comment by tom_cr · 2014-03-24T15:49:46.861Z · LW(p) · GW(p)

Thanks very much for the taking the time to explain this.

It seems like the argument (very crudely) is that, "if I lose this game, that's it, I won't get a chance to play again, which makes this game a bad option." If so, again, I wonder if our measure of utility has been properly calibrated.

It seems to me like the expected utility of option B, where I might get kicked out of the game, is lower than the expected utility of option A, where this is impossible. Your example of insurance may not be a good one, as one insures against financial loss, but money is not identical to utility.

Nonetheless, those exponential distributions make a very interesting argument.

I'm not entirely sure, I need to mull it over a bit more.

Thanks again, I appreciate it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-24T17:47:21.227Z · LW(p) · GW(p)

Just a brief comment: the argument is not predicated on being "kicked out" of the game. We're not assuming that even the lowest-utility outcomes cause you to no longer be able to continue "playing". We're merely saying that they are significantly worse than average.

Replies from: tom_cr
comment by tom_cr · 2014-03-24T18:18:23.026Z · LW(p) · GW(p)

Sure, I used that as what I take to be the case where the argument would be most easily recognized as valid.

One generalization might be something like, "losing makes it harder to continue playing competitively." But if it becomes harder to play, then I have lost something useful, i.e. my stock of utility has gone down, perhaps by an amount not reflected in the inferred utility functions. My feeling is that this must be the case, by definition (if the assumed functions have the same expectation), but I'll continue to ponder.

The problem feels related to Pascal's wager - how to deal with the low-probability disaster.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-24T18:36:02.255Z · LW(p) · GW(p)

I really do want to emphasize that if you assume that "losing" (i.e. encountering an outcome with a utility value on the low end of the scale) has some additional effects, whether that be "losing takes you out of the game", or "losing makes it harder to keep playing", or whatever, then you are modifying the scenario, in a critical way. You are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

I want to urge you to take those graphs literally, with the x-axis being Utility, not money, or "utility but without taking into account secondary effects", or anything like that. Whatever the actual utility of an outcome is, after everything is accounted for — that's what determines that outcome's position on the graph's x-axis. (Edit: And it's crucial that the expectation of the two distributions is the same. If you find yourself concluding that the expectations are actually different, then you are misinterpreting the graphs, and should re-examine your assumptions; or else suitably modify the graphs to match your assumptions, such that the expectations are the same, and then re-evaluate.)

This is not a Pascal's Wager argument. The low-utility outcomes aren't assumed to be "infinitely" bad, or somehow massively, disproportionately, unrealistically bad; they're just... bad. (I don't want to get into the realm of offering up examples of bad things, because people's lives are different and personal value scales are not absolute, but I hope that I've been able to clarify things at least a bit.)

Replies from: tom_cr
comment by tom_cr · 2014-03-24T20:11:09.623Z · LW(p) · GW(p)

If you assume.... [y]ou are, in effect, stipulating that that outcome actually has a lower utility than it's stated to have.

Thanks, that focuses the argument for me a bit.

So if we assume those curves represent actual utility functions, he seems to be saying that the shape of curve B, relative to A, makes A better (because A is bounded in how bad it could be, but unbounded in how good it could be). But since the curves are supposed to quantify betterness, I am attracted to the conclusion that curve B hasn't been correctly drawn. If B is worse than A, how can their average payoffs be the same?

To put it the other way around, maybe the curves are correct, but in that case, where does the conclusion that B is worse come from? Is there an algebraic formula to choose between two such cases? What if A had a slightly larger decay constant? At what point would A cease to be better?

I'm not saying I'm sure Dawes' argument is wrong, I just have no intuition at the moment for how it could be right.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-03-24T20:45:49.651Z · LW(p) · GW(p)

A point of terminology: "utility function" usually refers to a function that maps things (in our case, outcomes) to utilities. (Some dimension, or else some set, of things on the x-axis; utility on the y-axis.) Here, we instead are mapping utility to frequency, or more precisely, outcomes (arranged — ranked and grouped — along the x-axis by their utility) to the frequency (or, equivalently, probability) of the outcomes' occurrence. (Utility on the x-axis, frequency on the y-axis.) The term for this sort of graph is "distribution" (or more fully, "frequency [or probability] distribution over utility of outcomes").
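A toy illustration of the two mappings, with invented outcomes and numbers (nothing here is from Dawes):

```python
# Sketch: the same toy data viewed two ways.
# Numbers are invented purely to illustrate the terminology.

# A utility function maps outcomes to utilities (outcomes on the x-axis, utility on the y-axis).
utility_function = {"ruin": -80, "ok": 40, "good": 80}

# The lottery we face assigns probabilities to outcomes.
probability = {"ruin": 0.1, "ok": 0.2, "good": 0.7}

# The graphs in question are the *distribution over utility*:
# utility on the x-axis, probability (or frequency) on the y-axis.
distribution_over_utility = {utility_function[o]: p for o, p in probability.items()}
print(distribution_over_utility)   # {-80: 0.1, 40: 0.2, 80: 0.7}
```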

To the rest of your comment, I'm afraid I will have to postpone my full reply; but off the top of my head, I suspect the conceptual mismatch here stems from saying that the curves are meant to "quantify betterness". It seems to me (again, from only brief consideration) that this is a confused notion. I think your best bet would be to try taking the curves as literally as possible, attempting no reformulation on any basis of what you think they are "supposed" to say, and proceed from there.

I will reply more fully when I have time.

comment by More_Right · 2014-04-24T10:15:12.712Z · LW(p) · GW(p)

I think it is rationally optimal for me to not give any money away since I need all of it to pursue rationally-considered high-level goals. (Much like Eliezer probably doesn't give away money that could be used to design and build FAI --because given the very small number of people now working on the problem, and given the small number of people capable of working on the problem, that would be irrational of him). There's nothing wrong with believing in what you're doing, and believing that such a thing is optimal. ...Perhaps it is optimal. If it's not, then why do it? If money --a fungible asset-- won't help you to do it, it's likely "you're doing it wrong."

Socratic questioning helps. Asking the opposite of a statement, or its invalidation, helps.

Most people I've met lack rational high-level goals, and have no prioritization schemes that hold up to even cursory questioning; therefore, they could burn their money or give it to the poor and get a better system-wide "high level" outcome than buying another piece of consumer electronics or whatever else they were going to buy for themselves. Heck, if most people had vastly more money, they'd kill themselves with it -- possibly with high-glycemic-index carbohydrates, or heroin. Before they get to effective altruism, they have to get to rational self-interest, and disavow coercion as a "one size fits all problem solver."

Since that's not going to happen, and since most people are actively involved with worsening the plight of humanity, including many LW members, I'd suggest that a strong dose of the Hippocratic Oath prescription is in order:

First, do no harm.

Sure, the human-level tiny brains are enamored with modern equivalents of medical "blood-letting." But you're an early-adopter, and a thinker, so you don't join them. First, do no harm!

Sure, your tiny brained relatives over for Thanksgiving vote for "tough on crime" politicians. But you patiently explain jury nullification of law to them, indicating that one year prior to marijuana legalization in Colorado by the vote, marijuana was de facto legalized because prosecutors were experiencing too much jury nullification of law to save face while trying to prosecute marijuana offenders. Then, you show them Sanjay Gupta's heartbreaking video documentary about how marijuana prohibition is morally wrong.

You do what you have to to change their minds. You present ideas that challenge them, because they are human beings who need something other than a bland ocean of conformity to destruction and injustice. You help them to be better people, taking the place of "strong benevolent Friendly AI" in their lives.

In fact, for simple dualist moral decisions, the people on this board can function as FAI.

The software for the future we want is ours to evolve, and the hardware designers' to build.

comment by dankane · 2014-03-27T15:53:43.648Z · LW(p) · GW(p)

Another effect: people on LW are massively more likely to describe themselves as effective altruists. My moral ideals were largely formed before I came into contact with LW, but not until I started reading was I introduced to the term "effective altruism".

comment by David_Gerard · 2014-03-22T13:19:52.170Z · LW(p) · GW(p)

The question appears to assume that LW participation is identically equal to improved rationality. Involvement in LW and involvement in EA are pretty obviously going to be correlated, given that they're closely related subcultures.

If this is not the case: Do you have a measure to hand of "improved rationality" that doesn't involve links to LW?

comment by [deleted] · 2014-04-06T14:35:26.185Z · LW(p) · GW(p)

The principle of indifference. — The idea that from an altruistic point of view, we should care about people who are unrelated to us as much as we do about people who are related to us. For example, in The Life You Can Save: How to Do Your Part to End World Poverty, Peter Singer makes the case that we should show a similar degree of moral concern for people in the developing world who are suffering from poverty as we do to people in our neighborhoods. I'd venture the guess its popularity among rationalists is an artifact of culture or a selection effect rather than a consequence of rationality. Note that concern about global poverty is far more prevalent than interest in rationality (while still being low enough so that global poverty is far from alleviated).

Without deliberately bringing up mind-killy things, I would have to ask, if we tie together Effective Altruism and rationality, why Effective Altruists are not socialists of some sort.

Rawls does not deny the reality of political power, nor does he claim that it has its roots elsewhere than in the economic arrangements of a society. But by employing the models of analysis of the classical liberal tradition and of neo-classical [welfare] economics, he excludes that reality from the pages of his book.

-- Understanding Rawls: A Reconstruction and Critique of a Theory of Justice, by Robert Paul Wolff

I actually picked that up on the recommendation from an LW thread to read about Rawls, but I hope the highlight gets Wolff's point across. Elsewhere, he phrases it roughly as: by default, the patterns of distribution arise directly from the patterns of production, and therefore we can say or do very little about perceived distributional problems if we are willing to change nothing at all about the underlying patterns of production producing (ahaha) the problematic effect.

Or in much simpler words: why do we engage in lengthy examinations of sending charity to people who could look after themselves just fine if we would stop robbing them of resources? The success of GiveDirectly should be causing us to reexamine the common assumption that poor people are poor for some reason other than that they lack property to capitalize for themselves.

Anyway, I'm going to don my flame-proof suit now. (And in my defense, my little giving this year so far has already included $720 to CareerVillage for advising underprivileged youth in the First World and $720 to GiveDirectly for direct transfer to the poor in the Third World. I support interventions that work!)

Replies from: Cyan, Lumifer
comment by Cyan · 2014-04-06T16:04:43.091Z · LW(p) · GW(p)

I would have to ask, if we tie together Effective Altruism and rationality, why Effective Altruists are not socialists of some sort.

I'm a loyal tovarisch of Soviet Canuckistan, and I have to say that doesn't seem like a conundrum to me: there's no direct contradiction between basing one's charitable giving on evidence about charitable organizations' effectiveness and thinking that markets in which individuals are free to act will lead to more preferable outcomes than markets with state-run monopolies/monopsonies.

Replies from: None
comment by [deleted] · 2014-04-06T16:10:12.293Z · LW(p) · GW(p)

The whole point of the second quotation and the paragraph after that was to... Oh never mind, should I just assume henceforth that contrary to its usage in socialist discourse, to outsiders "socialism" always means state-owned monopoly? In that case, what sort of terminology should I use for actual worker control of the means of production, and such things?

Replies from: Cyan
comment by Cyan · 2014-04-07T01:48:06.721Z · LW(p) · GW(p)

"Anarcho-syndicalism" maybe? All's I know is that my socialized health insurance is a state-run oligopsony/monopoly (and so is my province's liquor control board). In any event, if direct redistribution of wealth is the key identifier of socialism, then Milton Friedman was a socialist, given his support for negative income taxes.

Prolly the best thing would be to avoid jargon as much as possible when talking to outsiders and just state what concrete policy you're talking about. For what it's worth, it seems to me that you've used the term "socialism" to refer to two different, conflated, specific policies. In the OP you seem to be talking about direct redistribution of money, which isn't necessarily equivalent to the notion of worker control of the means of production that you introduce in the parent; and the term "socialism" doesn't pick out either specific policy in my mind. (An example of how redistribution and worker ownership are not equivalent: on Paul Krugman's account, if you did direct redistribution right now, you'd increase aggregate demand but not even out ownership of capital. This is because current household consumption seems to be budget-constrained in the face of the ongoing "secular stagnation" -- if you gave poor people a whack of cash or assets right now, they'd (liquidate and) spend it on things they need rather than investing/holding it. )

Replies from: None
comment by [deleted] · 2014-04-07T08:10:09.174Z · LW(p) · GW(p)

For what it's worth, it seems to me that you've used the term "socialism" to refer to two different, conflated, specific policies. In the OP you seem to be talking about direct redistribution of money, which isn't necessarily equivalent to the notion of worker control of the means of production that you introduce in the parent; and the term "socialism" doesn't pick out either specific policy in my mind.

Ah, here's the confusion. No, in the OP I was talking about worker control of the means of production, and criticizing Effective Altruism for attempting to fix poverty and sickness through what I consider an insufficiently effective intervention, that being direct redistribution of money.

Replies from: Cyan
comment by Cyan · 2014-04-07T10:53:18.158Z · LW(p) · GW(p)

Oh, I see. Excellent clarification.

How would you respond to (what I claim to be) Krugman's account, i.e., in current conditions poor households are budget-constrained and would, if free to do so, liquidate their ownership of the means of production for money to buy the things they need immediately? Just how much redistribution of ownership are you imagining here?

Replies from: None
comment by [deleted] · 2014-04-07T11:10:26.491Z · LW(p) · GW(p)

Basically, I accept that critique, but only at an engineering level. Ditto on the "how much" issue: it's engineering. Neither of these issues actually makes me believe that a welfare state strapped awkwardly on top of a fundamentally industrial-capitalist, resource-capitalist, or financial-capitalist system - and constantly under attack by anyone perceiving themselves as a put-upon well-heeled taxpayer to boot - is actually a better solution to poverty and inequality than a more thoroughly socialist system in which such inequalities and such poverty just don't happen in the first place (because they're not part of the system's utility function).

I certainly believe that we have not yet designed or located a perfect socialist system to implement. What I do note, as addendum to that, is that nobody who supports capitalism believes the status quo is a perfect capitalism, and most people who aren't fanatical ideologues don't even believe we've found a perfect capitalism yet. The lack of a preexisting design X and a proof that X Is Perfect do not preclude the existence of a better system, whether redesigned from scratch or found by hill-climbing on piecemeal reforms.

All that lack means is that we have to actually think and actually try -- which we should have been doing anyway, if we wish to act according to our profession to be rational.

Replies from: Cyan
comment by Cyan · 2014-04-07T13:15:08.048Z · LW(p) · GW(p)

Good answer. (Before this comment thread I was, and I continue to be, fairly sympathetic to these efforts.)

Replies from: None
comment by [deleted] · 2014-04-07T13:29:45.964Z · LW(p) · GW(p)

Thanks!

comment by Lumifer · 2014-04-07T04:06:17.773Z · LW(p) · GW(p)

if we tie together Effective Altruism and rationality, why Effective Altruists are not socialists of some sort.

An interesting question :-D How do you define "socialism" in this context?

Replies from: None
comment by [deleted] · 2014-04-07T08:05:36.528Z · LW(p) · GW(p)

I would define it here as any attempt to attack economic inequality at its source by putting the direct ownership of capital goods and resultant products in the hands of workers rather than with a separate ownership class. This would thus include: cooperatives of all kinds, state socialism (at least, when a passable claim to democracy can be made), syndicalism, and also Georgism and "ecological economics" (which tend to nationalize/publicize the natural commons and their inherent rentier interest rather than factories, but the principle is similar).

A neat little slogan would be to say: you can't fix inequality through charitable redistribution, not by state sponsorship nor individual effort, but you could fix it through "predistribution" of property ownership to ensure nobody is proletarianized in the first place. (For the record: "proletarianized" means that someone lacks any means of subsistence other than wage-labor. There are very many well-paid proletarians among the Western salariat, the sort of people who read this site, but even this well-off kind of wage labor becomes extremely problematic when the broader economy shifts -- just look at how well lawyers or doctors are faring these days in many countries!)

I do agree that differences of talent, skill, education and luck in any kind of optimizing economy (so not only markets but anything else that rewards competence and punishes incompetence) will eventually lead to some inequalities on grounds of competence and many inequalities due to network effects, but I don't think this is a good ethical excuse to abandon the mission of addressing the problem at its source. It just means we have to stop pretending to be wise by pointing out the obvious problems and actually think about how to accomplish the goal effectively.

Replies from: Eugine_Nier, Eugine_Nier, Lumifer
comment by Eugine_Nier · 2014-04-09T01:59:55.306Z · LW(p) · GW(p)

For the record: "proletarianized" means that someone lacks any means of subsistence other than wage-labor. There are very many well-paid proletarians among the Western salariat, the sort of people who read this site,

The distinction you're making with the word "proletarianized" doesn't really make sense when the market value of the relevant skills are larger than the cost of the means of production.

but even this well-off kind of wage labor becomes extremely problematic when the broader economy shifts -- just look at how well lawyers or doctors are faring these days in many countries!

Owning the means of production doesn't help here since the broader economic shifts can make your factory obsolete just as easily as they can make your skills obsolete.

comment by Eugine_Nier · 2014-04-08T04:33:02.984Z · LW(p) · GW(p)

I would define it here as any attempt to attack economic inequality at its source by putting the direct ownership of capital goods and resultant products in the hands of workers rather than with a separate ownership class.

That doesn't work in the long term. What happens to a socialized factory when demand for the good it produces decreases? A capitalist factory would lay off some workers, but a socialized factory can't do that, so winds up making uncompetitive products. The result is that the socialized factory will eventually get out-competed by capitalist factories. If you force all factories to be socialized, this will eventually lead to economic stagnation. (By the way, this is not the only problem with socialized factories.)

Replies from: None
comment by [deleted] · 2014-04-08T07:45:54.091Z · LW(p) · GW(p)

but a socialized factory can't do that

Why not? Who says? Are we just automatically buying into everything the old American propagandists say now ;-)?

I've not even specified a model and you're already making unwarranted assumptions. It sounds to me like you've got a mental stop-sign.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-09T01:51:17.339Z · LW(p) · GW(p)

I've not even specified a model

But you did refer to real world examples.

comment by Lumifer · 2014-04-07T14:54:46.988Z · LW(p) · GW(p)

That's a reasonable definition.

By the way, do you think that EA should tackle the issue of economic inequality, or does EA assert that itself?

Replies from: None
comment by [deleted] · 2014-04-07T15:01:47.881Z · LW(p) · GW(p)

I think EA very definitely targets both poverty and low quality of life. I think factual evidence shows that inequality appears to have a detectable effect on both poverty (defined even in an absolute sense: less egalitarian populations develop completely impoverished sub-populations more easily) and well-being in general (the well-being effects, surprisingly, show up all across the class spectrum). Therefore, someone who cares about optimizing away absolute poverty and optimizing for well-being should care about optimizing for the level of inequality which generates the least poverty and the most well-being.

Obviously the factual portions of this belief are subject to update, on top of my own innate preference for more egalitarian interactions (which is strong enough that it has never seemed to change). My preferences could tilt me towards seeing/acknowledging one set of evidence rather than another, towards believing that "the good is the true", but I was actually as surprised as anyone when the sociological findings showed that rich people are worse off in less-egalitarian societies.

EDIT: Here is a properly rigorous review, and here is a critique.

Replies from: Lumifer
comment by Lumifer · 2014-04-07T15:17:12.346Z · LW(p) · GW(p)

To make an obvious observation, targeting poverty and targeting economic inequality are very different things. It is clear that EA targets "low quality of life", but my question was whether EA people explicitly target economic inequality -- or they don't and you think they should?

Note that asserting that inequality affects absolute poverty does NOT imply that getting rid of inequality is the best method of dealing with poverty.

Replies from: None
comment by [deleted] · 2014-04-07T15:29:41.457Z · LW(p) · GW(p)

As far as I'm aware, EA people do not currently explicitly target economic inequality. I am attempting to claim that they have instrumental reason to shift towards doing so.

Note that asserting that inequality affects absolute poverty does NOT imply that getting rid of inequality is the best method of dealing with poverty.

It certainly doesn't, but my explanation of capitalism also attempted to show that in this particular system, inequality and absolute poverty share a common cause. Proletarianized Nicaraguan former-peasants are absolutely poor, but their poverty was created by the same system that is churning out inequality -- or so I believe on the weight of my evidence.

If there's a joint of reality I've completely failed to cleave, please let me know.

Replies from: Lumifer
comment by Lumifer · 2014-04-07T15:41:17.342Z · LW(p) · GW(p)

poverty was created by the same system that is churning out inequality

Poverty is not created. Poverty is the default state that you do (or do not) get out of.

Historical evidence shows that capitalist societies are pretty good at getting large chunks of their population out of poverty. The same evidence shows that alternatives to capitalism are NOT good at that. The absolute poverty of Russian peasants was not created by capitalism. Neither was the absolute poverty of the Kalahari Bushmen or the Yanomami. But look at what happened to China once capitalism was allowed in.

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2014-04-08T08:34:07.998Z · LW(p) · GW(p)

There's evidence that capitalism plus social democracy works to increase well being. You can't infer from that that capitalism is doing all the heavy lifting.

Replies from: Lumifer
comment by Lumifer · 2014-04-08T17:22:58.565Z · LW(p) · GW(p)

There's evidence that capitalism plus social democracy works to increase well being.

There is evidence that capitalism without social democracy works to increase well-being. Example: contemporary China.

comment by [deleted] · 2014-04-07T16:03:08.800Z · LW(p) · GW(p)

You're speaking in completely separate narratives to counter a highly specific scenario I had raised -- and one which has actually happened!

Here is the fairly consistent history of actually existing capitalism, in most societies: an Enclosure Movement of some sort modernizes old property structures; this proletarianizes some of the peasants or other subsistence farmers (that is, it removes them from their traditionally-inhabited land); this creates a cheap labor force, which is then used in new industries but which has a lower standard of living than its immediate peasant forebears; and this has in fact created both absolute poverty and inequality (compared to the previous non-capitalist system). Over time, the increased productivity raises the mean standard of living. Whether or not the former peasants get access to that new higher standard of living seems to be a function of how egalitarian society's arrangements are at that time; history appears to show that capitalism usually starts out quite radically inegalitarian but is eventually forced to make egalitarian concessions (these being known as "social democracy" or "welfare programs" in political-speak) that, after decades of struggle, finally raise the formerly-peasant now-proletarians above the peasant standard of living.

Note how there is in fact causation here: it all starts when a formerly non-industrial society chooses to modernize and industrialize by shifting from a previous (usually feudal and mostly agricultural) mode of production that involved a complicated system of inter-class concessions to mutual survival, to a new and simplified system of mass production, freehold property titles, and no inter-class concessions whatsoever. It's not natural (or rather, predestined: the past doesn't reach around the present to write the future); it's a function of choices people made, which are themselves functions of previous causes, and so on backwards in time.

All we "radicals" are actually pointing out is that if we've seen this movie before, and know how it goes (or if we're simply moral enough to reason in an acausal or even "acausal + veil of ignorance" mode about the transition), we can make different choices at any point in the history to achieve a more desirable result far more directly.

It all adds up to Genre Savvy.

Replies from: Lumifer
comment by Lumifer · 2014-04-07T16:12:19.453Z · LW(p) · GW(p)

this has in fact created both absolute poverty and inequality (compared to the previous non-capitalist system)

I am sorry, are you claiming that there was neither inequality nor absolute poverty in pre-capitalist societies??

I don't understand what you are trying to say about causality.

we can make different choices at any point in the history

Huh? You can make different choices in the present to affect the future. That's very far from "at any point in history".

to achieve a more desirable result far more directly.

So, any idea why all attempts to do so have ended pretty badly so far?

Replies from: None
comment by [deleted] · 2014-04-07T16:24:20.240Z · LW(p) · GW(p)

I am sorry, are you claiming that there was neither inequality nor absolute poverty in pre-capitalist societies??

I'm claiming, on shakier evidence than previously but still to the best of my own knowledge, that late-feudal societies were somewhat more egalitarian than early-capitalist ones. The peasants were better off than the proletarians: they were poor, but they had homes, wouldn't starve on anyone's arbitrary fiat, and lower population density made disease less rampant.

Huh? You can make different choices in the present to affect the future. That's very far from "at any point in history".

The point being: if we consider this history of capitalism as a "story", then different countries are in different places in the story (including some parts I didn't list in the story because they're just plain not universal). If you know what sort of process is happening to you, you can choose differently than if you were ignorant (this is a truism, but it bears repeating when so many people think economic history is some kind of destiny unaffected by any choice beyond market transactions).

So, any idea why all attempts to do so have ended pretty badly so far?

They haven't. You're raising the most famous failed leftist experiments to salience and falsely generalizing. In fact, in the case of Soviet Russia and Red China, you're basically just generalizing from two large, salient examples. Then there's the question of whether "socialism fails" actually cleaves reality at the joint: was it socialism failing in Russia and China, or totalitarianism, or state-managerialism (remember, I've already dissolved to the level where these are three different things that can combine but don't have to)? Remember, until the post-WW2 prosperity of the social-democratic era, the West was quite worried about how quickly and effectively the Soviets were able to grow their economy, especially their military economy.

In other posts I've listed off quite a lot of different options and engineering considerations for pro-egalitarian and anti-poverty economic optimizations. The fundamental point I'm trying to hammer home, though, is that these are engineering considerations. You do not have to pick some system, like "American capitalism" or "Soviet Communism" or "European social democracy", and treat it as an actual goal. Unfortunately, this is what the mind-killed ideologues of the world do: they say, "Obamacare runs counter to the principles of American capitalism and interferes with the Free Market, therefore it's bad." They fail to dissolve the ethics of social systems into a consequentialist ethics regarding the actual people, and instead just stop thinking.

No non-paradoxical problem is unsolvable; people just don't like thinking hard enough to solve real problems.

Replies from: Lumifer, TheAncientGeek
comment by Lumifer · 2014-04-07T16:44:42.610Z · LW(p) · GW(p)

I'm claiming, on shakier evidence than previously but still to the best of my own knowledge, that late-feudal societies were somewhat more egalitarian than early-capitalist ones. The peasants were better off than the proletarians: they were poor, but they had homes, wouldn't starve on anyone's arbitrary fiat, and lower population density made disease less rampant.

I don't think any of that is true -- they neither "had homes" (using the criteria under which the proletariat didn't), nor "wouldn't starve", and disease wasn't "less rampant" either. You seem to be romanticizing some imagined pastoral past.

Not to mention that you're talking about human universals, so I don't see any reason to restrict ourselves geographically to Europe or time-wise to the particular moment when pre-capitalist societies were changing over to capitalist ones. Will you make the same claims with respect to Asian or African societies? And how about comparing peak-feudal to peak-capitalist societies?

If you know what sort of process is happening to you, you can choose differently than if you were ignorant

Well, of course, but I fail to see the implications.

They haven't.

At the whole-society level, they have. Besides Mondragon you might have mentioned kibbutzim which are still around. However neither kibbutzim nor Mondragon are rapidly growing and taking over the world. Kibbutzim are in decline and Mondragon is basically just another corporation, surviving but not anomalously successful.

Can you run coops and anarcho-syndicalist communes in contemporary Western societies? Of course you can! The same way you can run religious cults and new age retreats and whatnot. Some people like them and will join them. Some. Very few.

The fundamental point I'm trying to hammer home, though, is that these are engineering considerations.

I strongly disagree. I think there are basic-values considerations as well as "this doesn't do what you think it does" considerations.

Replies from: None
comment by [deleted] · 2014-04-07T16:54:09.269Z · LW(p) · GW(p)

However neither kibbutzim nor Mondragon are rapidly growing and taking over the world.

I see no reason why an optimal system for achieving the human good in the realm of economics must necessarily conquer and destroy its competitors or, as you put it, "take over the world". In fact, the popular distaste for imperialism rather strongly tells me quite the opposite!

"Capitalism paper-clips more than the alternatives" is not a claim in favor of capitalism.

I don't think any of that is true

Ok? Can we nail this down to a dispute over specific facts and have one of us update on evidence, or do you want to keep this in the realm of narrative?

Can you run coops and anarcho-syndicalist communes in contemporary Western societies? Of course you can! The same way you can run religious cults and new age retreats and whatnot. Some people like them and will join them. Some. Very few.

Have you considered that in most states within the USA, you cannot actually charter a cooperative? There's simply no statute for it.

In those states and countries where cooperatives can be chartered, they're a successful if not rapidly spreading form of business, and many popular brands are actually, when you check their incorporation papers, cooperatives. More so in Europe.

Actually, there's been some small bit of evidence that cooperatives thrive more than "ordinary" companies in a financial crisis (no studies have been done about "all the time", so to speak), because their structure keeps them more detached and less fragile with respect to the financial markets.

I strongly disagree. I think there are basic-values considerations as well as "this doesn't do what you think it does" considerations.

Then I think you should state the basic values you're serving, and argue (very hard, since this is quite the leap you've taken!) that "taking over the world" is a desirable instrumental property for an economic system, orthogonally to its ability to cater to our full set of actual desires and values.

Be warned that, to me, it looks as if you're arguing in favor of Clippy-ness being a virtue.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2014-04-08T08:55:57.679Z · LW(p) · GW(p)

There's an argument that you should have the greatest variety of structures and institutions possible to give yourself Talebian robustness -- to prevent everything failing at the same time for the same reasons.

Replies from: None
comment by [deleted] · 2014-04-08T11:09:13.814Z · LW(p) · GW(p)

Yes, there is. This certainly argues in favor of trying out additional institutional forms.

comment by Lumifer · 2014-04-07T17:15:55.402Z · LW(p) · GW(p)

I see no reason why an optimal system for achieving the human good in the realm of economics must necessarily conquer and destroy its competitors or, as you put it, "take over the world".

Because, in the long term, there can be only one. If your "optimal system" does not produce as much growth/value/output as the competition, the competition will grow relatively stronger (exponentially, too) every day. Eventually your "optimal system" will be taken over, by force or by money, or just driven into irrelevance.

Look at, say, religious communities like the Amish. They can continue to exist as isolated pockets as long as they are harmless. But they have no future.

or do you want to keep this in the realm of narrative?

This seems to be a 10,000-foot-level discussion, so I think we can just note our disagreement without getting bogged down in historical minutiae.

in most states within the USA, you cannot actually charter a cooperative?

Any particular reason why you can't set it up as a partnership or a corporation with a specific corporate charter?

Besides, I don't think your claim is true. Credit unions are very widespread and they're basically coops. There are mutual savings banks and mutual insurance companies which are, again, basically coops.

Then I think you should state the basic values you're serving,

That line of argument doesn't have anything to do with taking over the world. It is focused on trade-offs between values (specifically, that in chasing economic equality you're making bad trade-offs -- though of course that depends on what your values are) and on a claim that you're mistaken about the consequences of establishing particular socioeconomic systems.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T09:38:16.695Z · LW(p) · GW(p)

Note that systems aren't competing to produce $$$; they are competing to produce QoL. Europeans are happy to live in countries an inch away from bankruptcy because they get free healthcare and rich cultural heritage and low crime...

Note also that societies use each other's products and services, and the natural global ecosystem might have niches for herbivores as well as carnivores.

Replies from: Lumifer
comment by Lumifer · 2014-04-08T17:28:49.451Z · LW(p) · GW(p)

Note that systems aren't competing to produce $$$; they are competing to produce QoL.

That depends on your analysis framework. If you're thinking about voluntary migrations, quality of life matters a lot. But if you're thinking of scenarios like "We'll just buy everything of worth in this country", for example, $$$ matter much more. And, of course, if push comes to shove and the military gets involved...

the natural global ecosystem might have niches for herbivores as well as carnivores.

That's a good point. But every player in an ecosystem must produce some value in order not to die out.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T18:06:39.197Z · LW(p) · GW(p)

Attempts at buyouts and world domination tend to produce concerted opposition.

Replies from: Lumifer
comment by Lumifer · 2014-04-08T18:14:47.108Z · LW(p) · GW(p)

For a historical example, consider what happened to the Americas when the Europeans arrived en masse.

Replies from: bramflakes, TheAncientGeek
comment by bramflakes · 2014-04-08T20:28:04.544Z · LW(p) · GW(p)

It's not that accurate to describe Europeans as "conquering" the Americas; it was more like moving in after smallpox did most of the dirty work and then mopping up the remainder. A better example is Africa, where it was unquestionably deliberate acts of aggression that saw nearly the whole continent subdued.

Replies from: TheAncientGeek, Lumifer
comment by TheAncientGeek · 2014-04-08T21:14:53.122Z · LW(p) · GW(p)

Either way, it's not relevant to the "best" political system taking over, because it's about opportunity, force of numbers, technology, and GERMS.

Replies from: bramflakes
comment by bramflakes · 2014-04-08T21:50:31.839Z · LW(p) · GW(p)

And genes.

Replies from: TheAncientGeek, None
comment by TheAncientGeek · 2014-04-08T21:54:49.956Z · LW(p) · GW(p)

If one genotype took over, that would be fragile. Like pandas. Diversity is robustness.

Replies from: bramflakes
comment by bramflakes · 2014-04-08T22:50:19.756Z · LW(p) · GW(p)

I dunno man, milk digestion worked out well for Indo-Europeans.

comment by [deleted] · 2014-04-08T22:57:45.187Z · LW(p) · GW(p)

If by genes you mean smallpox and hepatitis resistance genes, yes.

comment by Lumifer · 2014-04-08T20:46:04.244Z · LW(p) · GW(p)

That too, but I think the Americas are a better example, because nowadays the mainstream media is full of bison excrement about how Native Americans led wise, serene, and peaceful lives in harmony with Nature until the stupid and greedy white man came and killed them all.

Replies from: bramflakes
comment by bramflakes · 2014-04-08T21:00:40.663Z · LW(p) · GW(p)

Maybe it's a provincial thing. Europeans get the same or similar thing about our great-grandfathers' treatment of Africans. Here in Britain we get both :/

comment by TheAncientGeek · 2014-04-08T19:21:10.153Z · LW(p) · GW(p)

For a counterexample, see WWII. Sure, overwhelming technological superiority is overwhelming. But that's unlikely to happen again in a globalised world.

Replies from: Nornagest, Lumifer
comment by Nornagest · 2014-04-08T20:16:02.224Z · LW(p) · GW(p)

WWII was, to oversimplify, provoked by a coalition of states attempting regional domination; but their means of doing so were pretty far from the "outcompete everyone else" narrative upthread, and in fact you could view them as being successful in proportion to how closely they hewed to it. I know the Pacific theater best; there, we find Japan's old-school imperialistic moves meeting little concerted opposition until they attacked Hawaii, Hong Kong, and Singapore, thus bringing the US and Britain's directly administered eastern colonies into the war. Pearl Harbor usually gets touted as the start of the war on that front, but in fact Japan had been taking over swaths of Manchuria, Mongolia, and China (in roughly that order) since 1931, and not at all quietly. You've heard of the Rape of Nanking? That happened in 1937, before the Anschluss was more than a twinkle in Hitler's eye.

If the Empire of Japan had been content to keep picking on less technologically and militarily capable nations, I doubt the Pacific War as such would ever have come to a head.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T20:30:30.127Z · LW(p) · GW(p)

In the modern world, attempts at takeover produce concerted opposition, because the modern world has the technological and practical mechanisms to concert opposition. There are plenty of examples of takeovers in the ancient world because no one could send the message, "we've been taken over and you could be next."

Replies from: Nornagest, Lumifer
comment by Nornagest · 2014-04-08T21:05:58.836Z · LW(p) · GW(p)

There are plenty of examples of takeovers in the ancient world because no one could send the message, "we've been taken over and you could be next."

Funny. I've just finished reading Herodotus's Histories, the second half of which could be described as chronicling exactly that message and the response to it.

(There's a bit more to it, of course. In summary, the Greek-speaking Ionic states of western Turkey rebelled against their Persian-appointed satraps, supported by Athens and its allies; after putting down the revolt, Persia's emperor Darius elected to subjugate Athens and incidentally the rest of Aegean Greece in retaliation. Persia, in a series of campaigns, then subjugated Thrace, Macedon, and a number of Aegean islands before being stopped at Marathon; years later, Darius's son Xerxes decided to go for Round 2 and met with much the same results.)

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T21:17:06.371Z · LW(p) · GW(p)

And Genghis and Attila...

comment by Lumifer · 2014-04-08T20:38:24.011Z · LW(p) · GW(p)

In the modern world, attempts at takeover produce concerted opposition

Remind me, who owns that peninsula in the Black Sea now...?

because no one could send the message,"we've been taken oher and you could be next"

I think you severely underestimate the communication capabilities of the ancient world. You also overestimate the willingness of people to die for somebody else far away.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T21:09:36.244Z · LW(p) · GW(p)

Remind me, who's not in the G8 anymore?

Brits fought in Borneo during WWII. You may be succumbing to the Typical Country Fallacy.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-04-09T02:03:36.130Z · LW(p) · GW(p)

Remind me, who's not in the G8 anymore?

Remind me why anyone should care about the G8?

comment by Lumifer · 2014-04-08T19:30:18.409Z · LW(p) · GW(p)

But that's unlikely to happen again in a globalised world.

I don't see why not. Besides, in this context we're not talking about world domination; we're talking about assimilating backward societies and spreading to them the light of technological progress :-D

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T19:35:22.205Z · LW(p) · GW(p)

Do they see it that way? Did anyone ask them?

And the reason why not is that there are ways of telling what other people are up to: it's called espionage.

comment by TheAncientGeek · 2014-04-08T08:44:17.091Z · LW(p) · GW(p)

There's an object-level argument against (some kinds of) socialism, namely that they didn't work, and there's a meta-level argument against social engineering in general: that societies are too complex and organic for large-scale artificial changes to have predictable effects.

Replies from: None
comment by [deleted] · 2014-04-08T11:07:51.025Z · LW(p) · GW(p)

there's a meta-level argument against social engineering in general: that societies are too complex and organic for large-scale artificial changes to have predictable effects.

That's called an Argument From Ignorance. All societies consist mostly, sometimes even exclusively, of large-scale artificial changes. Did you think the cubicle was your ancestral environment?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T11:46:41.722Z · LW(p) · GW(p)

I was using "artificial" to mean top-down or socially engineered.

Replies from: None
comment by [deleted] · 2014-04-08T13:05:44.514Z · LW(p) · GW(p)

Good! So was I. The notion that societies evolve "bottom-up" - by any kind of general will rather than by the fiat and imposition of the powerful - is complete and total mythology.

Replies from: Lumifer
comment by Lumifer · 2014-04-08T17:46:08.383Z · LW(p) · GW(p)

The notion that societies evolve "bottom-up" - by any kind of general will rather than by the fiat and imposition of the powerful - is complete and total mythology.

So tell me, which fiat imposed the collapse of the USSR?

Replies from: None
comment by [deleted] · 2014-04-08T18:26:19.957Z · LW(p) · GW(p)

The committees of the Communist Party, from what I know of history. Who were, you know, the powerful in the USSR.

If you're about to "parry" this sentence into saying, "Haha! Look what happens when you implement leftist ideas!", doing so will only prove that you're not even attempting to thoroughly consider what I am saying, but are instead just reaching for the first ideological weapon you can get against the Evil Threat of... whatever it is people of your ideological stripe think is coming to get you.

Replies from: Lumifer
comment by Lumifer · 2014-04-08T18:45:44.274Z · LW(p) · GW(p)

So, the USSR imploded because the "committees of the Communist Party" willed it to be so..?

I am not sure we live in the same reality.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-08T19:32:15.568Z · LW(p) · GW(p)

I find this exchange strange. My take is that Gorbachev attempted limited reforms, from the top down, which opened a floodgate of protest, from the bottom up.