2014 Survey of Effective Altruists

post by tog · 2014-05-05T02:32:28.735Z · LW · GW · Legacy · 148 comments

Contents

  Other surveys' results, and predictions for this one

I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice) which several collaborators and I have put a great deal of work into; we'd be very grateful if you took it. I'll offer $250 of my own money to one participant.

Take the survey at http://survey.effectivealtruismhub.com/

The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.

Anonymised results will be shared publicly and will not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.

I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.

Other surveys' results, and predictions for this one

Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated 10% on average, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).
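
(For anyone curious how figures like those are typically computed, here is a minimal sketch of calculating Pearson correlations against time spent on EA activities. The column names and numbers below are made up for illustration and are not the actual interview data.)

    # Illustrative only: Pearson correlations of the kind reported above.
    # Column names and values are hypothetical, not the actual survey data.
    import pandas as pd

    df = pd.DataFrame({
        "hours_on_ea_per_week":   [2, 5, 10, 1, 8, 4],
        "percent_donated":        [5, 10, 20, 2, 15, 10],
        "parent_volunteer_hours": [0, 3, 6, 1, 4, 2],
        "percent_friends_ea":     [10, 30, 60, 5, 40, 20],
    })

    # Pairwise Pearson correlation of each column with time spent on EA
    print(df.corr(method="pearson")["hours_on_ea_per_week"].round(2))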

80,000 Hours also released a questionnaire and, while this was mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia, 9% each for finance and software engineering, and 8% each for medicine and non-profits.

I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.

148 comments

Comments sorted by top scores.

comment by Larks · 2014-05-03T16:23:41.669Z · LW(p) · GW(p)

There's a question about other social movements people might associate themselves with. How was the list of suggestions created? At present, the list is very left-wing:

  • Animal rights
  • Environmentalist
  • Feminist
  • LGBTQ
  • Rationalist/LessWrong
  • Transhumanist
  • Skeptic/atheist
  • Other:

Ordinarily this would only be a small problem, but then you ask people about their political views after you've primed them with left-wing examples.

Replies from: ChristianKl, Lumifer, Drayin, tog
comment by ChristianKl · 2014-05-03T19:31:17.175Z · LW(p) · GW(p)

Which social movements would you add to that list?

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-05-09T17:23:28.680Z · LW(p) · GW(p)

Tea Party?

Replies from: Nornagest, tog
comment by Nornagest · 2014-05-09T18:32:23.384Z · LW(p) · GW(p)

Evangelical Christianity has aspects of a social movement, but I doubt we'd turn up any evangelicals here. Not that this is necessarily a problem if the goal is to avoid Blue/Green priming.

If we're just looking for stuff that isn't stereotypically left-wing, men's rights and free software also come to mind.

Replies from: nda, tog
comment by nda · 2014-05-11T07:18:29.892Z · LW(p) · GW(p)

free software

Agreed. Open source was at least part of my fill-in for several questions. Edit: to expound: there's just so much inherent value in free software – even the smallest packages or simplest libraries have created immeasurable value for all of us – and as technology progresses I really see free software as one of our greatest collective assets.

comment by tog · 2014-05-11T17:28:25.326Z · LW(p) · GW(p)

Evangelical Christianity is a good idea, I'll add it. 'Free software' might be reasonably common, and is an audience EAs could target. I'll look at a list of common write-ins.

comment by tog · 2014-05-11T17:27:12.294Z · LW(p) · GW(p)

Yes, that does count as a movement; I'll add it as a clear signal that we're not assuming people are left-wing (in this year's survey, when I get time to tweak my Perl scripts).

comment by Lumifer · 2014-05-03T17:47:06.466Z · LW(p) · GW(p)

then you ask people about their political views after you've primed them with left-wing examples.

Not to mention all the implications of right-wing politics not making it to the list at all. "No, we don't think anyone can possibly believe that... What are you, a freak?" :-/

Replies from: tog
comment by tog · 2014-05-03T18:55:24.148Z · LW(p) · GW(p)

I can assure you I didn't think that - it was rather that I didn't think of any right-wing (or other non-left-wing) movements that significant numbers might plausibly belong to. But I definitely made a mistake in not trying harder to think of them. If you can suggest some, I'll add them.

My predictions (linked in this comment) did include few conservatives or libertarians. The set of EAs whose views I know contains a few libertarians and no conservatives. However, that set contains disproportionately many elite university students, an unrepresentatively lefty group.

I was surprised that there weren't a few more libertarians and conservatives in the LessWrong census.

comment by Drayin · 2014-05-03T19:23:44.037Z · LW(p) · GW(p)

I see Larks' point.

The movement data is action-relevant for me, as I'm spending several hours a week going to meetup groups purely to recruit GiveWell donors. I've found skeptic/atheist groups particularly fertile, and lefty political groups (and 'A' rather than 'E' groups generally) the opposite. I haven't tried any conservative or libertarian groups yet.

Replies from: Eugine_Nier, Eugine_Nier
comment by Eugine_Nier · 2014-05-11T21:08:17.051Z · LW(p) · GW(p)

I haven't tried any conservative or libertarian groups yet.

Given that conservatives (I believe especially evangelical groups) donate the most to charity, it's probably worth checking them out.

My understanding is that their current approach to the inefficient charity problem involves organizing trips to the countries in question and having members personally help the charity. While this is clearly not the most efficient approach, it does help with the "most of the money winding up in the hands of middlemen" problem while also generating warm fuzzies.

comment by Eugine_Nier · 2014-05-03T19:38:30.927Z · LW(p) · GW(p)

and lefty political groups (and 'A' rather than 'E' groups generally) the opposite.

That's because lefty and 'A' groups are mostly about signalling one's virtue, thus someone who shows up and starts telling them how none of the 'virtuous' things they've been doing are actually helping people is most certainly not welcome.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-05-06T11:04:31.831Z · LW(p) · GW(p)

Uhm, upvoted the comment, but don't completely agree with the linked article.

It suggests that when fans of something worry about it becoming too popular, they are objecting to the loss of their positional good. That's just one possible explanation. Sometimes the fact that X becomes widely popular changes X, and there are people who genuinely preferred the original version. -- As a simple example, imagine that tomorrow a million new readers came to LW; would that be a good thing or a bad thing? It depends on what happens to LW. If the quality of debate remains the same, then it's obviously a huge win, and anyone who resents it is guilty of caring too much about their positional good. On the other hand, the new people could easily shift LW towards popular (in the sense of: frequent in the population) material, so we would get a lot of nonsense sprinkled with LW buzzwords.

I can imagine leftist groups believing they are working "more meta than thou": solving a problem which, taken in isolation, doesn't seem so important (compared with the causes effective altruists care about), but which would start a huge cascade of improvement afterwards (their model of the world says so, your model doesn't). Making mosquito nets instead is not an improvement according to their model.

Replies from: Gunnar_Zarncke, Eugine_Nier
comment by Gunnar_Zarncke · 2014-05-09T15:53:16.931Z · LW(p) · GW(p)

As a simple example, imagine that tomorrow a million new readers came to LW

The results can already be seen in the Census Survey: there is a small trend toward the mean. The smartest move on to greener pastures.

Replies from: Kawoomba
comment by Kawoomba · 2014-05-09T16:10:30.367Z · LW(p) · GW(p)

Moo?

Edit: Stupid comment, too much reddit today. Infantile regression. I apologize. I disagree with the parent comment ("small trend toward the mean in the census = smartest move on to greener pastures") and meant to poke fun at it by showing the absurd fringe case; only dumb cows remaining (which I'm not, hence my disagreement would be conveyed). Convoluted. Sorry.

Replies from: timujin, Gunnar_Zarncke
comment by timujin · 2014-05-09T16:25:40.853Z · LW(p) · GW(p)

Saw this in recent comments and thought how curious it was that there might be a context in which this comment is not silly. I was wrong. What did you mean, again?

comment by Gunnar_Zarncke · 2014-05-16T12:22:53.312Z · LW(p) · GW(p)

"small trend toward the mean in the census = smartest move on to greener pastures"

I see those two points as independently supported by the survey, and not as implying each other in any obvious way.

comment by Eugine_Nier · 2014-05-11T21:05:21.917Z · LW(p) · GW(p)

It suggests that when fans of something worry about it becoming too popular, they are objecting to the loss of their positional good. That's just one possible explanation. Sometimes the fact that X becomes widely popular changes X, and there are people who genuinely preferred the original version.

That doesn't explain why the new X looks much more like an extreme version of the popular version of X rather than the original X.

comment by tog · 2014-05-03T17:44:57.334Z · LW(p) · GW(p)

Those are both good points. Can you suggest less left-wing movements with which people might identify, or that I could now add to the list just to counteract the priming? My impression is that conservatives and centrists are less 'movementy'!

How strong do you think the priming effect will be, with this audience? Is there literature on that? My Google-fu is failing me.

comment by Said Achmiz (SaidAchmiz) · 2014-05-02T17:13:51.178Z · LW(p) · GW(p)

I've now checked out the survey, and have a couple of comments (which I put into the comments field and am reposting here). #1 is important, #2 less so:

  1. On moral philosophy: "Consequentialist/utilitarian" should be broken up into something like "Utilitarian" and "Other consequentialist (not utilitarian)", because I am a consequentialist and (probably) not a utilitarian, and that disagreement is one of my main points of contention with the EA movement.

  2. I had no idea how to answer the "political views" question. Are these positions ("left", "centre", etc.) supposed to be on the American (U.S.) political spectrum? That'd be my default assumption, but the British/Canadian spelling suggests otherwise... in any case, at least offer as many options as e.g. the Lesswrong survey did.

Replies from: peter_hurford, None, thebestwecan
comment by Peter Wildeford (peter_hurford) · 2014-05-02T18:35:44.670Z · LW(p) · GW(p)

Those are good points. It would confound things too much to change midstream, but now we'll know better for next year.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2014-05-05T04:07:59.055Z · LW(p) · GW(p)

I'd rather see 'consequentialist' supplemented or replaced by specific questions that get at substantive ethical or meta-ethical disputes in EA and philosophy. 'Utilitarian' and 'deontologist' mean lots of different things to different people, and on their strictest definitions they don't entail a lot of their most interesting or widely cited ideas. Perhaps have an exploratory question one year asking non-utilitarians to write in their main objection to utilitarianism, then convert that into a series of questions the following year.

Replies from: peter_hurford, SaidAchmiz, tog
comment by Peter Wildeford (peter_hurford) · 2014-05-05T17:50:36.158Z · LW(p) · GW(p)

This was something I suggested to Tom because I'd be interested too. But ultimately we thought that only a small group of EAs would really have substantive ethical opinions, and we wanted to trim things for survey length. We added a box asking for clarifications at the end of the survey to provide more of an outlet for this.

comment by Said Achmiz (SaidAchmiz) · 2014-05-05T05:45:51.420Z · LW(p) · GW(p)

One of the main objections to utilitarianism, it seems to me, is skepticism about the possibility (or even coherence of the notion) of aggregating utility across individuals. That's one of my main objections, at any rate.

Skepticism about the applicability of the VNM theorem to human preferences is another issue, though that one might be less widespread.

Edit: The SEP describes classic utilitarianism as actual, direct, evaluative, hedonistic, maximizing, aggregative (specifically, total), universal, equal-consideration, agent-neutral consequentialism. I have definite issues with the "actual", "direct", "hedonistic", "aggregative", "total", and "equal-consideration" parts of that. (Though I expect that my issues with "actual" will be shared by a significant portion of those who consider themselves utilitarians here, and my issues with "hedonistic" and "direct" may be as well. That leaves "aggregative"+"total", and "equal-consideration", as the two aspects most likely to be sources of philosophical conflict.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-05T11:08:44.524Z · LW(p) · GW(p)

Those sound like objections to preference utilitarianism but not hedonistic utilitarianism. Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle. And the applicability of the VNM theorem to human preferences becomes irrelevant if you're not interested in preferences in the first place.

Replies from: SaidAchmiz, Vaniver
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T11:31:57.659Z · LW(p) · GW(p)

Yes, true enough[1]; I did not properly separate those objections in my comment. To elaborate:

I object to hedonistic utilitarianism on the grounds that it clearly and grossly fails to capture my moral intuitions or those of anyone else whom I consider not to be evading the question. A full takedown of the "hedonistic" part of "hedonistic utilitarianism" is basically (at least) all of Eliezer's posts about the complexity of value and so forth, and I won't rehash it here.

To be honest, hedonistic utilitarianism seems to me to be so obviously wrong that I'm not even all that interested in having this sort of moral philosophy debate with an effective altruist (or anyone else) who holds such a view. I mean, to start with, my hypothetical interlocutor would have to rebut all the objections raised to hedonistic utilitarianism over the centuries since it's been articulated, including, but not limited to, the aforementioned Lesswrong material.

I object to preference utilitarianism because of the "aggregation of utility" and "possibility of constructing a utility function" issues[2]. I think this is the more interesting objection.

[1] I'm not sure "intensity of the positive and negative components of an experience" is a coherent notion. There may not be a single quantity like that to measure. And even if we can measure something which we think qualifies for the title, it may be measurable only in some more-or-less absolute terms, while leaving open the question of how this hypothetical measured quantity matches up with anything like "utility to this particular experiencer". But, for the sake of the argument, I'm willing to grant that such a quantity can indeed be usefully measured, because this is certainly not my true rejection.

[2] These are my objections to the "preference" component of preference utilitarianism; my objection to classical utilitarianism also includes objections to other components, which I have enumerated in the grandparent.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-05T11:44:32.363Z · LW(p) · GW(p)

Two replies:

1) Even if hedonistic utilitarianism were ultimately wrong as a full description of what a person values, "maximize pleasure while minimizing suffering" can still be a useful heuristic to follow. Yes, following that heuristic to its logical conclusion would mean forcibly rewiring everyone's brains, but that doesn't need to be a problem for as long as forcibly rewiring people's brains isn't a realistic option. HU may still be the best approximation of a person's values in the context of today's world, even if it isn't the best description overall.

2) The arguments on complexity of value and so on establish that the average person's values aren't correctly described by HU. This still leaves open the possibility of someone only approving of those of their behaviors that serve to promote HU, so there may definitely be individual people who accept HU, due to not sharing the moral intuitions which motivate the objections to it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T12:04:16.967Z · LW(p) · GW(p)

On 1): I am skeptical of replies to the effect that "yes, well, X might not be quite right, but it's a useful heuristic, therefore I will go on acting as if X is right". For one thing, a person who makes such a reply usually goes right back to saying "X is right!" (sans qualifiers) as soon as the current conversation ends. Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly, such that it can be examined and argued for; a claim of this importance shouldn't come up only in such tangential discussion branches.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

On 2): I think there's two issues here. There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge. What does this hypothetical person believe? What moral intuitions do they have? What exactly does it mean to "promote" hedonistic utilitarianism?

Replies from: tog, Kaj_Sotala
comment by tog · 2014-05-05T20:09:27.566Z · LW(p) · GW(p)

There could indeed be people who accept HU because that's what correctly describes their moral intuitions. (Though I should certainly hope they do not think it proper to impose that moral philosophy on me, or on anyone else who doesn't subscribe to HU!)

Why would this be improper? Note that it doesn't follow from any meta-ethical position.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T20:20:11.393Z · LW(p) · GW(p)

If you say "all that matters is pain and pleasure", and I say "no! I care about other things!", and you're like "nope, not listening. PAIN AND PLEASURE ARE THE ONLY THINGS", and then proceed to enact policies which minimize pain and maximize pleasure, without regard for any of the other things that I care about, and all the while I'm telling you that no, I care about these other things! Stop ignoring them! Other things matter to me! but you're not listening because you've decided that only pain and pleasure can possibly matter to anyone, despite my protestations otherwise...

... well, I hope you can see how that would bother me.

It's not just a matter of us caring about different things. If it were only that, we could acknowledge the fact, and proceed to some sort of compromise. Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided! Clearly.

Replies from: tog, Kaj_Sotala
comment by tog · 2014-05-06T23:27:50.469Z · LW(p) · GW(p)

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure.

They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.

[Description of situation] ... well, I hope you can see how that would bother me.

Here are 3 non-exhaustive ways in which the situation you described could be bothersome:

(i) If your first order ethical theory (as opposed to your meta-ethics), perhaps combined with very plausible facts about human nature, requires otherwise. For instance if it speaks in favour of toleration or liberty here.

(ii) If you're a cognitivist of the sort who thinks she could be wrong, it could increase your credence that you're wrong.

(iii) If you'd at least on reflection give weight to the evident distress SaidAchmiz feels in this scenario, as most HUs would.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-07T01:08:43.491Z · LW(p) · GW(p)

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure.

They may think it's incorrect if they're realists, or cognitivists of some other form. But this has nothing to do with their being HUs, only with their being cognitivists.

No, I don't think this is right. I think you (and Kaj_Sotala) are confusing these two questions:

  1. Is it correct to hold an ethical view that is something other than hedonistic utilitarianism?
  2. Does it make any sense to intrinsically value anything other than pleasure, or intrinsically disvalue things other than pain?

#1 is a meta-ethical question; moral realism or cognitivism may lead you to answer "no", if you're a hedonistic utilitarian. #2 is an ethical question; it's about the content of hedonistic utilitarianism.

If I intrinsically care about, say, freedom, that's not an ethical claim. It's just a preference. "Humans may have preferences about things other than pain/pleasure, and those preferences are morally important" is an ethical claim which I might formulate, about that preference that I have.

Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.

Moral realism (which, as blacktrance correctly notes, is implied by any utilitarianism) may lead a hedonistic utilitarian to say that my aforementioned ethical claim is incorrect.

As for your scenarios, I'm not sure what you meant by listing them. My point was that my scenario, which describes a situation involving a hypothetical me, Said Achmiz, would be bothersome to me, Said Achmiz. Is it really not clear why it would be?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T07:38:35.167Z · LW(p) · GW(p)

If I intrinsically care about, say, freedom, that's not an ethical claim. It's just a preference. [...]

Hedonistic utilitarianism tells me that my aforementioned preference is incoherent or mistaken, and that in fact I do not have any preferences (or any preferences that are morally important or worth caring about) other than preferences about pleasure/pain.

Ethical subjectivism (which I subscribe to) would say that "ethical claims" are just a specific subset of our preferences; indeed, I'm rather skeptical of the notion of there being a distinction between ethical claims and preferences in the first place. But HU wouldn't necessarily say that someone's preference for something else than pleasure or pain would be mistaken - if it's interpreted within a subjectivist framework, HU is just a description of preferences that are different. See my response to blacktrance.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-07T08:18:41.600Z · LW(p) · GW(p)

But HU wouldn't necessarily say that someone's preference for something else than pleasure or pain would be mistaken - if it's interpreted within a subjectivist framework, HU is just a description of preferences that are different.

I really don't think that this is correct. If this were true, first of all, hedonistic utilitarianism would simply reduce to preference utilitarianism. In actual fact, neither view is merely about one's own terminal values.

If someone, personally, cares only about pain and pleasure, but acknowledges that other people may have other things as terminal values, and thinks that The Good lies in satisfying everyone's preferences maximally — which, for themselves, means maximizing pleasure and minimizing pain, and for other people may mean other things — then that person is not a hedonistic utilitarian. They are a preference utilitarian. Referring to them as an HU is simply not correct, because that's not how the term is used in the philosophical literature.

On the other hand, if someone cares only about pain and pleasure — both theirs and other peoples' — and would prefer that everyone's pleasure be maximized and everyone's pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there's no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.

As for ethical subjectivism — uh, I don't think that's an actual thing. I'd not heard of anything by that name until today. I don't like going by wikipedia's definitions of philosophical principles, so I tried tracking it down to a source, such as perhaps a major philosopher espousing the view or at least describing it coherently. No such luck. Take a look at that list of references on its wikipedia page; two are to a single book (written in 1959 by some guy I've never heard of — have you? — and the shortness of whose wikipedia page suggests that he wasn't anyone interesting), and one is to a barely-related page that mentions the thing once, in passing, by a different name. I'm not convinced. As best I can tell, it's a label that some modern-day historians of philosophy have used to describe... a not-quite-consistent family of views. (Divine command theory, for one.)

But let's attempt to take it at face value. You say:

Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.

Very well. Are their attitudes correct, do they think? If they say there's no fact of the matter about that, then they're not a utilitarian. "Utilitarianism" is a quite established term in the literature. You can't just apply it to any old thing.

Of course, this is Lesswrong; we don't argue about definitions; we're interested in what people actually think. However in this case I think getting our terms straight is important, for two reasons:

  1. When most people say they're utilitarians, they mean it in the usual sense, I think. So to understand what's going on in these discussions, and in the heads of the people we're talking to, we need to know what is the usual sense.

  2. If you hold some view which is not one of the usual views with commonly-known terms, you shouldn't call it by one of the commonly-known terms, because then I won't have any idea what you're talking about and we'll keep getting into comment threads like this one.

Replies from: Kaj_Sotala, Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T10:36:18.729Z · LW(p) · GW(p)

On the other hand, if someone cares only about pain and pleasure — both theirs and other peoples' — and would prefer that everyone's pleasure be maximized and everyone's pain be minimized; but this person is not a moral realist, and has no opinion on what constitutes The Good or thinks there's no fact of the matter about whether an act is right or wrong; well, then this person is not a utilitarian at all. Again, describing this person as a hedonistic or any other kind of utilitarian completely fails to match up with how the term is used in the philosophical literature.

You may be right to say that my use of "utilitarian" is different from how it's conventionally used in the literature

... though, I just looked at the SEP entry on Consequentialism, and I note that aside for the title of one book in the bibliography, nowhere in the article is the word "realism" even mentioned. Nor does there seem to be an entry in the list of claims making up classic utilitarianism that would seem to require moral realism. I guess you could kind of interpret one of these three conditions as requiring moral realism:

Universal Consequentialism = moral rightness depends on the consequences for all people or sentient beings (as opposed to only the individual agent, members of the individual's society, present people, or any other limited group).

Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).

Agent-neutrality = whether some consequences are better than others does not depend on whether the consequences are evaluated from the perspective of the agent (as opposed to an observer).

... but it doesn't seem obvious to me why someone who was an ethical subjectivist couldn't say that "I'm a classical utilitarian, in that (among other things) the best description of my ethical system is that I think that the goodness of an action should be determined based on how it affects all sentient beings, that benefits to one person matter just as much as similar benefits to others, and that the perspective of the people evaluating the consequences doesn't matter. Though of course others could have ethical systems that were not well described by these items, and that wouldn't make them wrong".

Or maybe the important part in your comment was the part "...but this person is not a moral realist, and has no opinion on what constitutes The Good"? But a subjectivist doesn't say that he has no opinion on what constitutes The Good: he definitely has an opinion, and there may clearly be a right and wrong answer with regard to the kind of actions that are implied by his personal moral system; it's just that the thing that constitutes The Good will be different for people with different moral systems.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-05-07T14:05:28.118Z · LW(p) · GW(p)

Consequentialism supplies a realistic ontology, since its goods are facts about the real world, and utilitarianism supplies an objective epistemology, since different utilitarians of the same stripe can converge. That adds up to some of the ingredients of realism, but not all of them. What is specifically lacking is a justification of consequentialist ends as being objectively good, and not just subjectively desirable.

Replies from: tog
comment by tog · 2014-05-08T17:24:10.047Z · LW(p) · GW(p)

Consequentialism supplies a realistic ontology, since its goods are facts about the real world,

For this to make it realist, it would also have to be mind-independent that those facts have value. Even subjectivists typically value facts about the external world (e.g. their pleasure).

comment by Kaj_Sotala · 2014-05-07T09:07:50.906Z · LW(p) · GW(p)

Ethical subjectivism is also discussed in the Stanford Encyclopedia of Philosophy.

(I like this quote from that article, btw: "So many debates in philosophy revolve around the issue of objectivity versus subjectivity that one may be forgiven for assuming that someone somewhere understands this distinction.")

You may be right to say that my use of "utilitarian" is different from how it's conventionally used in the literature; I'm pretty unfamiliar with the actual ethical literature. But if we have people who have the attitude of "I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I'm a moral realist" and people who have the attitude of "I want to take the kinds of actions that maximally increase pleasure and maximally reduce suffering and I'm a moral non-realist", then it feels a little odd to have different terms for them, given that they probably have more in common with each other (with regard to the actions that they take and the views that they hold) than e.g. two people who are both moral realists but differ on consequentialism vs. deontology.

At least in a context where we are trying to categorize people into different camps based on what they think we should actually do, it would seem to make sense if we just called both the moral realist and moral non-realist "utilitarians", if they both fit the description of a utilitarian otherwise.

comment by Kaj_Sotala · 2014-05-06T05:13:48.253Z · LW(p) · GW(p)

Hedonistic utilitarians, however, do not acknowledge that it's possible, or that it's valid, to care about things that are not pain or pleasure. All these people who claim to care about all sorts of other things must be misguided!

I don't think that hedonistic utilitarianism necessarily implies moral realism. Some HUs will certainly tell you that the people who morally disagree with them are misguided, but I don't see why the proportion of HUs who think so (vs. the proportion of HUs who think that you are simply caring about different things) would need to be any different than it would be among the adherents of any other ethical position.

Maybe you meant your comment to refer specifically to the kinds of HUs who would impose their position on you, but even then the moral realism doesn't follow. You can want to impose your values on others despite thinking that values are just questions of opinion. For instance, there are things that I consider basic human rights and I want to impose the requirement to respect them on every member of every society, even though there are people who would disagree with that requirement. I don't think that the people who disagree are misguided in any sense, I just think that they value different things.

Replies from: SaidAchmiz, blacktrance
comment by Said Achmiz (SaidAchmiz) · 2014-05-07T01:20:13.030Z · LW(p) · GW(p)

I agree with blacktrance's reply to you, and also see my reply to tog in a different subthread for some commentary. However, I'm sufficiently unsure of what you're saying that I can't be certain your comment is fully answered by either of those things. For example:

HUs who think that you are simply caring about different things

If you [the hypothetical you] think that it's possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I'm not quite sure how you can remain a hedonistic utilitarian. You'd have to say something like: "Yes, many people intrinsically value all sorts of things, but those preferences are morally irrelevant, and it is ok to frustrate those preferences as much as necessary, in order to minimize pain and maximize pleasure." You would, in other words, have to endorse a world where all the things that people value are mercilessly destroyed, and the things they most abhor and despise come to pass, if only this world had the most pleasure and least pain.

Now, granted, people sometimes endorse the strangest things, and I wouldn't even be surprised to find someone on Lesswrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.

If I've misinterpreted your comment and thereby failed to address your points, apologies; please clarify.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T08:50:29.485Z · LW(p) · GW(p)

If you [the hypothetical you] think that it's possible to care (intrinsically, i.e. terminally) about things other than pain and pleasure, then I'm not quite sure how you can remain a hedonistic utilitarian.

Well, if you're really curious about how one could be a hedonistic utilitarian while also thinking that it's possible to care intrinsically about things other than pain and pleasure, one could think something like:

"So there's this confusing concept called 'preferences' that seems to be a general term for all kinds of things that affect our behavior, or mental states, or both. Probably not all the things that affect our behavior are morally important: for instance, a reflex action is a thing in a person's nervous system that causes them to act in a certain way in certain situations, so you could kind of call that a preference to act in such a way in such a situation, but it still doesn't seem like a morally important one.

"So what does make a preference morally important? If we define a preference as 'an internal disposition that affects the choices that you make', it seems like there would exist two kinds of preferences. First there are the ones that just cause a person to do things, but which don't necessarily cause any feelings of pleasure or pain. Reflexes and automated habits, for instance. These don't feel like they'd be worth moral consideration any more than the automatic decisions made by a computer program would.

"But then there's the second category of preferences, ones that cause pleasure when they are satisfied, suffering when they are frustrated, or both. It feel like pleasure is a good thing and suffering is a bad thing, so that makes it good to satisfy the kinds of preferences that are produce pleasure when satisfied, as well as bad to frustrate the kinds of preferences that cause suffering when frustrated. Aha! Now I seem to have found a reasonable guideline for the kinds of preferences that I should care about. And of course this goes for higher-order preferences as well: if someone cares about X, then trying to change that preference would be a bad thing if they had a preference to continue caring about X, such that they would feel bad if someone tried to change their caring about X.

"And of course people can have various intrinsic preferences for things, which can mean that they do things even though that doesn't produce them any suffering or pleasure. Or it can mean that doing something gives them pleasure or lets them avoid suffering by itself, even when doing that something doesn't lead to any other consequence. The first kind of intrinsic preference I already concluded was morally irrelevant; the second kind is worth respecting, again because violating it would cause suffering, or reduce pleasure, or both. And I get tired of saying something clumsy like 'increasing pleasure and decreasing suffering' all the time, so let's just call that 'increasing well-being' for short.

"Now unfortunately people have lots of different intrinsic preferences, and they often conflict. We can't satisfy them all, as nice as it would be, so I have to choose my side. Since I chose my favored preferences on the basis that pleasure is good and suffering is bad, it would make sense to side with the preferences that, in the long term, produce the greatest amount of well-being in the world. For instance, some people may want the freedom to lie and cheat and murder, whereas other people want to have a peaceful and well-organized society. I think the preferences for living in peace will lead to greater well-being in the long term, so I will side with them, even if that means that the preferences of the sociopaths and murderers will be frustrated.

"Now there's also this kind of inconvenient issue that if we rewire people's brains so that they'll always experience the maximal amount of pleasure, then that will produce more well-being in the long run, even if those people don't currently want to have their brains rewired. I previously concluded that I should side with kinds of preferences that produce the greatest amount of well-being in the world, and the preference of 'let's rewire everyone's brains' does seem to produce by far the greatest amount of well-being in the world. So I should side with that preference, even though it goes against the intrinsic preferences of a lot of other people, but so did the decision to impose a lawful and peaceful society on the sociopaths and murderers, so that's okay by me.

"Of course, other people may disagree, since they care about different things than pain and pleasure. And they're not any more or less right - they just have different criteria for what counts as a moral action. But if it's either them imposing their worldview on me, or me imposing my worldview on them, well, I'd rather have it be me imposing mine on them."

I wouldn't even be surprised to find someone on Lesswrong who held such a view, but then again I never claimed otherwise. What I said was that I should hope those people do not impose such a worldview on me.

Right, I wasn't objecting to your statement of not wanting to have such a worldview imposed on you. I was only objecting to the statement that hedonistic utilitarians would necessarily have to think that others were misguided in some sense.

comment by blacktrance · 2014-05-06T23:44:03.987Z · LW(p) · GW(p)

Any form of utilitarianism implies moral realism, as utilitarianism is a normative ethical theory and normative ethical theories presuppose moral realism.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T07:35:24.633Z · LW(p) · GW(p)

I feel that this discussion is rapidly descending into a debate over definitions, but as a counter-example, take ethical subjectivism, which is a form of moral non-realism and which Wikipedia defines as claiming that:

  1. Ethical sentences express propositions.
  2. Some such propositions are true.
  3. Those propositions are about the attitudes of people.

Someone could be an ethical subjectivist and say that utilitarianism is the theory that best describes their particular attitudes, or at least that subset of their attitudes that they endorse.

Replies from: blacktrance, TheAncientGeek
comment by blacktrance · 2014-05-07T15:24:56.066Z · LW(p) · GW(p)

Someone could be an ethical subjectivist and want to maximize world utility, but such a person would not be a utilitarian, because utilitarianism holds that other people should maximize world utility. If you merely say "I want to maximize world utility and others to do the same", that is not utilitarianism - a utilitarian would say that you ought to maximize world utility, even if you don't want to, and it's not a matter of attitudes. Yes, this is arguing over definitions to some extent, but it's important because I often see this kind of confusion about utilitarianism on LW.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T15:51:15.058Z · LW(p) · GW(p)

Could you provide a reference for that? At least the SEP entry on the topic doesn't clearly state this. I'm also unsure of what difference this makes in practice - I guess we could come up with a new word for all the people who are both moral antirealists and utilitarians-aside-from-being-moral-antirealists, but I'm not sure if the difference in their behavior and beliefs is large enough for that to be worth it.

Replies from: TheAncientGeek, blacktrance
comment by TheAncientGeek · 2014-05-07T20:30:23.678Z · LW(p) · GW(p)

Non-egoistic subjectivists?

comment by blacktrance · 2014-05-07T19:48:26.989Z · LW(p) · GW(p)

The SEP entry for consequentialism says it "is the view that normative properties depend only on consequences", implying a belief in normative properties, which means moral realism.

If you want to describe people's actions, a utilitarian and a world-utility-maximizing non-realist would act similarly, but there would be differences in attitude: a utilitarian would say and feel like he is doing the morally right thing and those who disagree with him are in error, whereas the non-realist would merely feel like he is doing what he wants and that there is nothing special about wanting to maximize world utility - to him, it's just another preference, like collecting stamps or eating ice cream.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T20:36:38.850Z · LW(p) · GW(p)

This is getting way too much into a debate over definitions so I'll stop after this comment, but I'll just point out that, among professional philosophers, there is no correlation between endorsing consequentialism and endorsing moral realism.

Replies from: blacktrance
comment by blacktrance · 2014-05-07T20:53:27.004Z · LW(p) · GW(p)

A non-consequentialist could be a moral realist as well, such as if they were a deontologist, so it's not a good measurement.

Also, consequentialism and moral realism aren't always well-defined terms.

Edit: That survey's results are strange. Twenty people answered that they're moral realists but non-cognitivists, though moral realism is necessarily cognitivist.

comment by TheAncientGeek · 2014-05-07T20:27:45.574Z · LW(p) · GW(p)

That doesn't mean utilitarianism is subjective. Rather, it means any subjective idea could correspond to objective truth.

comment by Kaj_Sotala · 2014-05-05T15:15:56.617Z · LW(p) · GW(p)

Let's get clear on what we actually believe, I generally think; once we've firmly established that, we can look for maximally effective implementations.

For another thing, HU may be the best approximation etc. etc., but that's a claim that at least should be made explicitly

I agree that it would often be good to be clearer about these points.

For a third thing, what happens when forcibly rewiring people's brains becomes a realistic option?

At that point the people who consider themselves hedonistic utilitarians might come up with a theory that says that forcible wireheading is wrong and switch to calling themselves supporters of that theory. Or they could go on calling themselves HUs despite not forcibly wireheading anyone, in the same way that many people call themselves utilitarians today despite not actually giving most of their income away. Or some of them could decide to start working towards efforts to forcibly wirehead everyone, in which case they'd become the kinds of people described by my reply 2).

"Only approving of those behaviors that serve to promote HU" is, I think, a separate thing. Or at least, I'd need to see the concept expanded a bit more before I could judge.

By this, I meant to say "only approve of whatever course of action HU says is the best one".

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T21:09:23.834Z · LW(p) · GW(p)

At that point ... [various possibilities]

Yeah, I meant that as a normative "what then", not an empirical one. I agree that what you describe are plausible scenarios.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-06T05:03:22.372Z · LW(p) · GW(p)

In that case, I'm unsure of what kind of an answer you were expecting (unless the "what then" was meant as a rhetorical question, but even then I'm slightly unsure of what point it was making).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-06T20:00:35.334Z · LW(p) · GW(p)

Yes, the "what then" was rhetorical. If I had to express my point non-rhetorically, it'd be something like this:

If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn't ethical in the first place. "This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]" is, after all, a common and quite justified criticism of ethical frameworks.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-07T08:03:13.004Z · LW(p) · GW(p)

If you take a position which gives ethically correct results only until such time as some (reasonably plausible) scenario comes to pass, then maybe your position isn't ethical in the first place. "This ethical framework gives nonsensical or monstrous results in edge cases [of varying degrees of edge-ness]" is, after all, a common and quite justified criticism of ethical frameworks.

It is a point against the framework, certainly. But so far nobody has developed an ethical framework that would have no problems at all, so at the moment we can only choose the framework that's the least bad.

(Assuming that we wish to choose one in the first place, of course - I do think that there is merit in just accepting that they're all flawed and then not choosing to endorse any single one.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-07T08:21:55.873Z · LW(p) · GW(p)

(Assuming that we wish to choose one in the first place, of course - I do think that there is merit in just accepting that they're all flawed and then not choosing to endorse any single one.)

Well, that's been my policy so far, certainly. Some are worse than others, though. "This ethical framework breaks in catastrophic, horrifying fashion, creating an instant dystopia, as soon as we can rewire people's brains" is pretty darn bad.

Replies from: Fronken
comment by Fronken · 2014-05-07T09:09:31.991Z · LW(p) · GW(p)

... can't we rewire brains right now? We just ... don't.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-07T09:20:18.654Z · LW(p) · GW(p)

Well, we must not be hedonistic utilitarians then, right? Because if we were, and we could, we would.

Edit: Also, what the heck are you talking about?

Replies from: Fronken
comment by Fronken · 2014-07-03T14:54:54.364Z · LW(p) · GW(p)

Also, what the heck are you talking about?

Wireheading. The term is not a metaphor, and it's not a hypothetical. You can literally stick a wire into someone's pleasure centers and activate them, using only non-groundbreaking neuroscience.

It's been tested on humans, but AFAIK no-one has ever felt compelled to go any further.

(Yeah, seems like it might be evidence. But then, maybe akrasia...)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-07-03T17:30:05.382Z · LW(p) · GW(p)

Where and what are these "pleasure centers", exactly?

comment by Vaniver · 2014-05-05T11:14:17.565Z · LW(p) · GW(p)

Although it's not technically possible yet, measuring the intensity of the positive and negative components of an experience sounds like something that ought to be at least possible in principle.

I don't see how having a quantitative, empirical measure which is appropriate for one individual helps you with comparisons across individuals. Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-05T11:25:26.135Z · LW(p) · GW(p)

I was assuming that the measure would be valid across individuals. I wouldn't expect the neural basis of suffering or pleasure to vary so much that you couldn't automatically adapt it to the brains in question.

Do we really want to make people utility monsters because their neural currents devoted to measuring happiness have a higher amperage?

Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that's an objection to hedonistic utilitarianism rather than the measure.

Replies from: Vaniver
comment by Vaniver · 2014-05-05T11:38:11.425Z · LW(p) · GW(p)

I was assuming that the measure would be valid across individuals.

I mean, the measure is going to be something like an EEG or an MRI, where we determine the amount of activity in some brain region. But while measuring the electrical properties of that region is just an engineering problem, and the units are the same from person to person, and maybe even the range is the same from person to person, that doesn't establish the ethical principle that all people deserve equal consideration (or, in the case of range differences or variance differences, that neural activity determines how much consideration one deserves).

Well yes, hedonistic utilitarianism does make it possible in principle that Felix ends up screwing us over, but that's an objection to hedonistic utilitarianism rather than the measure.

It's not obvious to me that all agents deserve the same level of moral consideration (i.e. I am open to the possibility of utility monsters), but it is obvious to me that some ways of determining who should be the utility monsters are bad (generally because they're easily hacked or provide unproductive incentives).

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-05T12:07:43.883Z · LW(p) · GW(p)

Well it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world: they would go around maximizing pleasure in the-kinds-of-agents-they-care-about, where the pattern is just a way of measuring and establishing what kinds of interventions actually do increase pleasure. (We are talking about humans, not FAI design, right?) If there are ways of hacking the pattern or producing it in ways that don't actually correlate with pleasure (of the kind that we care about), then those can be identified and ignored.

Replies from: Vaniver
comment by Vaniver · 2014-05-05T12:43:54.890Z · LW(p) · GW(p)

Well it's not like people would go around maximizing the amount of this particular pattern of neural activity in the world

Depending on your view of human psychology, this doesn't seem like that bad a description, so long as we're talking about people only maximizing their own circuitry. (Maximizing is probably the wrong word; it's more like keeping it within some reference range.)

We are talking about humans, not FAI design, right?

That's what I had that in mind, yeah.


My core objection, which I think lines up with SaidAchmiz's, is that even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2014-05-05T15:01:07.122Z · LW(p) · GW(p)

even if there's the ability to measure people's satisfaction objectively (so that we can count the transparency problem as solved), that doesn't tell us how to make satisfaction tradeoffs between individuals.

I agree with this. I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent, but I do not have an objection to the argument that the mapping from subjective states to math is underspecified. (Though I don't see this as a serious problem for utilitarianism: it only means that different people will have different mappings rather than there being a single unique one.)

Replies from: SaidAchmiz, tog
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T18:15:08.427Z · LW(p) · GW(p)

I was originally only objecting to the argument that aggregating utility between individuals would be impossible or incoherent

Er, hang on. If this is your objection, I'm not sure that you've actually said what's wrong with said argument. Or do you mean that you were objecting to the applicability of said argument to hedonistic utilitarianism, which is how I read your comments?

Replies from: Kaj_Sotala, Kaj_Sotala
comment by Kaj_Sotala · 2014-05-06T05:29:27.703Z · LW(p) · GW(p)

To add to my "yes": I agree with the claim that aggregating utility between individuals seems to be possibly incoherent in the context of preference utilitarianism. Indeed, if we define utility in terms of preferences, I'm even somewhat skeptical of the feasibility of optimizing the utility of a single individual over their lifetime: see this comment.

comment by Kaj_Sotala · 2014-05-05T18:23:24.099Z · LW(p) · GW(p)

Or do you mean that you were objecting to the applicability of said argument to hedonistic utilitarianism

Yes.

comment by tog · 2014-05-05T17:53:41.646Z · LW(p) · GW(p)

Kaj, is there somewhere you lay out your ethical views in more detail?

Replies from: tog, Kaj_Sotala
comment by tog · 2014-05-05T17:54:26.158Z · LW(p) · GW(p)

Ditto for Vaniver and Said.

Replies from: Vaniver, SaidAchmiz, SaidAchmiz
comment by Vaniver · 2014-05-05T19:21:24.695Z · LW(p) · GW(p)

Ditto for Vaniver and Said.

I approve of virtuous acts, and disapprove of vicious ones.

In terms of labels, I think I give consequentialist answers to the standard ethical questions, but I think most character improvement comes from thinking deontologically, because of the tremendous amount of influence our identities have on our actions. If one thinks of oneself as humble, there are many known ways in which that changes how one acts. One's abstract, far-mode views are likely to change only one's speech, not one's behavior. Thus, I don't put all that much effort into theories of ethics, and try to put effort instead into acting virtuously.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T19:54:01.340Z · LW(p) · GW(p)

Interestingly, it seems our views are complementary, not contradictory. I would (I think) be willing to endorse what you said as a recipe for implementing the views I describe.

comment by Said Achmiz (SaidAchmiz) · 2014-05-05T18:57:32.851Z · LW(p) · GW(p)

There is no such centralized place, no; I've alluded to my views in comments here and there over the past year or so, but haven't laid them out fully. (Then again, I'm a member of no movements that depend heavily on any ethical positions. ;)

Truth be told — and I haven't disguised this — my ethical views are not anywhere near completely fleshed-out. I know the general shape, I suppose, but beyond that I'm more sure about what I don't believe — what objections and criticisms I have to other people's views — than about what I do believe. But here's a brief sketch.

I think that consequentialism, as a foundational idea, a basic approach, is the only one that makes sense. Deontology seems to me to be completely nonsensical as a grounding for ethics. Every seemingly-intelligent deontologist to whom I've spoken (which, admittedly, is a small number — a handful of people here in LessWrong) has appeared to be spouting utter nonsense. Deontology has its uses (see Bostrom's "An Infinitarian Challenge to Aggregative Ethics", and this post by Eliezer, for examples), but there it's deployed for consequentialist reasons: we think it'll give better results. I've seen the view expressed that virtue ethics is descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind once you've decided on your object-level moral views, and that seems like a more-or-less reasonable stance to take. As an actual philosophical grounding for morality, virtue ethics is nonsense, but perhaps that's fine, given the above. Consequentialism actually makes sense. Consequences are the only things that matter? Well, yes. What else could there be?

As far as varieties of consequentialism go... I think intended and foreseeable consequences matter when evaluating the moral rightness of an act, not actual consequences; judging based on actual consequences seems utterly useless, because then you can't even apply decision theory to the problem of deciding how to act. Judging on actual consequences also utterly fails to accord with my moral intuitions, while judging on intended and foreseeable consequences fits quite well.

I tend toward rule consequentialism rather than act consequentialism; I ask not "what would be the consequences of such an act?", but "what sort of world would it be like, where [a suitably generalized class of] people acted in this [suitably generalized] way? Would I want to live in such a world?", or something along those lines. I find act consequentialism to be too often short-sighted, and open to all sorts of dilemmas to which rule consequentialism simply does not fall prey.

I take seriously the complexity of value, and think that hedonistic utilitarianism utterly fails to capture that complexity. I would not want to live in a world ruled by hedonistic utilitarians. I wouldn't want to hand them control of the future. I generally think that preferences are what's important, and ought to be satisfied — I don't think there's any such thing as intrinsically immoral preferences (not even the preference to torture children), although of course one might have uninformed preferences (no, Mr. Example doesn't really want to drink that glass of acid; what he wants is a glass of beer, and his apparent preference for acid would dissolve immediately, were he apprised of the facts); and satisfying certain preferences might introduce difficult conflicts (the fellow who wants to torture children — well, if satisfying his preferences would result in actual children being actually tortured, then I'm afraid we couldn't have that). "I prefer to kill myself because I am depressed" is genuinely problematic, however. That's an issue that I think about often.

All that seems like it might make me a preference utilitarian, or something like it, but as I've said, I'm highly skeptical about the possibility or even coherence of aggregating utility across individuals, not to mention the fact that I don't think my own preferences adhere to the VNM axioms, and so it may not even be possible to construct a utility function for all individuals. (The last person with whom I was discussing this stopped commenting on LessWrong before I could get hold of my copy of Rational Choice in an Uncertain World, but now I've got it, and I'm willing to discuss the matter, if anyone likes.)
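
For readers who want the formal statement being leaned on here, below is a hedged sketch of the standard von Neumann-Morgenstern result (textbook material, not anything specific to this thread): the axioms give each person a utility function, but only up to a positive affine transformation, which is exactly why they say nothing about how to sum utilities across people.

```latex
% Standard VNM representation theorem (sketch). If a preference relation
% \succsim over lotteries satisfies completeness, transitivity, continuity
% and independence, there is a function u such that
\[
  L_1 \succsim L_2 \iff \mathbb{E}_{L_1}[u] \ge \mathbb{E}_{L_2}[u],
\]
% and u is unique only up to a positive affine transformation:
\[
  u'(x) = a\,u(x) + b, \qquad a > 0.
\]
% Since each person's (a, b) can be chosen independently, an interpersonal
% sum such as \sum_i u_i is not determined by the axioms alone.
```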

I don't think it's obvious that all beings that matter, matter equally. I don't see anything wrong with valuing my mother much more than I value a randomly selected stranger in Mongolia. It's not just that I do, in fact, value my mother more; I think it's right that I should. My family and friends more than strangers; members of my culture (whatever that means, which isn't necessarily "nation" or "country" or any such thing, though these things may be related) more than members of other cultures... this seems correct to me. (This seems to violate both the "equal consideration" and "agent-neutrality" aspects of classical utilitarianism, to again tie back to the SEP breakdown.)

As far as who matters — to a first approximation, I'd say it's something like "beings intelligent and self-aware enough to consciously think about themselves". Human-level intelligence and subjective consciousness, in other words. I don't think animals matter. I don't think unborn children matter, nor do infants (though there are nonetheless good reasons for not killing them, having to do with bright lines and so forth; similar considerations may protect the severely mentally disabled, though this is a matter which requires much further thought).

Do these thoughts add up to a coherent ethical system? Unlikely. They're what I've got so far, though. Hopefully you find them at least somewhat useful, and of course feel free to ask me to elaborate, if you like.

comment by Said Achmiz (SaidAchmiz) · 2014-05-07T07:36:50.550Z · LW(p) · GW(p)

Out of curiosity, what was your reason for asking about my ethical views in detail? I did somewhat enjoy writing out that comment, but I'm curious as to whether you were planning to go somewhere with this.

Replies from: tog
comment by tog · 2014-05-08T06:56:35.860Z · LW(p) · GW(p)

I'm glad you enjoyed it; you're right that I didn't go anywhere with it - I got distracted by other things. But it was partly a sort of straw poll to supplement the survey, and partly connected to these concerns: http://lesswrong.com/lw/k60/2014_survey_of_effective_altruists/aw1p

comment by Kaj_Sotala · 2014-05-05T18:25:43.375Z · LW(p) · GW(p)

No big systematic overview, though several comments and posts of mine touch upon different parts of them. Is there anything in particular that you're interested in?

Replies from: tog
comment by tog · 2014-05-06T23:40:58.747Z · LW(p) · GW(p)

If I could ask two quick questions, it'd be whether you're a realist and whether you're a cognitivist. The preponderance of those views within EA is what I've heard debated most often. (This is different from what first made me ask, but I'll drop that.)

I know Jacy Anthis - thebestwecan on LessWrong - has an argument that realism combined with the moral beliefs about future generations typical among EAs suggests that smarter people in the future will work out a more correct ethics, and that this should significantly affect our actions now. He rejects realism, and thinks this is a bad consequence. I think it actually doesn't depend on realism, but rather on most forms of cognitivism, for instance ones on which our coherent extrapolated view is correct. He plans to write about this.

Replies from: Kaj_Sotala, thebestwecan
comment by Kaj_Sotala · 2014-05-07T07:27:54.408Z · LW(p) · GW(p)

Definitely not a realist. I haven't looked at the exact definitions of these terms very much, but judging from the Wikipedia and SEP articles that I've skimmed, I'd call myself an ethical subjectivist (which apparently does fall under cognitivism).

comment by thebestwecan · 2014-05-07T00:05:43.515Z · LW(p) · GW(p)

I believe the prevalence of moral realism within EA is risky and bad for EA goals for several reasons, one of which is that moral realists tend to believe in the inevitability of a positive far-future (since smart minds will converge on the "right" morality), which tends to make them focus on ensuring the existence of the far future at the cost of other things.

If smart minds will converge on the "right" morality, this makes sense, but I severely doubt that is true. It could be true, but that possibility certainly isn't worth sacrificing other goals of improvement.

And I think trying to figure out the "right" morality is a waste of resources for similar reasons. CEA has expressed the views I argue against here, which has other EAs and me concerned.

comment by tog · 2014-05-05T17:45:24.665Z · LW(p) · GW(p)

Can you suggest some? These could go into next year's survey, though we're keeping that short - more likely they'd go into a followup that Ben Landau-Taylor of Leverage Research is running.

comment by [deleted] · 2014-05-11T04:42:22.180Z · LW(p) · GW(p)

Why are you taking the effective altruists survey?

Replies from: SaidAchmiz
comment by thebestwecan · 2014-05-02T18:53:26.585Z · LW(p) · GW(p)

I think it'd be interesting to know more about the specific ethical views of ethically-minded EAs, but the majority of EAs are not well-versed enough to make Utilitarianism vs. Other Consequentialism distinctions. It's good to make a big survey like this as easy to fill out as possible.

Same thing about the "political views" point, although there are standards for left vs. right across countries: http://en.wikipedia.org/wiki/Left%E2%80%93right_politics

Replies from: SaidAchmiz, tog
comment by Said Achmiz (SaidAchmiz) · 2014-05-05T06:18:27.169Z · LW(p) · GW(p)

the majority of EAs are not well-versed enough to make Utilitarianism vs. Other Consequentialism distinctions

I think that's a problem! (I discuss in this comment some reasons why.)

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-05-09T17:52:07.725Z · LW(p) · GW(p)

Whether or not it's a problem, a survey is not a good place to address it. You have to ask questions people will be able to easily answer if you want to get useful data.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-10T01:11:45.859Z · LW(p) · GW(p)

You have to ask questions people will be able to easily answer if you want to get useful data.

That's true, but it is also an inherently problematic approach if (as will almost certainly be the case when it comes to issues of ethics, politics, etc.) the things you really want to know are not easily elicited by questions that people will be able to easily answer, and vice versa — the questions that people can easily answer don't actually tell you what you really want to know about those people's views, attitudes, etc.

In any case, what I meant wasn't that "EAs are not well-versed enough in moral philosophy" is a problem for the survey — what I meant was that it's a problem for the EA movement.

comment by tog · 2014-05-02T21:00:18.734Z · LW(p) · GW(p)

I agree about consequentialism. Also, at that level of detail I can't see a way it's action-relevant (whereas if most EAs say they have no knowledge of ethical theories, that suggests a non-philosophical audience is more receptive than some have thought).

We should have explained that political terms were what you'd naturally describe yourself as in your country. Do people think most will have interpreted them thus? If so, we can cross-tabulate them against country.

If not, would this make many people more than one point out along the spectrum? I'd have thought that an American who describes themselves as 'left' is at least 'centre left' in Europe, and so on.
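
As a concrete illustration of the cross-tabulation described above, here is a minimal sketch; the column names and rows are hypothetical, since the actual survey export isn't shown here.

```python
# Minimal sketch: count political self-descriptions within each country,
# so that answers like "left" can be read relative to national context.
# Column names and data are hypothetical, not the real survey's schema.
import pandas as pd

responses = pd.DataFrame({
    "country":  ["US", "US", "UK", "Finland", "Finland", "US"],
    "politics": ["left", "centre left", "left", "left", "centre left", "libertarian"],
})

# Rows are countries, columns are political labels, cells are respondent counts.
table = pd.crosstab(responses["country"], responses["politics"])
print(table)
```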

Replies from: Kaj_Sotala, Drayin
comment by Kaj_Sotala · 2014-05-15T06:55:00.873Z · LW(p) · GW(p)

If not, would this make many people more than one point out along the spectrum?

Quite possibly. At least in Finland, the word "left" refers to people who tend to have at least a rough familiarity with actual Marxist theories and still endorse many of them, and tend to use the word "capitalism" as a negative term. It also includes actual outright communists who want to go to a planned economy, though they're a fringe group even here and mostly dying out. Still, it's my impression that "Left" means something very much further to the right in the US.

I've frequently heard it said that the average American leftist would be considered a clear right-winger in Finland, though I don't have enough familiarity with the exact positions of American leftists to be able to tell whether that's true.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-05-15T16:29:46.365Z · LW(p) · GW(p)

I don't have enough familiarity with the exact positions of American leftists to be able to tell whether that's true.

It's hard to say anything coherent about the U.S. "left" and "right" without antagonizing both groups, but my $0.02:

  • I'd characterize the typical U.S. leftist as not really having the foggiest clue about Marx beyond his having some vaguely important relationship to Soviet-style communism, and as not having a clear stance regarding communism or capitalism... either because they actively support a mixed economy, or because they are confused about economics. (I don't mean to imply here that Americans who do have a clear stance aren't confused.)

  • While outright communists are generally considered "left" in the U.S., much as outright fascists are generally considered "right" (though some disagree), neither group is terribly relevant; they exist mostly as extremes to rhetorically compare our political opponents to. "So-and-so is a communist/fascist" gets said a lot, but if one were to respond to that claim by discussing various points of non-congruence with communism or fascism this would likely be seen as sophistry rather than on-point analysis.

  • The "left" tends to support government intervention to enforce equal treatment of some genders, ethnicities and sexual orientations, to enforce wealth distribution, and to provide communal access to various goods (of which the most fractious right now is health insurance, which has become a proxy for health care). Domestically, this intervention is usually framed in terms of government-regulated markets rather than straight-up government control of the means of production or distribution, although there are exceptions.

  • Also, the "left" is generally associated with minimizing restrictions on abortion and contraception, maximizing restrictions on firearms, unionizing labor, increasing the political influence of feminism (and "social justice" more generally), and decreasing the political influence of Christianity (and religion more generally), and decreasing support for the military, while the "right" is generally associated with the opposites, though to my mind these are more like historical accidents that could have gone either way.

  • The Democratic Party is seen as "left" and more popular in high-density urban areas; the Republican Party is seen as "right" and popular in more rural areas. There's a substantial group of "independent" voters but they tend to support one party over the other.

Replies from: Nornagest
comment by Nornagest · 2014-05-15T17:00:20.715Z · LW(p) · GW(p)

I'd characterize the typical U.S. leftist as not really having the foggiest clue about Marx beyond his having some vaguely important relationship to Soviet-style communism, and as not having a clear stance regarding communism or capitalism... either because they actively support a mixed economy, or because they are confused about economics.

I think this is true, but with the caveat that a lot of the memes circulating among educated leftists in the US are basically Marxian in their approach to class and economics. Usually not orthodox Marxist, though, and they fall well short of cohering into a complete Marxian analysis anywhere outside of sociology departments and the odd punk show.

Joe Left is generally not aware of this. Joe Right probably has a confused idea of the relation ("communist" is a dirty word in the US, so right-wing news outlets don't miss opportunities to use it), but is unaware of the Marxian/Marxist distinction and thinks it makes Joe Left an outright commie.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-05-15T18:18:33.380Z · LW(p) · GW(p)

I don't know enough about Marxianism (either orthodox or heterodox) to have a useful opinion about how popular Marxian memes are among the US left (or, for that matter, the US right), but I certainly agree that that's a different question than how well informed J Left is about Marx, and an interesting one.

comment by Drayin · 2014-05-03T01:38:41.605Z · LW(p) · GW(p)

I'm not so sure; in terms of their actual policies, I hear the British Conservatives are pretty close to the US Democrats. They're cutting services for the poor, but to a level above that found in the US. That does typically show inclinations similar to those of US Republicans, but it could also reflect a view about the optimal end level of services similar to that of some Democrats. So I guess it depends on what it shows most often, and whether those inclinations are most informative for the purposes of understanding people (e.g. in this survey).

comment by mare-of-night · 2014-05-12T14:14:05.004Z · LW(p) · GW(p)

Taking this was an interesting feeling. In particular, being asked (even anonymously) about donations and other concrete actions in a context where donating a lot is the norm. The scene in HP:MOR where the phoenix asks Hermione who she's saved comes to mind. That is, being asked just made it very obvious that I believe I should be an effective altruist, but from my actions it doesn't look like I am one. I have reasons for that, but it's still worrying, since I don't have much evidence that I won't just change my mind once I do have money.

For what it's worth, I just set up a bunch of email reminders throughout my last semester to make sure I put some kind of donation plan in place by the time I start working (even if it's "nevermind, I was wrong about my values").

Replies from: mare-of-night
comment by mare-of-night · 2015-07-26T15:56:32.788Z · LW(p) · GW(p)

That was a weird feeling; I didn't realize that this was my own comment, and only checked the username when that last paragraph seemed eerily familiar.

As a follow-up: I got a good full-time job starting in January 2015. I've got 10% of post-tax earnings from my internships set aside in a savings account to donate when Givewell announces 2015 recommendations, and I'll add 5% of this year's pre-tax salary to that donation also. Nothing actually donated yet, but it seems really unlikely that I won't do it. I'm planning to keep donating 5% of pre-tax as a token amount for the next few years, and have a few plans for how I might be able to donate more later. I was several months late in deciding to do this and setting up the savings account, so my reminder emails didn't work perfectly, but in the end I did it.

comment by Peter Wildeford (peter_hurford) · 2014-05-02T17:02:57.638Z · LW(p) · GW(p)

Exciting to see that Peter Singer took the survey!

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2014-05-02T17:09:14.654Z · LW(p) · GW(p)

Was it definitely Peter Singer?

Replies from: peter_hurford, thebestwecan
comment by Peter Wildeford (peter_hurford) · 2014-05-02T18:51:45.189Z · LW(p) · GW(p)

As near as we can tell, but we're reaching out to verify.

comment by thebestwecan · 2014-05-02T18:54:42.908Z · LW(p) · GW(p)

Yes, I contacted him personally to fill it out. We used personal contacts as much as possible to avoid biased sampling (as many EAs don't frequent online forums like LW and Facebook).

Replies from: tog
comment by tog · 2014-05-02T21:00:43.146Z · LW(p) · GW(p)

[confused comment, ignore]

Replies from: thebestwecan
comment by thebestwecan · 2014-05-06T23:46:59.342Z · LW(p) · GW(p)

Uh, I contacted him. Tom, this is on the survey planning document :P

comment by Gunnar_Zarncke · 2014-05-02T06:34:02.148Z · LW(p) · GW(p)

It is not clear whether non-EAs (whatever that exactly means) should participate in this survey. My first reaction was: "I'm not really an EA. Should I take the survey? Maybe not."

I'd think as many people as possible should take this survey to avoid selection biases.

EDIT: I took the survey.

Replies from: tog
comment by tog · 2014-05-02T16:23:53.614Z · LW(p) · GW(p)

Agreed that as many people as possible should take it. The first question asks whether you self-identify as an 'EA', and clarifies that we'd also like the responses of those who don't.

Replies from: Lumifer
comment by Lumifer · 2014-05-02T16:32:49.518Z · LW(p) · GW(p)

For people who are not EAs, a lot of questions make little sense.

Replies from: peter_hurford, thebestwecan
comment by Peter Wildeford (peter_hurford) · 2014-05-02T17:00:04.520Z · LW(p) · GW(p)

If the questions don't make sense, then either answer them as best you can or don't answer them. We're just looking to make sure that we minimize as much as possible our "I'm not really that EA, so I won't take the survey" sample bias.

Replies from: mare-of-night
comment by mare-of-night · 2014-05-12T13:16:14.737Z · LW(p) · GW(p)

I appreciate that you did this - I wanted to give you information, but I'm also not very EA and kind of insecure about that, so I probably would have quit midway through the survey if there were too many questions that seemed like they weren't for me.

comment by thebestwecan · 2014-05-02T18:55:59.605Z · LW(p) · GW(p)

Yes. Many non-EA results will include lots of "unsure/unfamiliar with the options" responses.

comment by Oscar_Cunningham · 2014-05-01T15:56:29.964Z · LW(p) · GW(p)

The question "When did you first hear the term 'effective altruism'?" is tricky because that term was only invented in late 2011, after many of us had heard about effective altruism itself.

Replies from: tog, thebestwecan
comment by tog · 2014-05-01T16:21:55.536Z · LW(p) · GW(p)

Yes - 2012 in practice. To make the question precise, it clarifies that it refers to the term. It would also be interesting to know when people first heard of EA avant la lettre - this could mean many things, but hearing of an EA org certainly counts. For my part I heard of GWWC in 2010, from Pablo Stafforini (benthamite here). I read Peter Unger's book Living High and Letting Die in about 2002, which argues for giving large amounts to effective charities, and perhaps contains the first mention of Earning to Give.

Replies from: thebestwecan
comment by thebestwecan · 2014-05-01T16:23:42.048Z · LW(p) · GW(p)

I think some people might have us beat 300 years for EtG ;)

http://www.jefftk.com/p/history-of-earning-to-give-iii-john-wesley

comment by thebestwecan · 2014-05-01T16:19:14.777Z · LW(p) · GW(p)

Effective Altruism was used several years before CEA adopted the term. If you heard it before that time, please put the earlier date. However, yes, many people will put dates after CEA's adoption (or even after Singer's TED Talk, which seems to be what finally cemented the term).

Replies from: tog, jkaufman
comment by tog · 2014-05-01T16:53:21.529Z · LW(p) · GW(p)

Are you sure it was used beforehand, Jacy? Are there instances you can remember?

Replies from: thebestwecan
comment by thebestwecan · 2014-05-01T17:36:01.690Z · LW(p) · GW(p)

It was used in the Felicifia community, although it wasn't used as definitively as it is now. 'Strategic altruism' was more common, though that wasn't as catchy. It was also just used in casual conversation.

I could be wrong though.

Replies from: DjangoCorte, tog, jkaufman
comment by DjangoCorte · 2014-05-01T17:55:18.883Z · LW(p) · GW(p)

This 'official' account gives the impression that no term had much common currency before the end of 2011, apart from the jokey 'super-hardcore do-gooder'. I can't comment on whether other branches of the community used terms in a similar way - I've never heard of Felicifia. http://www.effective-altruism.com/the-history-of-the-term-effective-altruism/

Replies from: Drayin
comment by Drayin · 2014-05-03T19:27:15.502Z · LW(p) · GW(p)

lukeprog (Luke Muehlhauser) objects to CEA's claim that EA grew primarily out of Giving What We Can at http://www.effectivealtruism.org/#comments [? · GW] :

This was a pretty surprising sentence. Weren’t LessWrong & GiveWell growing large, important parts of the community before GWWC existed? It wasn’t called “effective altruism” at the time, but it was largely the same ideas and people.

Replies from: thebestwecan, DjangoCorte
comment by thebestwecan · 2014-05-06T23:49:17.320Z · LW(p) · GW(p)

I agree with Luke here. CEA seems to often overstate its role in the EA movement (another example at http://centreforeffectivealtruism.org/).

comment by DjangoCorte · 2014-05-04T09:04:29.386Z · LW(p) · GW(p)

I certainly agree that effective altruism existed long before GWWC.

The discussion I'm addressing, though, is about the origin of the term "effective altruist."

comment by tog · 2014-05-01T17:49:11.516Z · LW(p) · GW(p)

That's interesting, especially if someone can find a link. Here's a date-based Google search, though a cursory glance doesn't reveal any references where the term itself was included before 2012:

https://www.google.ca/search?q=%22effective+altruism%22&client=firefox-a&hs=1mw&rls=org.mozilla%3Aen-US%3Aofficial&channel=sb&sa=X&ei=pohiU5jjINSyyASUgIGgBQ&ved=0CB0QpwUoBg&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2008%2Ccd_max%3A12%2F31%2F2011&tbm=

comment by jefftk (jkaufman) · 2014-05-09T17:43:11.366Z · LW(p) · GW(p)

It was used in the Felicifia community

The first mention I find on the Felicifia site is from 2012.

(As a check, the first entry I find for "suffering" is 2007.)

(And trying this search with Felicifia's search tool gives "The following words in your search query were ignored because they are too common words: altruism effective.")

comment by jefftk (jkaufman) · 2014-05-09T17:33:46.438Z · LW(p) · GW(p)

Effective Altruism was used several years before CEA adopted the term.

When I wrote "A Name For A Movement?" in March 2012, "Effective Altruism" was a name in circulation, but other names like "Smart Giving" and "Optimal Philanthropy" were more common.

Replies from: tog
comment by tog · 2014-05-11T17:31:08.976Z · LW(p) · GW(p)

For people who haven't read CEA's post on this, that was after their vote in December 2011. When I participated in it I don't remember anyone discussing it as already in use; I'd expect someone would remember this if it was in use.

comment by SoerenMind · 2014-05-08T12:39:35.840Z · LW(p) · GW(p)

Just to let you guys know: as with the LW survey, I wouldn't have minded filling in an optional 'extended section'. I imagine you made the survey shorter in order not to scare people off.

Replies from: tog
comment by tog · 2014-05-08T17:26:43.110Z · LW(p) · GW(p)

Thanks, that's helpful to know. Jacy Anthis suggested that, and I was the main person keeping it short. I was going to link at the end to a follow-up survey Ben Landau-Taylor was preparing, but it wasn't ready in time.

In general, how did people find the length of the survey, would they have filled in more, and would they have followed a link to more questions?

Replies from: Drayin, SoerenMind
comment by Drayin · 2014-05-08T17:37:31.925Z · LW(p) · GW(p)

Knowing nothing about the survey beforehand, I would have filled in a much longer survey - but then, I'm a survey junkie; I even got a long way into the 45-minute Yale survey.

comment by SoerenMind · 2014-05-12T11:46:06.125Z · LW(p) · GW(p)

Once I had already started the survey, then, as I said, I wouldn't have minded filling in more. If it had been announced in advance as a longer survey, I imagine the initial barrier would have been higher for many, though.

Personally, I would have filled it in even if it were longer, since I think it's important. With a different topic, though, the extra length could have put me off.

comment by tog · 2014-05-03T02:19:08.402Z · LW(p) · GW(p)

I'd love to hear thoughts connected to the LessWrong censuses: comparisons, lessons learnt, feedback on our survey, thoughts on how EAs and LessWrongers may differ, etc. The censuses have been going on a long time, and have a lot of data, so this would be interesting.

Replies from: Drayin
comment by Drayin · 2014-05-03T02:25:09.846Z · LW(p) · GW(p)

Can anyone involved in the census say whether it reached people wholly or mainly through a post on http://lesswrong.com/promoted/ ? That'd be pretty powerful if it can get 1500+ responses - it would be great if this post could be promoted too, as many people are putting a lot of effort into sharing the EA survey widely! How can we make promotion happen?

comment by KatWoods (ea247) · 2014-05-01T18:58:12.950Z · LW(p) · GW(p)

I predict around 35% of people will support meta and x-risk causes (like 80k, GWWC operating costs, MIRI, FHI, etc.).

comment by tog · 2014-05-01T14:38:30.250Z · LW(p) · GW(p)

[Thread for making and discussing predictions]

To expand on my predictions, I think that global poverty will be the most popular cause except among those who say they heard of EA through LessWrong (whose numbers I'll be interested to see). I also think that skepticism/atheism will be the other social movement with which most identify, and atheism the most popular religious position. In the link Jacy Anthis has given a full set of predictions to test his accuracy.

Replies from: Pat, Drayin, kdbscott, thebestwecan
comment by Pat · 2014-05-01T22:49:46.973Z · LW(p) · GW(p)

Here are my predictions (on Prediction Book):

Replies from: tog
comment by tog · 2014-05-01T23:12:26.073Z · LW(p) · GW(p)

Fun site! Once you register, it lets you assign probabilities to predictions others have made.

comment by Drayin · 2014-05-03T01:44:18.698Z · LW(p) · GW(p)

I predict:

  • utilitarianism's the most common philosophy
  • a clear majority will be non-religious, and respondents often identify with skepticism/atheism as a social movement
  • a clear majority are left wing
  • most respondents are under 30, with 50% students
  • people often heard of EA through Peter Singer

And the most significant outcome:

  • There will be many non-students without significant donations, which in my view is not a good thing at all

comment by kdbscott · 2014-05-01T23:02:57.928Z · LW(p) · GW(p)

Good point about LW affiliation - in addition, results are highly dependent on how the survey is distributed. This makes broad predictions difficult, but more specific predictions (like >80% of those with an LW affiliation will identify as atheist/agnostic) might be the way to go.

I'm still getting familiar with this community, but I suppose it's a fun exercise, so I've added some thoughts to the Excel sheet.

Replies from: Drayin
comment by Drayin · 2014-05-03T01:46:06.062Z · LW(p) · GW(p)

Yes, the survey asks where you heard of the survey itself, what groups you're a member of, and where you first heard of EA: LessWrong is a candidate answer for each. So you can make predictions for specific groups.

comment by thebestwecan · 2014-05-01T16:21:44.894Z · LW(p) · GW(p)

I definitely agree LW affiliation will be a major predictor of other results. Perhaps I should have made two sets of predictions (one for LW folks, one for others). - Jacy

Replies from: Drayin, EricHerboso
comment by Drayin · 2014-05-01T18:00:09.824Z · LW(p) · GW(p)

One thing that would be really interesting is comparing EA-LW folks with both the standard EA answers and the standard LW survey answers.

comment by EricHerboso · 2014-05-01T19:41:50.778Z · LW(p) · GW(p)

Just to be clear, it wouldn't be "LW affiliation"; it would be "heard of EA through LW". I'm sure there are quite a few like me who learned about LW through EA, not the other way around.

Replies from: tog
comment by tog · 2014-05-01T20:13:31.625Z · LW(p) · GW(p)

There are questions both about whether you're a LessWrong member and whether you first heard of EA through LessWrong, so we can get data on both.

comment by Said Achmiz (SaidAchmiz) · 2014-05-01T20:11:12.616Z · LW(p) · GW(p)

What qualifies one as an effective altruist for the purposes of this survey? Is it "self-identifies as an effective altruist"? Or something else?

Also:

were altruistic before becoming EAs

This phrase strongly suggests that the EA community needs to more clearly describe what it is they mean when they use the terms "altruism" and "effective altruism" (as I've commented before).

Replies from: tog
comment by tog · 2014-05-01T20:16:56.014Z · LW(p) · GW(p)

What qualifies one as an effective altruist for the purposes of this survey? Is it "self-identifies as an effective altruist"?

Yes, the second question is:

Could you, however loosely, be described as 'an EA'? Answer no if you are not familiar with the term 'EA', which stands for 'Effective Altruist'. This question is not asking if you are altruistic and value effectiveness, but rather whether you loosely identify with the existing 'EA' identity.

This phrase strongly suggests that the EA community needs to more clearly describe what it is they mean when they use the terms "altruism" and "effective altruism" (as I've commented before).

What would you suggest? I take 'altruistic' to generally mean 'acts partly for the good of others, and is willing to make sacrifices for this end'. There's then a decent behavioural test for whether people were altruistic beforehand. There's no clear definition of being EA, besides accepting some sufficient number of EA ideas.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2014-05-01T22:31:53.588Z · LW(p) · GW(p)

This question is not asking if you are altruistic and value effectiveness, but rather whether you loosely identify with the existing 'EA' identity.

I judge this to be a problematic criterion. See this comment, esp. starting with "To put this another way ...", for why I think so.

What would you suggest? I take 'altruistic' to generally mean 'acts partly for the good of others, and is willing to make sacrifices for this end'.

That does seem like a reasonable definition, but in that form it seems rather too vague to be useful for the purposes of constructing a behavioral test. We'd have to at least begin to sketch out what sorts of acts we mean (literally any act that benefits anyone else in any way?), and what sorts of sacrifices, and how willing, etc.

There's no clear definition of being EA, besides accepting some sufficient number of EA ideas.

Quite so. My contention is that there's a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.

This is problematic for various reasons, I think. I won't clutter this thread by starting a debate on those reasons (unless asked), but I think it's at least important (and relevant to endeavors like this survey) to recognize this distinction.

Replies from: tog
comment by tog · 2014-05-01T22:59:09.895Z · LW(p) · GW(p)

I judge this to be a problematic criterion. See this comment, esp. starting with "To put this another way ...", for why I think so.

That comment makes a lot of sense. It depends what we use the criterion for. In the survey, it's to gather information, and it's for precisely this reason that I chose not to ask if people were 'EAs' in your loose sense - almost everyone would say yes. I'm curious as to what uses you think the criterion is problematic for.

That does seem like a reasonable definition, but in that form it seems rather too vague to be useful for the purposes of constructing a behavioral test.

It's a matter of degree, but in the EA context (which sets a high bar), I personally call people 'altruistic' if (but not only if) they've donated >=10% of a real income for over a year or they've consistently spent over an hour a week doing something they'd otherwise rather not do to help others.

My contention is that there's a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.

That's right, if by 'conceptually entangled' you mean 'necessarily connected', or even 'commonly accepted by both groups of people'. For example, I believe utilitarianism's widely accepted by EAs (though the survey may show otherwise!), but not entangled with merely valuing altruism and the effectiveness of altruism.

This is problematic for various reasons, I think. I won't clutter this thread by starting a debate on those reasons (unless asked), but I think it's at least important (and relevant to endeavors like this survey) to recognize this distinction.

I see no harm in thread-cluttering, at least here - go for it.

Replies from: SaidAchmiz, SaidAchmiz, Drayin
comment by Said Achmiz (SaidAchmiz) · 2014-05-01T23:29:32.005Z · LW(p) · GW(p)

This is problematic for various reasons, I think. I won't clutter this thread by starting a debate on those reasons (unless asked), but I think it's at least important (and relevant to endeavors like this survey) to recognize this distinction.

I see no harm in thread-cluttering, at least here - go for it.

Well, one issue is recruiting/evangelism/outreach/PR/etc. If you want to convince people[1] to both be altruistic and to attempt to optimize their altruism (i.e., the general form of the "effective altruism" concept), it does not do to conflate that general form with your specific form (which involves the specific, idiosyncratic ideas I listed in that comment I linked — a particular form of utilitarianism, a particular set of values including e.g. the welfare of animals, etc.).

Take me, for instance. I find the general concept to be almost obvious. (I'm an altruistic person by temperament, though I remain agnostic on whether certain forms of direct action are in fact the best way to bring about the sort of world toward which such action is ostensibly aimed, as compared with e.g. a more libertarian approach. As for the "effective" part — well, duh.) However, if you were to say: "Hey, Said Achmiz, want to join this-and-such EA group / organization / etc.? Or donate to it? Or otherwise contribute to its success?" I would demur, because in my experience, groups and organizations that self-identify as EA tend to have the aforementioned specific form of EA as their aim — and I have significant disagreements with many components of that specific form.

If you (this hypothetical organization) do not make it clear that you have, as your goal, the general form of effective altruism, and that the specific form is merely one way in which your members express it, then I won't join/contribute/etc.

If you in fact have only the specific, and not the general, form as your goal, then not only will I not join, but I will be quite cross about the fact that you would thereby be appropriating the term "effective altruism" (which would otherwise describe a perfectly reasonable concept with which I agree and a general ethical and practical stance which I support), and using it to describe something which I do not support and about which I have strong reservations, and leaving me (and others like me) without what would otherwise be the best term for a position I do support.

I have another concern, which I will discuss in a sibling comment.

Edit: Whoops, forgot to resolve the footnote:

[1] When I say "convince people", I mean both convincing non-altruists to become altruistic, and convincing ineffective altruists ("I'm a high-powered lawyer who spends every weeknight volunteering at my local soup kitchen, while giving no money to charity!") to be more effective in their altruism. I realize these two aims may require different approaches; I think those differences are tangential to my points here.

comment by Said Achmiz (SaidAchmiz) · 2014-05-05T06:16:04.234Z · LW(p) · GW(p)

Here is the promised other issue I see with the conflation of the general[1] and specific[2] forms of effective altruism.

You do not actually ever argue for the ideas making up that specific form.

It seems to go like this:

"We all think being altruistic is good, right? Of course we do. And we think it's important to be effective in our altruism, don't we? Of course. Good! Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest."

Now, as I say in my other comments, one issue is that potential newcomers to the movement might assent to those first two questions, but respond to the "Now, onwards ..." part with "whoa, whoa, where did that suddenly come from?". But the other issue is that it seems like you yourselves haven't given much thought to those positions. How do you know they're right, those philosophical and moral ideas? A lot of EA writing seems not to even consider the question! It's not like these are obvious principles you're assuming — many intelligent people, on LessWrong and elsewhere, do not agree with them!

Of course I don't actually think you've simply accepted these ideas out of some sort of blind go-alonging with some liberal crowd. This is LessWrong; I think better of you folks than that. (Although some EA-ers without an LW-or-similar background may well have given the matter just as little thought as that.) Presumably, you were, at some point, convinced of these ideas, in some way, by some arguments or evidence or considerations.

But I have no idea what those considerations are. I have no idea what convinced you; I don't know why you believe what you believe, because you hardly even acknowledge that you believe these things. In most EA writings I've seen, they are breezily assumed. That is not good for the epistemic health of the movement, I think.

I think it would be good to have some effort to clearly delineate the ideas that are held by, and commonly taken as background assumptions by, the majority of people in the EA movement; to acknowledge that these are nontrivial philosophical and moral positions, which are not shared by all people or even all who identify as rationalists; to explain how it was that you[3] became convinced of these ideas; and to lay out some arguments for said ideas, for potential disagreers to debate, if desired.

[1] "Being altruistic is good, and we should be effective in our altruistic actions."
[2] The specific cluster of ideas held by a specific community of people who describe themselves as the EA community.
[3] By "you" I don't necessarily mean you, personally, but: as many prominent figures in the EA movement as possible, and more generally, anyone who undertakes to write things intended to build the EA movement, recruit, etc.

Replies from: tog
comment by tog · 2014-05-05T17:43:50.723Z · LW(p) · GW(p)

Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest.

Global poverty EAs don't generally state or imply utilitarianism or similar views, though x-riskers do (at least those who value non-existent people). I personally favour global poverty charities, and am quite tentative in my attitudes to many mainstream ethical theories, and don't think being more so would affect my donations (though being less so might).

But the other issue is that it seems like you yourselves haven't given much thought to those positions. How do you know they're right, those philosophical and moral ideas?

The degree of thought varies a lot, sure. I agree that people should spend more time on them when they're action relevant, as they are for people who'd act to prevent x-risk if they accepted them.

In most EA writings I've seen, they are breezily assumed.

Breezy assumption isn't optimal, but detailed writing about ethical theory isn't either.

comment by Drayin · 2014-05-03T01:47:22.519Z · LW(p) · GW(p)

It's a matter of degree, but in the EA context (which sets a high bar), I personally call people 'altruistic' if (but not only if) they've donated >=10% of a real income for over a year or they've consistently spent over an hour a week doing something they'd otherwise rather not do to help others.

I apply a similarly high bar for altruism - many EAs don't count as altruistic based on this.