Four Focus Areas of Effective Altruism

post by lukeprog · 2013-07-09T00:59:40.963Z · LW · GW · Legacy · 55 comments

Contents

  Focus Area 1: Poverty Reduction
  Focus Area 2: Meta Effective Altruism
  Focus Area 3: The Long-Term Future
  Focus Area 4: Animal Suffering
  Other focus areas
  Working together

It was a pleasure to see all major strands of the effective altruism movement gathered in one place at last week's Effective Altruism Summit.

Representatives from GiveWell, The Life You Can Save, 80,000 Hours, Giving What We Can, Effective Animal Altruism, Leverage Research, the Center for Applied Rationality, and the Machine Intelligence Research Institute either attended or gave presentations. My thanks to Leverage Research for organizing and hosting the event!

What do all these groups have in common? As Peter Singer said in his TED talk, effective altruism "combines both the heart and the head." The heart motivates us to be empathic and altruistic toward others, while the head can "make sure that what [we] do is effective and well-directed," so that altruists can do not just some good but as much good as possible.

Effective altruists (EAs) tend to:

  1. Be globally altruistic: EAs care about people equally, regardless of location. Typically, the most cost-effective altruistic cause won't happen to be in one's home country.
  2. Value consequences: EAs tend to value causes according to their consequences, whether those consequences are happiness, health, justice, fairness and/or other values.
  3. Try to do as much good as possible: EAs don't just want to do some good; they want to do (roughly) as much good as possible. As such, they hope to devote their altruistic resources (time, money, energy, attention) to unusually cost-effective causes. (This doesn't necessarily mean that EAs think "explicit" cost effectiveness calculations are the best method for figuring out which causes are likely to do the most good.)
  4. Think scientifically and quantitatively: EAs tend to be analytic, scientific, and quantitative when trying to figure out which causes actually do the most good.
  5. Be willing to make significant life changes to be more effectively altruistic: As a result of their efforts to be more effective in their altruism, EAs often (1) change which charities they support financially, (2) change careers, (3) spend significant chunks of time investigating which causes are most cost-effective according to their values, or (4) make other significant life changes.

Despite these similarities, EAs are a diverse bunch, and they focus their efforts on a variety of causes.

Below are four popular focus areas of effective altruism, ordered roughly by how large and visible they appear to be at the moment. Many EAs work on several of these focus areas at once, due to uncertainty about both facts and values.

Though labels and categories have their dangers, they can also enable chunking, which has benefits for memory, learning, and communication. There are many other ways we might categorize the efforts of today's EAs; this is only one categorization.


Focus Area 1: Poverty Reduction

Here, "poverty reduction" is meant in a broad sense that includes (e.g.) economic benefit, better health, and better education.

Major organizations in this focus area include:

In addition, some well-endowed foundations seem to have "one foot" in effective poverty reduction. For example, the Bill & Melinda Gates Foundation has funded many of the most cost-effective causes in the developing world (e.g. vaccinations), although it also funds less cost-effective-seeming interventions in the developed world.

In the future, poverty reduction EAs might also focus on economic, political, or research-infrastructure changes that might achieve poverty reduction, global health, and educational improvements more indirectly, as when Chinese economic reforms lifted hundreds of millions out of poverty. Though it is generally easier to evaluate the cost-effectiveness of direct efforts than that of indirect efforts, some groups (e.g. GiveWell Labs and The Vannevar Group) are beginning to evaluate the likely cost-effectiveness of these causes.  


Focus Area 2: Meta Effective Altruism

Meta effective altruists focus less on specific causes and more on "meta" activities such as raising awareness of the importance of evidence-based altruism, helping EAs reach their potential, and doing research to help EAs decide which focus areas they should contribute to.

Organizations in this focus area include:

Other people and organizations contribute to meta effective altruism, too. Paul Christiano examines effective altruism from a high level at Rational Altruist. GiveWell and others often write about the ethics and epistemology of effective altruism in addition to focusing on their chosen causes. And, of course, most EA organizations spend some resources growing the EA movement.  


Focus Area 3: The Long-Term Future

Many EAs value future people roughly as much as currently-living people, and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the long-term future (Bostrom 2003; Beckstead 2013). Future-focused EAs aim to somewhat-directly capture these "astronomical benefits" of the long-term future, e.g. via explicit efforts to reduce existential risk.

Organizations in this focus area include:

Other groups study particular existential risks (among other things), though perhaps not explicitly from the view of effective altruism. For example, NASA has spent time identifying nearby asteroids that could be an existential threat, and many organizations (e.g. GCRI) study worst-case scenarios for climate change or nuclear warfare that might result in human extinction but are more likely to result in "merely catastrophic" damage.

Some EAs (e.g. Holden Karnofsky, Paul Christiano) have argued that even if nearly all value lies in the long-term future, focusing on nearer-term goals (e.g. effective poverty reduction or meta effective altruism) may be more likely to realize that value than more direct efforts.

 

Focus Area 4: Animal Suffering

Effective animal altruists are focused on reducing animal suffering in cost-effective ways. After all, animals vastly outnumber humans, and growing numbers of scientists believe that many animals consciously experience pleasure and suffering.

The only organization of this type so far (that I know of) is Effective Animal Activism, which currently recommends supporting The Humane League and Vegan Outreach.

Edit: There is now also Animal Ethics, Inc.

Major inspirations for those in this focus area include Peter Singer, David Pearce, and Brian Tomasik.  


Other focus areas

I could perhaps have listed "effective environmental altruism" as focus area 5. The environmental movement in general is large and well-known, but I'm not aware of many effective altruists who take environmentalism to be the most important cause for them to work on, after closely investigating the above focus areas. In contrast, the groups and people named above tend to have influenced each other, and have considered all these focus areas explicitly. For this reason, I've left "effective environmental altruism" off the list, though perhaps a popular focus on effective environmental altruism could arise in the future.

Other focus areas could later come to prominence, too.


Working together

I was pleased to see the EAs from different strands of the EA movement cooperating and learning from each other at the Effective Altruism Summit. Cooperation is crucial for growing the EA movement, so I hope that even if it’s not always easy, EAs will "go out of their way" to cooperate and work together, no matter which focus areas they’re sympathetic to.

55 comments

Comments sorted by top scores.

comment by Michelle_Z · 2013-07-08T17:40:02.227Z · LW(p) · GW(p)

I was going to post something about this in the open thread, but this post just popped up.

I've been putting together a club for Effective Altruism on my campus (Cavaliers for Effective Altruism), and I'm stuck. I can run fundraisers and donate the money to a charity GiveWell supports. My college has a system for donating to charities and fundraising, so that isn't a problem.

The difficulty is getting other people interested in the club and teaching my club members rationality, so the club continues existing after I graduate. I originally thought teaching people rationality wouldn't be necessary, but the couple of friends I mentioned this to have no idea what I'm talking about when I explain how effective altruism works. They don't have the same intuitions that I do, so it sounds odd to them. It was around then that I realized I need my club members to know some rationality. Are there any resources/guides out there for that kind of thing?

I know LessWrong is one of those resources, but I doubt many people will listen to me if I say "This week's club homework is to read x post from this blog." I have a couple of vague ideas for slipping this information into casual conversation, but they're only vague ideas. And it's hard to impart enough information through casual conversation, anyway. I think I could try doing both (have people read specific articles/books and bring it up in casual conversation), but that brings me back to the original problems: I have no idea how to teach rationality, and people don't respect me enough to listen to me if I tell them they need to know something.

I know some people here have experience in teaching rationality, so I'm fishing for any advice. My two major concerns are: how to bridge the inferential gap between myself and my club members (where do I even start?), and whether there are any other ways to teach rationality beyond the two I mentioned.

Replies from: CarlShulman, jkaufman, Raemon, Claire, William_Quixote
comment by CarlShulman · 2013-07-08T22:00:01.858Z · LW(p) · GW(p)

Giving What We Can has local chapters. They do a lot of speaker events, social events, games, etc. If you go to the Giving What We Can website, they try to keep someone available to chat with visitors at all times.

Replies from: Michelle_Z
comment by Michelle_Z · 2013-07-09T01:02:18.580Z · LW(p) · GW(p)

Thanks! I'll check that out.

comment by jefftk (jkaufman) · 2013-07-08T18:41:02.121Z · LW(p) · GW(p)

Talking to THINK might be helpful. They coordinate a bunch of EA meetups at various schools. They have a set of "modules" that you could do meetups around.

Replies from: Michelle_Z
comment by Michelle_Z · 2013-07-08T19:39:22.857Z · LW(p) · GW(p)

This looks very useful! Thank you!

comment by Raemon · 2013-07-08T20:34:20.799Z · LW(p) · GW(p)

At the summit, I gave a talk on community building. One of my main theses was that it's actually better to run a rationality/self-improvement club that is also an Effective Altruism club than an EA club that's also a rationality club. You'll get people who don't just self-identify as world-savers (and who can, over time, be influenced by the world-savers).

The self-improvement/rationality group I run begins sessions by talking about our successes from the previous week, and ends with setting goals for the coming week. This means the thing that gets positively reinforced via social pressure is actually doing things, whereas with EA it's easy to simply reward signaling.

Replies from: Michelle_Z, DubiousTwizzler
comment by Michelle_Z · 2013-07-09T01:01:49.221Z · LW(p) · GW(p)

That's a good idea. I could try to advertise it that way, since I'm having major issues finding a single person at my college interested in effective altruism. I might be wrong, but do you think it would be harder to get people interested in rationality, or to get them interested in effective altruism? My priors tell me that charity > rationalism in many people's minds, but I'm not sure.

EDIT: I decided to go with the rationality club idea. There's no real advantage in my original plan compared to opening a THINK club, which is basically the same idea except I can do more fun things with it. Thanks for the advice!

comment by DubiousTwizzler · 2013-07-12T21:25:26.458Z · LW(p) · GW(p)

I'm a student interested in building a Rationality/Effective Altruism Club. Was this talk recorded? Because I would be interested in watching/reading it, if you have a YouTube link, etc.

Replies from: Raemon
comment by Raemon · 2013-07-12T23:21:25.813Z · LW(p) · GW(p)

It was not recorded, but I plan to write it up soon.

comment by Claire · 2013-07-09T01:59:19.971Z · LW(p) · GW(p)

Try a giving game. http://www.givingwhatwecan.org/blog/2013-06-02/how-giving-games-can-spread-the-word-about-smarter-charity-choices-0

Avoid "teaching" and instead set up conversations and activities that introduce these ideas. Many people resist it their peers try to "educate" them. Look for movie, comics, webshorts, etc. that can start off the conversation in the right direction.

Remember that it will take people time to become comfortable with these ideas. Look to make progress over the course of months and years, not hours.

Good luck!

comment by William_Quixote · 2013-07-08T17:57:39.642Z · LW(p) · GW(p)

I've noticed similar situations as well. The Sequences did a pretty good job conveying information to me, but I'm a math guy who grew up reading sci-fi and watching anime, so I'm about as close to the target demographic as it's possible to be. I've often wished for a less flavorful, more generic/corporate version of the content in the Sequences that I could point people outside the target demographic towards.

comment by Adriano_Mannino · 2013-07-13T03:10:21.797Z · LW(p) · GW(p)

Thanks, Luke, great overview! Just one thought:

Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future

This suggests that alternative views are necessarily based on ethical time preference (and time preference seems irrational indeed). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill the universe with happy people. I think there is something true and important about the slogan "Make people happy, not happy people", although explaining that something is non-trivial.

Replies from: Nick_Beckstead, lukeprog
comment by Nick_Beckstead · 2013-07-20T18:04:24.668Z · LW(p) · GW(p)

It's not really clear to me that negative utilitarians and people with person-affecting views need to disagree with the quoted passage as stated. These views focus primarily on the suffering aspect of well-being, and nearly all of the possible suffering is found in the astronomical numbers of people who could populate the far future.

To elaborate, in my dissertation, I assume--like most people would--that a future where humans have great influence would be a good thing. But I don't argue for that and some people might disagree. If that's the only thing you disagree with me about, it seems you actually still end up accepting my conclusion that what matters most is making humanity's long-term future development go as well as possible. It's just that you end up focusing on different aspects of making the long-term future development go as well as possible.

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-08-03T04:22:52.685Z · LW(p) · GW(p)

Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how to best affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course - most people seem to disagree with the conclusion. Unfortunately and sadly, though, the utility of talking about (affecting) the far future is a tricky issue too, given fundamental disagreements in population ethics.

I don't know that the "like most people would" parenthesis is true. (A "good thing" maybe, but a morally urgent thing to bring about, if the counterfactual isn't existence with less well-being, but non-existence?) I'd like to see some solid empirical data here. I think some people are in the process of collecting it.

Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".

The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.

Replies from: Nick_Beckstead
comment by Nick_Beckstead · 2013-08-03T08:34:46.052Z · LW(p) · GW(p)

Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".

I do some of that in chapter 4. I don't engage with speculative arguments that the future will be bad (e.g. the dystopian scenarios that negative utilitarians like to discuss) or make my case by appealing to positive trends of the sort discussed by Pinker in Better Angels. Carl Shulman and I are putting together some thoughts on some of these issues at the moment.

The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.

Maybe so. I think the key is how you interpret the word "value." If you interpret it as "only positive value," then negative utilitarians disagree, but only because they think there isn't any possible positive value. If you interpret it as "positive or negative value," I think they should agree for pretty straightforward reasons.

comment by lukeprog · 2013-07-13T03:17:00.478Z · LW(p) · GW(p)

Okay, I removed the "therefore."

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-07-15T21:16:32.420Z · LW(p) · GW(p)

OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there wasn't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.

It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection of presentist time preference makes the conclusions more likely.

But it's by no means sufficient for them. Bostrom and Beckstead need the further claim that bringing new people into existence is morally important and urgent.

This seems to be the crucial point. So I'd rather go for something like: "Many (Some?) EAs value/think it morally urgent to bring new people (with lives worth living) into existence, and therefore..."

The moral urgency of preventing miserable lives (or life-moments) is less controversial. People like Brian Tomasik place much more (or exclusive) importance on the prevention of lives not worth living, i.e. on ensuring the well-being of everyone that will exist rather than on making as many people exist as possible. The issue is not whether (far) future lives count as much as lives closer to the present. One can agree that future lives count equally, and also agree that far future considerations dominate the moral calculation (empirical claims enter the picture here). But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion). So it's possible to agree that future lives count equally and that far future considerations dominate but to still disagree on the importance of x-risk reduction or more particular things such as space colonization.

Replies from: Wei_Dai, lukeprog
comment by Wei Dai (Wei_Dai) · 2013-07-19T06:30:58.764Z · LW(p) · GW(p)

But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).

A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don't understand why you think astronomical population expansion inevitably produces miserable lives.

Also, I note that space colonization can produce an astronomical number of QALYs even assuming no population growth, by letting currently existing people continue to live after all the negentropy in our solar system has been exhausted.

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-08-03T06:55:49.250Z · LW(p) · GW(p)

Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces an astronomical number of most miserable, tortured lives in expectation.

Lots of dystopian future scenarios are possible. Here are some of them.

How many happy people for one miserable existence? - I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, while (anticipated) miserable existence clearly does. I don't think it would have been any intrinsic problem whatsoever had I never been born; but it clearly would have been a problem had I been born into miserable circumstances.

But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive. If the number of happy people you require for one miserable existence is sufficiently great and/or if dystopian scenarios are sufficiently likely, the future will be negative in expectation. Beware optimism bias, illusion of control, etc.

Replies from: CarlShulman, Wei_Dai
comment by CarlShulman · 2013-08-03T07:36:31.517Z · LW(p) · GW(p)

Lots of dystopian future scenarios are possible. Here are some of them. But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive.

Even Brian Tomasik, the author of that page, says that if one trades off pain and pleasure at ordinary rates the expected happiness of the future exceeds the expected suffering, by a factor of between 2 and 50.

Replies from: Brian_Tomasik, Adriano_Mannino
comment by Brian_Tomasik · 2013-08-03T10:05:25.581Z · LW(p) · GW(p)

The 2-50 bounds seem reasonable for EV(happiness)/EV(suffering) using normal people's pleasure-pain exchange ratios, which I think are insane. :) Something like accepting 1 minute of Medieval torture for a few days/weeks of good life.

Using my own pleasure-pain exchange ratio, the future is almost guaranteed to be negative in expectation, although I maintain ~30% chance that it would still be better for Earth-based life to colonize space to prevent counterfactual suffering elsewhere.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-08-03T11:44:17.905Z · LW(p) · GW(p)

It is worth pointing out that by 'insane', Brian just means 'an exchange rate that is very different from the one I happen to endorse.' :-) He admits that there is no reason to favor his own exchange rate over other people's. (By contrast, some of these other people would argue that there are reasons to favor their exchange rates over Brian's.)

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-08-04T02:00:33.321Z · LW(p) · GW(p)

Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2013-08-04T14:44:55.569Z · LW(p) · GW(p)

That's incorrect. David Pearce claims that pains below a certain intensity can't be outweighed by any amount of pleasure. Both Brian and Dave agree that Dave is a (threshold) negative utilitarian whereas Brian is a negative-leaning utilitarian.

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-08-10T15:27:13.395Z · LW(p) · GW(p)

Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he indeed was not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian was oscillating between NLU and NU. He recently told me he found the claim convincing that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.

Replies from: davidpearce
comment by davidpearce · 2014-07-28T07:53:42.281Z · LW(p) · GW(p)

Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU.

Is this too rosy a scenario?

comment by Adriano_Mannino · 2013-08-04T02:07:43.606Z · LW(p) · GW(p)

Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.

Replies from: CarlShulman
comment by CarlShulman · 2013-08-04T03:15:30.791Z · LW(p) · GW(p)

I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours.

Which cases did you have in mind?

People generally don't altruistically favor euthanasia for pets with temporarily painful but easily treatable injuries (where recovery could be followed with extended healthy life). People are not eager to campaign to stop the pain of childbirth at the expense of birth. They don't consider a single instance of torture worse than many deaths depriving people of happy lives. They favor bringing into being the lives of children that will contain some pain.

comment by Wei Dai (Wei_Dai) · 2013-08-03T09:34:24.838Z · LW(p) · GW(p)

Thanks for the explanation. I was thrown by your usage of the word "inevitable" earlier, but I think I understand your position now. (EDIT: Deleted the rest of this comment, which makes a point that you were already discussing with Nick Beckstead.)

comment by lukeprog · 2013-07-17T05:51:02.184Z · LW(p) · GW(p)

Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I've removed the "therefore."

The reason I'd rather not phrase things as "morally urgent to bring new people into existence" is because that phrasing suggests presentist assumptions. I'd rather use a sentence with non-presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and also rejected by me. (It's also rejected by the majority of EAs with whom I've discussed the issue, but that's not actually noteworthy because it's such a biased sample of EAs.)

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-07-19T01:08:48.927Z · LW(p) · GW(p)

Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.

Replies from: lukeprog
comment by lukeprog · 2013-07-19T01:51:40.118Z · LW(p) · GW(p)

I confess I'm not that motivated to tweak the sentence even further, since it seems like a small semantic point, I don't understand the advantages to your phrasing, and I've provided links to more thorough discussions of these issues, for example Beckstead's dissertation. Maybe it would help if you explained what kind of reasoning you are using to identify which claims are "doing the crucial work"? Or we could just let it be.

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-07-19T04:32:31.608Z · LW(p) · GW(p)

Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is most important.

If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.

Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm constantly like: "No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it's morally urgent to fill the universe with people. I doubt it."

Replies from: lukeprog
comment by lukeprog · 2013-07-19T06:18:42.253Z · LW(p) · GW(p)

Okay, so we're talking about two points: (1) whether current people have more value than future people, and (2) whether it would be super-good to create gazillions of super-good lives.

My sentence mentions both of those, in sequence: "Many EAs value future people roughly as much as currently-living people [1], and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future [2]..."

And you are suggesting... what? That I switch the order in which they appear, so that [2] appears before [1], and is thus emphasized? Or that I use your phrase "morally urgent to" instead of "nearly all potential value is found in..."? Or something else?

Replies from: Adriano_Mannino
comment by Adriano_Mannino · 2013-08-03T06:23:33.763Z · LW(p) · GW(p)

Sorry for the delay!

I forgot to clarify the rough argument for why (1) "value future people equally" is much less important or crucial than (2) "fill the universe with people" here.

If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you valued present people more!). It's hardly possible to then block their argument on normative grounds, and criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which would decrease the value of x-risk reduction.

By contrast, if you accept (1), it's still very much an open question whether you'll be on board.

Also, intrinsic time preference is really not an issue among EAs. The idea that spatial and temporal distance are irrelevant when it comes to helping others is a pretty core element of the EA concept. What is an issue, though, is the question of what helping others actually means (or should mean). Who are the relevant others? Persons? Person-moments? Preferences? And how are they relevant? Should we ensure the non-existence of suffering? Or promote ecstasy too? Prevent the existence of unfulfilled preferences? Or create fulfilled ones too? Can you help someone by bringing them into existence? Or only by preventing their miserable existence/unfulfilled preferences? These issues are more controversial than the question of time preference. Unfortunately, they're of astronomical significance.

I don't really know if I'm suggesting any further specific change to the wording - sorry about that. It's tricky... If you're speaking to non-EAs, it's important to emphasize the rejection of time preference. But there shouldn't be a "therefore", which (in my perception) is still implicitly there. And if you're speaking to people who already reject time preference, it's even more important to make it clear that this rejection doesn't imply "fill the universe with people". One solution could be to simply drop the reference to the (IMO non-decisive) rejection of time preference and go for something like: "Many EAs consider the creation of (happy) people valuable and morally urgent, and therefore think that nearly all potential value..."

Beckstead might object that the rejection of heavy time preference is important to his general conclusion (the overwhelming importance of shaping the far future). But if we're talking that level of generality, then the reference to x-risk reduction should probably go or be qualified, for sufficiently negative-leaning EAs (such as Brian Tomasik) believe that x-risk reduction is net negative.

Perhaps the best solution would be to expand the section and start by mentioning how the (EA-uncontroversial) rejection of time preference is relevant to the overwhelming importance of shaping the far future. Once we've established that the far future likely dominates, the question arises how we should morally affect the far future. Depending on this question, very different conclusions can result e.g. with regard to the importance and even the sign of x-risk reduction.

Replies from: lukeprog
comment by lukeprog · 2013-08-03T08:00:34.387Z · LW(p) · GW(p)

I don't want to expand the section, because that makes it stand out more than is compatible with my aims for the post. And since the post is aimed at non-EAs and new EAs, I don't want to drop the point about time preference, as "intrinsic" time-discounting is a common view outside EA, especially for those with a background in economics rather than philosophy. So my preferred solution is to link to a fuller discussion of the issues, which I did (in particular, Beckstead's thesis). Anyway, I appreciate your comments.

comment by homunq · 2013-07-13T17:03:19.970Z · LW(p) · GW(p)

"Thinking quantitatively" is poor shorthand for good rational practice. Of course a rationalist shouldn't neglect quantitative thought; that leads to fuzz. But purely quantitative evaluation is just as bad; it leads to No-Child-Left-Behind-style teaching-to-the-test and, worse, testing-to-the-test (choosing metrics based on reliability over applicability).

I think that there are signs of that in the choice of four areas. It's not just that "effective environmental activism" didn't make the cut; what about politics itself? Rational improvements in political systems are incredibly easy to imagine; approval voting, for instance, is a tiny, simple change compared to plurality voting, yet would eliminate a number of senseless biases in politics. And politics is important; as any evil overlord knows, the goal is to take over the world. But it's very hard to quantify political progress objectively, and easy to get into mind-killing arguments, so it seems the whole issue just gets covered by an ick field for rationalists.

So, the question becomes: do you want to talk about what aspiring effective altruists do do, or what they should do? If it's the former, fine. If it's the latter, I think you have to start from more basic principles.

comment by Michael Wiebe (Macaulay) · 2013-07-09T21:45:47.732Z · LW(p) · GW(p)

In the future, poverty reduction EAs might also focus on economic, political, or research-infrastructure changes that might achieve poverty reduction, global health, and educational improvements more indirectly, as when Chinese economic reforms lifted hundreds of millions out of poverty.

I'd like to see more discussion of economic growth and effective altruism. Something that can lift hundreds of millions of people out of poverty is something that should definitely be investigated. (See also Lant Pritchett's distinction between linear and transformative philanthropy.)

comment by [deleted] · 2015-06-28T06:44:50.972Z · LW(p) · GW(p)

this was an unhelpful comment, removed and replaced by this comment

Belief updating (bayes) underwrites a steady stream of novelty, and thus joy, and therefore, assuming belief updating is real – joy in the merely real is plausible

Instantiate the environment, including self, identify changes plausible for each component, select the object for that change which maximises utility. If that happens to be the AI itself, then it will do that. done, recursively improving AI solved

I think I may be experiencing a psychotic episode :( Sorry for any unusual post...forget it I'll fix this up when I'm back.

Replies from: ChristianKl
comment by ChristianKl · 2015-06-28T11:15:24.874Z · LW(p) · GW(p)

I understand why effective poverty reduction is a focus area, but why effective health improvement more generally?

Because health interventions tend to do a lot for poverty as well. Healthy people can work much better than sick people. Having children in whom society has invested resources die of malaria is bad for the economy. It also leads to women having more children to make sure that some survive.

For instance, expanding immunisation coverage for children is GiveWell's number 1 priority among proven health interventions. However, none of the recommended charities are remotely immunisation programs. Why is that?

Likely because GiveWell thinks that existing institutions already spend enough money on that task, or because GiveWell isn't aware of charities in that area with room for more funding that it could recommend. GiveWell only recommends charities that are transparent enough to have open data about their effectiveness.

comment by lukeprog · 2013-11-24T15:26:51.958Z · LW(p) · GW(p)

Edit: Changed "far future" to "long-term future".

comment by lukeprog · 2013-07-08T05:51:57.900Z · LW(p) · GW(p)

On Facebook, Eliezer suggested an alternate name for practitioners of effective altruism: "Ravenclaw Gryffindors."

Elizabeth Synclair replied: "What? Clearly any effective altruist worth their salt is a Ravenpuff."

(To explain: Hogwarts Houses.)

Replies from: someonewrongonthenet, BrienneYudkowsky, DanielLC, elharo, wedrifid
comment by someonewrongonthenet · 2013-07-09T03:03:09.911Z · LW(p) · GW(p)

Gryffindors are brave, which is useful for fighting oppression...but, you know, terrorism and stuff also requires bravery.

Ravenclaws are curious, which can be used to help people, but it can be used for other things as well.

Slytherins are ambitious, and the story has done enough to illustrate the dual nature of that trait.

Hufflepuffs ... they aren't just one thing. Conscientiousness, Loyalty, and Agreeableness all seem to play a role, but are those things really so strongly correlated? Does this "wholesome" nature finally add up to altruism? Does this altruism extend outside the in-group? I think Rowling was going for some sort of hearty, homespun, down-to-earth archetype there... not sure if it would ever be measurable in a single psychometric variable. If I were writing her story, I'd probably settle on "Loyalty", as in valuing friends and loved ones, with the other two traits just being common behavioral side effects of having this value. In which case, it takes a far-sighted Hufflepuff to extend those feelings of friendship to all intelligent beings... while a near-sighted one might end up at nationalism.

comment by LoganStrohl (BrienneYudkowsky) · 2013-07-18T05:01:16.312Z · LW(p) · GW(p)

Robby Bensinger cleverly expanded upon this, describing the various motivations for effective altruism as "slytherfuzzies", "ravenfuzzies", "gryffinfuzzies", and "hufflefuzzies".

comment by DanielLC · 2013-07-09T06:06:58.886Z · LW(p) · GW(p)

You have to be a Hufflepuff to want to help. You have to be a Ravenclaw to be smart enough to do so. You have to be a Slytherin to realize what you can do. And I guess you have to be a Gryffindor to do something nobody else does.

While we're at it, what elements would you need?

Generosity, obviously.

I don't think the Elements of Harmony can help much beyond that.

Replies from: Osiris
comment by Osiris · 2013-07-11T00:01:00.770Z · LW(p) · GW(p)

Honesty in one's dealings is always important. As a member of ROTLCON staff (brony convention in Colorado), I am often asked difficult questions about helping people through our charity auction. Lying is not an option, if one expects to donate, or to accept donations. Kindness? Given how the show seems to show it off in Fluttershy, I would guess that kindness includes one's understanding and acceptance of other people. Saving a people by destroying something else means knowing exactly what you destroy, and seeing its value--perhaps, the destruction can be avoided. Only one example of kindness as shown in the show, of course. Loyalty--uncertain. Laughter--as a convention, the thing I'm working on is about fun. But, it is also an attempt to throw money at the problem in the best way possible (something we're just figuring out, by the way, so we will be applying the above article and related advice to altruism). So, also uncertain, but there is a connection for me and my fellow con staff.

comment by elharo · 2013-07-08T14:52:59.173Z · LW(p) · GW(p)

Being effective at almost anything can benefit from the virtues of all 4 houses. I.e. Slitherclaw Gryffinpuffs.

comment by wedrifid · 2013-07-08T09:50:29.668Z · LW(p) · GW(p)

Elizabeth Synclair replied: "What? Clearly any effective altruist worth their salt is a Ravenpuff."

We could use a few more Slitherpuffs too.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-07-09T00:56:46.105Z · LW(p) · GW(p)

Grytherplaw, ideally - or whatever the portmanteau of all four houses would be.

Replies from: wedrifid
comment by wedrifid · 2013-07-09T05:28:25.822Z · LW(p) · GW(p)

Grytherplaw, ideally - or whatever the portmanteau of all four houses would be.

Normalpersononmodafinil.

comment by ILikeLogic · 2013-07-21T15:32:34.117Z · LW(p) · GW(p)

I think maybe I'd prefer to maximize my personal satisfaction in my charitable efforts. The knowledge that I may do more good some other way won't substitute for the charitable action that will leave me feeling most satisfied based on my normal human emotions, irrational though they may be.

comment by [deleted] · 2013-07-09T12:48:40.369Z · LW(p) · GW(p)

"EAs care about people equally, regardless of location. Typically, the most cost-effective altruistic cause won't happen to be in one's home country."

So which comes first, caring about everyone equally, or wanting to be effective as an altruist? Many ineffective altruists might feel like propping up African dictators just so that there will be a nice supply of desperately poor people to cheaply "save" over and over again is a bad thing. I guess the effective altruist knows better.

Can I be an ultra-effective altruist if I choose to value every life equally? I can save billions or trillions of bacterial and fungal lives just by not ever cleaning my house! Can you selfish bastards who only care about conscious beings compete with that? The other day my wife wanted to end a huge number of lives with some monistat just because her hoo ha was itchy. But I showed her the UEA way and now the world is an immensely better place! message me if you want to donate.

comment by RyanCarey · 2013-07-08T13:27:50.335Z · LW(p) · GW(p)

Great, Luke!

I think we should include Global Happiness Organisation as an effective altruist charity, one that rides across these four categories.

Replies from: lukeprog
comment by lukeprog · 2013-07-08T19:01:12.489Z · LW(p) · GW(p)

So I clicked through to their page, and the top blog post is about how the GHO has named Carrie Underwood the US Happiness Promoter of the Year, and then goes on to explain that she donates to (what appear to be) particularly inefficient means of producing happiness.

The next post after that is about how GHO has named some magazine editor the Swedish Happiness Promoter of the Year — again, probably not a particularly efficient way to spread happiness.

Looking through the various 'About' pages, I see lots of altruism but not much claim of 'effective.' Can you say more about why you think they should be added to the post above?

Replies from: RyanCarey
comment by RyanCarey · 2013-07-09T04:19:38.586Z · LW(p) · GW(p)

This seems reasonable. I guess they're either not effective, or not providing evidence that they're effective.

Their stated goals are altruistic and consequentialist, with concern for both animals and the distant future. They're operated by utilitarians like Ludwig Lindstrom, James Evans, and Jasper Ostman, supported by Peter Singer; they want cultured meat, and seem to want to apply scientific research and measurement to improving welfare (this is the most promising of their policy proposals). I guess, as you say, they're altruistic only.

These activities plausibly belong in an EA portfolio, so I hope they can lift their game!

(If anyone from GHO can provide further information, this seems to be a suitable time and place.)