Giving What We Can, 80,000 Hours, and Meta-Charity

post by wdmacaskill · 2012-11-15T20:34:54.680Z · LW · GW · Legacy · 184 comments

Disclaimer: I’m somewhat nervous about posting this, for fear of down-voting on my first LW post, given that this post explicitly talks in a positive light about organisations that I have helped to set up. But I think that the topic is of interest to LW-ers, and I’m hoping to start a rational discussion. So here it goes…

Hi all,

Optimal philanthropy is a common discussion topic on LW. It’s also previously been discussed whether ‘meta-charities’ like GiveWell — that is, charities that attempt to move money to other charities, or assess the effectiveness of other charities — might end up themselves being excellent or even optimal giving opportunities.

Partly on the basis of the potentially high cost-effectiveness of meta-charity, I have co-founded two such charities: Giving What We Can and 80,000 Hours. Both are now open to taking donations (info here for GWWC and here for 80k). In what follows I’ll explain why one might think of Giving What We Can or 80,000 Hours as a good giving opportunity. It’s of course very awkward to talk about the reasons in favour of donating to one’s own organization, and the risk of bias is obvious, so I’ll just briefly describe the basic argument, and then leave the rest for discussion. I hope I manage to give an honest picture, rather than just pitching my own favourite idea: we really want to do the most good that we can with marginal resources, so if LW members think that giving to meta-charity in general, or GWWC or 80k in particular, is a bad idea, that’s important for us to know. So please don’t be shy in raising comments, questions, or criticism. If you find yourself being critical, please try to suggest ways in which GWWC or 80k could either change its activities or provide more information such that your criticisms would be addressed.

What is Giving What We Can?

Giving What We Can encourages people to give more, and to give more effectively, to causes that fight poverty in the developing world. It invites people to become members of the organisation and pledge to give at least 10% of their income to the charities that best fight extreme poverty, and it provides information on its website about how to give as cost-effectively as possible.

What is 80,000 Hours?

80,000 Hours provides evidence-based advice on careers aiming to make a difference, through its website and through one-on-one advice sessions. It encourages people to use their careers in an effective way to make the world a significantly better place, and aims to help its members to be more successful in their chosen careers. It provides a community and network for those convinced by its ideas.

What are the main differences between the two?

The primary difference is that 80,000 Hours focuses on how you should spend your time (especially which career you should choose), whereas Giving What We Can focuses on how you should spend your money. In addition, Giving What We Can is focused on global poverty, whereas 80,000 Hours is open to any plausibly high-impact cause.

Why should I give to either?

The basic idea is that each of the organisations generates a multiplier on one’s donations. By giving $1 to Giving What We Can to fundraise for the best global poverty charities, one ultimately moves significantly more than $1 to the best global poverty charities.  By giving $1 to 80,000 Hours to improve the effectiveness of students’ career paths, one ultimately moves significantly more than $1’s worth of human and financial resources to a range of high-impact causes, including global poverty, animal welfare improvement, and existential risk mitigation.

How are you testing this?

Last March we did an impact assessment for Giving What We Can. Some more info is available here, and I can provide much more information, including the calculations, on request. As of last March, we'd invested $170,000's worth of volunteer time into Giving What We Can, had moved $1.7 million to GiveWell or GWWC top-recommended development charities, and had raised a further $68 million in pledged donations. Taking into account that some proportion of this would have been given anyway, that there will be some member attrition, and that not all donations will go to the very best charities (using data on these factors where possible), we estimate that we had raised $8 in realised donations and $130 in future donations for every $1's worth of volunteer time invested in Giving What We Can. We will continue with such impact assessments, most likely on an annual basis.
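To make that arithmetic explicit, here is a minimal sketch using only the figures quoted above; the "implied adjustment" factors are simply back-solved from the reported multipliers for illustration, and are not the actual adjustments used in the assessment:

```python
# Back-of-the-envelope check of the GWWC figures quoted above.
# All inputs are the numbers stated in this post; the "implied adjustment"
# is back-solved from the reported multipliers, not GWWC's actual model.

volunteer_time = 170_000      # $ worth of volunteer time invested (as of last March)
realised_moved = 1_700_000    # $ moved to top-recommended development charities
pledged_future = 68_000_000   # $ raised in further pledged donations

raw_realised_multiplier = realised_moved / volunteer_time   # 10.0
raw_pledged_multiplier = pledged_future / volunteer_time    # 400.0

reported_realised = 8    # post's estimate: $ realised per $1 of volunteer time
reported_future = 130    # post's estimate: $ of future donations per $1 of volunteer time

# Combined discount implied by counterfactual giving, member attrition,
# and donations not going to the very best charities:
implied_discount_realised = reported_realised / raw_realised_multiplier  # 0.8
implied_discount_future = reported_future / raw_pledged_multiplier       # ~0.33

print(raw_realised_multiplier, raw_pledged_multiplier)
print(implied_discount_realised, round(implied_discount_future, 3))
```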

We have less data available for 80,000 Hours, but if anything things seem even more promising. A preliminary investigation (data from 26 members, last May) suggested that the average member was pledging $1mn; 34% were planning to donate to existential risk mitigation and 61% to global poverty reduction. Member recruitment currently stands at roughly one per day, and 25% of our members state that their career has been 'significantly changed' by 80,000 Hours. A little more information is available here.
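As a rough illustration of what those survey figures imply, here is a minimal sketch; it assumes every member pledges the average amount, so that the split of money mirrors the split of members, which the survey itself does not establish:

```python
# Rough illustration of the preliminary 80,000 Hours survey figures above.
# Assumption (not established by the survey): each member pledges the average
# amount, so the split of money mirrors the split of members.

members_surveyed = 26
average_pledge = 1_000_000   # $1mn average pledge per member
share_x_risk = 0.34          # fraction planning to donate to x-risk
share_poverty = 0.61         # fraction planning to donate to global poverty

total_pledged = members_surveyed * average_pledge   # $26,000,000
implied_x_risk = total_pledged * share_x_risk       # ~$8,840,000
implied_poverty = total_pledged * share_poverty     # ~$15,860,000

print(f"Total pledged: ${total_pledged:,}")
print(f"Implied x-risk pledges: ${implied_x_risk:,.0f}")
print(f"Implied poverty pledges: ${implied_poverty:,.0f}")
```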

Why might I be unconvinced?

Here are a few considerations that I think are important (and of course that’s not to say there aren’t others).

First, the whole idea of meta-charity is new, and therefore not as robustly tested as other activities. Even if you find the idea of meta-charity compelling, you could plausibly reason that most compelling arguments for new and optimistic conclusions have been false in the past, and so on inductive grounds treat this one with suspicion.

Second, you might have a very high discount rate. Giving $1 to either GWWC or 80k generates benefits in the future. So working out its cost-effectiveness involves an estimate of how one should value future donations versus donations now. That’s a tricky question to answer, and if you have a high enough discount rate, then the investment won’t be worth it.

Third, you might just think that other organisations are better. You might think that other organisations are better at resource-generation (even if that’s not their declared aim). Or you might think that it’s better just to focus on more direct means of making an impact.

Finally, you might just have a prior against the idea that one can get a significant multiplier on one’s donations to top charities. (One might ask: if the idea of meta-charity is so good, why don’t many more meta-charities exist than currently do?) So you might need to see a lot more hard data (perhaps verified by independent sources) before being convinced.

184 comments

Comments sorted by top scores.

comment by katydee · 2012-11-11T04:46:00.622Z · LW(p) · GW(p)

I am skeptical of 80,000 hours and the general concept of "earning to give" because I suspect very few people will actually be able to execute this correctly. What tracking programs (if any) do you have to ensure that people actually follow up on their plans?

That being said, your cause seems a noble one and I wish you well.

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T17:20:20.112Z · LW(p) · GW(p)

Thanks for this; it's a common response to earning to give. However, we already have a number of success stories: people who have started their EtG jobs and are loving them.

It's rare for someone who had their heart set on a particular career, such as charity work, to completely change their plans and begin EtG. Much more common is someone thinking "I really want to do [lucrative career X], but I should do something more ethical", or "I'm undecided between lucrative career X and other careers Y and Z; all look like good options." It's much easier to convince these people.

We certainly want to track behaviour. We will have an annual survey of members, to find out what they are doing, and how much they are giving, and so on. If someone really isn't complying with the spirit of 80k, or with their stated goals, then we'll ask them to leave.

Replies from: katydee
comment by katydee · 2012-11-11T19:42:34.979Z · LW(p) · GW(p)

I'm not surprised that people are doing this now, but I will be surprised if most of them are still doing it in five years, much less in the actual long term.

That being said, if the organization can maintain recruitment of new people, a lot of good will still be done even under this assumption.

comment by Gedusa · 2012-11-10T00:22:36.443Z · LW(p) · GW(p)

Possible consideration: meta-charities like GWWC and 80k cause donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about was small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).

A bigger possible problem would be if I took considerations like the poor meat eater problem to carry weight. In that case, donating to e.g. 80k would cause a lot of harm even though it would move a lot of money to animal welfare charities, because it would cause so much to go to poverty relief, which I might think was a bad thing. It seems like there are probably a few other situations like this around.

Do you have figures on what the return to donation (or volunteer time) is for 80,000 hours? i.e. is it similar to GWWC's $138 of donations per $1 of time invested? It would be helpful to know so I could calculate how much I would expect to go to the various causes.

Replies from: wdmacaskill, Viliam_Bur, MTGandP, Giles, juliawise
comment by wdmacaskill · 2012-11-10T23:33:13.181Z · LW(p) · GW(p)

Hey,

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.

We did a little bit of impact assessment for 80k (again, with a sample of 26 members). When we did, the estimates were even more optimistic than for GWWC. But we'd like to get a firmer data set before going public with any numbers.

Though I was deeply troubled by the poor meat eater problem for some time, I've come to the conclusion that it isn't that bad (for utilitarians; I think it's much worse for non-consequentialists, though I'm not sure).

The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming).

So let's say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be a bit more than N: maybe -10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation.)
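Putting those purely illustrative magnitudes together (the numbers below are the placeholders from the paragraph above, not estimates):

```python
# Purely illustrative: the placeholder magnitudes from the paragraph above,
# where N is the direct benefit to the person whose life is saved.

N = 1.0
direct_benefit = N                 # benefit to the individual saved
animal_harm = -10 * N              # extra factory farming (stops once farming ends)
compounding_benefit = 1000 * N     # slightly faster technological progress, compounding

net_value = direct_benefit + animal_harm + compounding_benefit
print(net_value)   # 991.0: on these made-up numbers, saving the life is still net positive
```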

Replies from: John_Maxwell_IV, MTGandP
comment by John_Maxwell (John_Maxwell_IV) · 2012-11-11T01:53:30.776Z · LW(p) · GW(p)

Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar? Seems like you might well be better off loaning money on kiva.org or some completely different thing. (Edit: Jonah Sinick points me to 1, 2, 3, 4 regarding microfinance.)

Some thoughts from Robin Hanson on how speeding technological progress may affect existential risks: http://www.overcomingbias.com/2009/12/tiptoe-or-dash-to-future.html. I'd really like to see more analysis of this.

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T17:02:26.259Z · LW(p) · GW(p)

It would be good to have more analysis of this.

Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar?

The answer is that I don't know. Perhaps it's better to fund technology directly. But the benefit:cost ratio tends to be incredibly high for the best developing world interventions. So the best developing world health interventions would at least be contenders. In the discussion above, though, preventing malaria doesn't need to be the most cost-effective way of speeding up technological progress. The point was only that that benefit outweighs the harm done by increasing the amount of farming.

comment by MTGandP · 2012-11-11T00:32:22.792Z · LW(p) · GW(p)

The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming).

This is purely speculative. You have not presented any evidence that (a) the compounding effects of donating money to alleviate poverty outweigh the direct effects, or that (b) this does not create enough animal suffering to outweigh the benefits. And it still ignores the fact that animal welfare charities are orders of magnitude more efficient than human charities.

The magnitude of the harm from increasing factory farming might be a bit more than N: maybe -10N.

It's almost certainly more like -10,000N. One can determine this number by looking at the suffering caused by eating different animal products as well as the number of animals eaten in a lifetime (~21000).

Replies from: wdmacaskill, bryjnar
comment by wdmacaskill · 2012-11-11T16:58:48.807Z · LW(p) · GW(p)

On (a). The argument for this is based on the first half of Bostrom's Astronomical Waste. In saving someone's life (or making some other good economic investment), you move technological progress forward by a tiny amount. The benefit you produce is the difference you make at the end of civilisation, when there's much more at stake than there is now.

It's almost certainly more like -10,000N

I'd be cautious about making claims like this. We're dealing with tricky issues, so I wouldn't claim to be almost certain about anything in this area. The numbers I used in the above post were intended to be purely illustrative, and I apologise if they came across as being more definite than that.

Why might I worry about the -10,000N figure? Well, first, the number you reference is the number of animals eaten in a lifetime by an American, and Americans are the greatest per capita meat consumers in the world. I presume that the number is considerably smaller for those in developing countries, where there is also considerably less reliance on factory farming.

Even assuming we were talking about American lives, is the suffering that an American causes 10,000 times as great as the happiness of their life? Let's try a back-of-the-envelope calculation. Let's accept that 21,000 figure. I can't access the original source, but some other digging suggests that this breaks down into roughly 17,000 shellfish, 1,700 other fish, and 2,147 chickens, with the rest constituting a much smaller number. I'm really not sure how to factor in shellfish and other fish: I don't know if they have lives worth living or not, and I presume that most of these are farmed, so wouldn't have existed were it not for farming practices. At any rate, from what I know I suspect that factory farmed chickens are likely to dominate the calculation (but I'm not certain), so let's focus on the chickens.

The average factory farmed chicken lives for 6 weeks, so that's roughly 252 factory farmed chicken-years per American lifetime. Assuming the average American lives for 70 years, one American life-year produces about 3.6 factory farmed chicken-years. What should our trade-off be between producing factory farmed chicken-years and American human-years? Perhaps the life of the chicken is 10x as bad as the American life is good (that seems a high estimate to me, but I really don't know): in which case we should be willing to shorten an American's life by 10 years in order to prevent one factory farmed chicken-year. That would mean that, if we call one American life a good of unit 1, the American's meat consumption produces about -36 units of value.

In order to get this estimate up to -10,000 units of value, we'd need to multiply that trade-off by about 277: we should be indifferent between producing 2,770 years of American life and preventing the existence of 1 factory farmed chicken-year (that is, we should be happy letting roughly 40 vegan American children die in order to prevent 1 factory farmed chicken-year). That number seems too high to me; if you agree, perhaps you think that fish or shellfish suffering is the dominant consideration. Or you might bring in non-consequentialist considerations; as I said above, I think that the meat eater problem is likely more troubling for non-consequentialists.
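For concreteness, here is the back-of-the-envelope calculation in the two paragraphs above as a short sketch; every input is a rough figure stated there, not a measured value:

```python
# The back-of-the-envelope calculation from the two paragraphs above.
# Every input is a rough figure stated there (illustrative, not measured).

chickens_per_lifetime = 2147      # factory-farmed chickens per American lifetime
chicken_lifespan_years = 6 / 52   # ~6 weeks per chicken
human_lifespan_years = 70
badness_ratio = 10                # 1 chicken-year assumed 10x as bad as 1 human-year is good

chicken_years = chickens_per_lifetime * chicken_lifespan_years        # ~248 (rounded above to ~252)
chicken_years_per_human_year = chicken_years / human_lifespan_years   # ~3.6

# Harm in units where one American life (70 good years) = 1:
harm_units = -(chicken_years * badness_ratio) / human_lifespan_years  # ~ -35 (above: -36)

# Trade-off needed to reach -10,000 units instead, in human-years per chicken-year:
required_tradeoff = 10_000 * human_lifespan_years / chicken_years     # ~2,800 (above: 2,770)

print(round(chicken_years), round(harm_units), round(required_tradeoff))
```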

At any rate, this is somewhat of a digression. If one thought that meat eater worries were strong enough that donating to GWWC or 80k was a net harm, I would think that a reasonable view (and one could give further arguments in favour of it that we haven't discussed), though it's not my own view, for the reasons I've outlined. We knew that something animal-welfare-focused had been missing from CEA for too long, and for that reason set up Effective Animal Activism: currently a sub-project of 80k, but able to accept restricted donations and, as it grows, likely to become an organisation in its own right. So if one thinks that animal welfare charities are likely to be the most cost-effective charities, and one finds the meta-charity argument plausible, then one might consider giving to EAA.

Replies from: MTGandP
comment by MTGandP · 2012-11-11T18:43:36.848Z · LW(p) · GW(p)

I think that calculation makes sense and the -36 number looks about right. I had actually done a similar calculation a while ago and came up with a similar number. I suppose my guess of -10,000 was too hasty.

It may actually be a good deal higher than 36 depending on how much suffering fish and shellfish go through. This is harder to say because I don't understand the conditions in fish farms nearly as well as chicken farms.

comment by bryjnar · 2012-11-11T00:38:56.865Z · LW(p) · GW(p)

It's almost certainly more like -10,000N. One can determine this number by looking at the suffering caused by eating different animal products as well as the number of animals eaten in a lifetime (~21000).

I think Will is assuming that animal suffering has a fairly low moral weight compared to human suffering. Obviously, considerations like this scale directly depending on how you weight that. But I think most people would agree that animal suffering is worth less than human suffering, it's just a question of whether the multiplier is 1/10, 1/100, 0, or what.

Replies from: army1987, Pablo_Stafforini, MTGandP
comment by A1987dM (army1987) · 2012-11-11T09:45:15.670Z · LW(p) · GW(p)

Even if one assigned exactly zero terminal value to non-sapient beings (as IIRC EY does), it takes a hell of a lot more resources to grow 2000 kcal's worth of lamb than to grow 2000 kcal's worth of soy, and if everyone wanted to live on the diet of an average present-day American I don't think the planet could handle that; so until we find a way to cheaply grow meat in a lab or terraform other planets, eating meat amounts to defecting in an N-player Prisoner's Dilemma. (But the conclusion “...and therefore we should let people born from the wrong vagina die from malaria so they won't eat meat” doesn't feel right to me.)

(EDIT: I'm not fully vegetarian myself, though like the author of the linked post I eat less meat than usual and try to throw away as little food as possible.)

(Edited to remove the mention of the Tragedy of Commons -- turns out I was using that term in a non-standard way.)

Replies from: Larks
comment by Larks · 2012-11-11T17:41:04.598Z · LW(p) · GW(p)

It's not the tragedy of the commons because farms are privately owned. There might be some aspects like that (e.g. climate change) but "resources used" is in general a problem whose costs are fully internalised and can thus be dealt with by the price system.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-11T21:02:38.620Z · LW(p) · GW(p)

I don't know much economics so I might be talking through my ass, but doesn't consuming more meat cause the price of meat to increase if the cost of producing meat stays constant, incentivizing farmers to produce more meat? (The extreme example is that if nobody ate meat nobody would produce meat as they would have no-one to sell it to, and if everybody only ate meat nobody would grow grains for human consumption.) And what about government subsidies?

Replies from: Larks
comment by Larks · 2012-11-11T21:37:00.560Z · LW(p) · GW(p)

Yes, the price would go up until no-one else wanted to eat meat. No extra planets required, and no market failure.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-11T21:50:05.107Z · LW(p) · GW(p)

Still trying to wrap my head around this... [Off to read Introduction to Economic Analysis by R. Preston McAfee. Be back later.]

Replies from: Larks
comment by Larks · 2012-11-11T22:13:50.200Z · LW(p) · GW(p)

Tragedies of the commons only occur when the costs of your decisions are not borne by you. But that's not the case here: buying more meat means you have to pay more, compensating the farmer for the increased use of his resources.

Yes, you slightly increase the cost of meat to everyone else. You also slightly reduce the price of the other things you would otherwise have spent your money on. But it is precisely this price-raising effect that prevents us from accidentally needing three earths: long before that, the price would have risen sufficiently high that no-one else would want to eat meat. This is the market system working exactly as it should.

If it were the case that meat farming caused unusually large amounts of pollution, there might be a tragedy of the commons scenario. But it would have nothing to do with the amount of resources required to make the meat.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-12T11:06:06.834Z · LW(p) · GW(p)

The idea that eating stuff that requires 100 units of energy to be grown when I could easily live on stuff that requires 1 unit of energy instead is totally unproblematic so long as I pay for it still sounds very counter-intuitive to me. I think I have an idea of what's going on, but I'm going to finish that introductory economics textbook first because I might be badly out of whack.

Replies from: Larks
comment by Larks · 2012-11-13T09:45:47.837Z · LW(p) · GW(p)

It's problematic only to the extent that you could otherwise have spent the money on even more useful things.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-13T11:12:56.033Z · LW(p) · GW(p)

30,000 kcal's worth of soy arguably is more useful than 100 kcal's worth of lamb. That's my point.

Replies from: Strange7, Larks
comment by Strange7 · 2012-11-14T00:18:08.605Z · LW(p) · GW(p)

The grain has a higher mass and lower value-density, so you're going to have a harder time shipping it long distances at a worthwhile price.

Replies from: gwern, army1987
comment by gwern · 2012-11-14T00:26:55.711Z · LW(p) · GW(p)

You'll also need to pay for either the live lamb to be shipped (very troublesome) or for refrigerated lamb cuts in smaller refrigerator cars which is both more expensive than a big metal bucket for grain and also much more time-sensitive and perishable (arranging continued power for refrigeration). I'm not sure how the transportation costs would net out.

comment by A1987dM (army1987) · 2012-11-14T10:37:12.443Z · LW(p) · GW(p)

Does that outweigh the two orders of magnitude difference (according to the numbers given in the blog post linked to in the ancestor) in the energy cost of growing them? There are likely foodstuffs more energy-dense than grains but nowhere near as energy-expensive as meat. (Well, there's seed oil, but I don't think one could have a reasonably balanced diet getting most of the calories from there, so that doesn't count.)

Replies from: Strange7
comment by Strange7 · 2012-11-15T00:58:56.555Z · LW(p) · GW(p)

Given that meat is being produced and shipped around on a commercial scale, I'd say some people value meat more than enough to outweigh the increased cost of production, yes. Consider that there are factors other than energy in food quality, such as amino acid ratios.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-15T11:36:04.008Z · LW(p) · GW(p)

Given that meat is being produced and shipped around on a commercial scale,

ISTM that meat is usually produced relatively near where it's sold, probably because of what Gwern says.

I'd say some people value meat more than enough to outweigh the increased cost of production, yes.

That's not what I meant to ask. You said something about grains costing more energy for shipment per unit food energy value which as far as I could tell had nothing to do with how much people valued stuff. What I meant to ask was whether you think that, counting both production and shipment, meat costs less energy per unit food energy value than grains, because that's what your comment seemed to imply. (And while I'm not sure what you were using “some people value X” as an argument for, keep in mind that some people are willing to spend tens or sometimes even hundreds of dollars for a ticket to a football match -- not to mention stuff like heroin.)

Consider that there are factors other than energy in food quality,

I think those are vastly overrated -- for almost any ‘reasonable’ diet composition, they are second-order effects at best. They certainly don't outweigh two orders of magnitude between food energy values. (Of course, people like to advertise their cheese as only containing 10% of fat without telling you the total food energy value of 100 grams of the stuff, so this point is rarely emphasized.) I'm going to add links to earlier comments of mine where I talk about this, when I find them.

such as amino acid ratios.

It is possible to get quite decent amino acid ratios from a vegetarian diet, or even from a vegan diet (though it's harder). (This is probably one of the reasons why I picked soy rather than oats as an example, even though the latter has an even higher food energy value per unit energy cost.)

comment by Larks · 2012-11-13T17:48:18.993Z · LW(p) · GW(p)

If you'd rather have lots of soy, why did you buy the lamb? Economics can't save you from making irrational decisions.

You might say that you prefer the lamb but poor people would prefer the lamb, and society is biased in favour of poor people. But then this is a distribution of initial wealth problem, as all efficient outcomes can be achieved by a competitive equilibrium - not a tragedy of the commons problem at all.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-13T21:20:03.736Z · LW(p) · GW(p)

You might say that you prefer the lamb but poor people would prefer the lamb, and society is biased in favour of poor people.

Er... The first instance of "lamb" was supposed to be "soy" and both instances of "poor" were supposed to be "rich", right?

Replies from: Larks
comment by Larks · 2012-11-14T00:07:31.765Z · LW(p) · GW(p)

Eeek. Slightly clearer:

You prefer the resources be spent on lamb for you to eat, but poor people prefer that you bought soy because then there'd be leftover resources to be spent on soy for them. Also, your welfare calculations are generally biased in favour of poor people because of diminishing returns to money.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-14T10:27:19.033Z · LW(p) · GW(p)

You prefer the resources be spent on lamb for you to eat, but poor people prefer that you bought soy because then there'd be leftover resources to be spent on soy for them.

Yes (provided that's a generic “you”). If you wouldn't call that a tragedy of commons, then the two of us are just using the term with two slightly different meanings.

Replies from: ChristianKl, Larks
comment by ChristianKl · 2012-11-16T13:11:03.658Z · LW(p) · GW(p)

Different people in the Western world spend different amounts of their resources on buying food. If I spend 150€ instead of 300€ on buying food, the food industry has fewer resources to produce food. I don't automatically donate the saved 150€ to buying food for people in the third world.

The EU produces so much food that it deliberately throws food away to raise food prices. Simply shipping surplus food to Africa had the problem of wrecking their food markets. It also produces transportation costs. As a result we do ship some of the surplus food to Africa and simply throw away other food.

Soy is cheaper than meat. When you propose that people buy soy instead of meat, you propose to defund the agricultural sector. If the EU wanted to produce more food than it does currently, it could move more economic resources into the agricultural sector.

comment by Larks · 2012-11-14T10:43:43.658Z · LW(p) · GW(p)

It only applies to shared resources.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-14T10:49:54.410Z · LW(p) · GW(p)

Yes, that's the original meaning. I was using it in the generalized sense of ‘N-player prisoner's dilemma where N is large’, which I think I've seen before on LW.

comment by Pablo (Pablo_Stafforini) · 2012-11-11T05:36:18.612Z · LW(p) · GW(p)

I think Will is assuming that animal suffering has a fairly low moral weight compared to human suffering.

I don't think Will is making any such assumption. His argument does not rely on any moral claim about the relative importance of human versus non-human forms of suffering, but instead rests on an empirical claim about the indirect effects that saving a human life has on present non-human animals, on the one hand, and on future sentient beings, on the other. He acknowledges that the benefit to the person whom we save might be outweighed by the harm done to the animals this person will consume. But he adds that saving this life will also speed up technological progress, and as a consequence increase the number of future posthuman life-years to a much greater degree than it increases the expected number of future animal life-years. As he writes, "whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming)."

Of course, someone like Brian Tomasik might counter that, by increasing present meat consumption, we are contributing to the spread of "speciesist" memes. Such memes, by influencing future decision-makers with the power to do astronomical amounts of evil, might actually have negative effects that last indefinitely.

Replies from: wdmacaskill, bryjnar
comment by wdmacaskill · 2012-11-11T17:03:40.010Z · LW(p) · GW(p)

Thanks benthamite, I think everything you said above was accurate.

comment by bryjnar · 2012-11-11T06:34:31.145Z · LW(p) · GW(p)

I was only addressing the point I directly quoted, where MTGandP was questioning the multiplicative factor that Will suggested. I was merely pointing out why that might look low!

I agree that the argument is still pretty much in force even if you put animals pretty much on parity.

comment by MTGandP · 2012-11-11T03:53:50.557Z · LW(p) · GW(p)

I think most people give way too small a multiplier to the weight of animal suffering. A non-human animal may not be able to suffer in all the same ways that a human can, but it is still sufficiently conscious that its experiences in a factory farm are probably comparable to what a human's experiences would be in the same situation.

Replies from: PeterisP
comment by PeterisP · 2012-11-26T23:33:35.992Z · LW(p) · GW(p)

What would be objective grounds for such a multiplier? Not all suffering is valued equally. Excluding self-suffering (which is subjectively very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.

So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between multipliers for various beings: if the average Homo sapiens is evaluated with a coefficient of 1, then some people (like your close relatives or friends) would be >1, and some would be <1. Animals (to me) would clearly be <1, as illustrated by a simple dilemma: if I had to choose to kill a cow to save a random man, or to kill a random man to save a cow, I'd favor the man in all cases without much hesitation.

So an important question is: what would be a reasonable basis to quantitatively compare a human life versus (as an example) cow lives? One-to-ten? One-to-a-thousand? One-to-all-the-cows-in-the-world? Frankly, I've got no idea. I've given it some thought, but I can't imagine how to get to an order-of-magnitude estimate that would feel reasonable to me.

Replies from: MTGandP, MugaSofer
comment by MTGandP · 2012-11-27T01:08:47.582Z · LW(p) · GW(p)

I wouldn't try to estimate the value of a particular species' suffering by intuition. Intuition is, in a lot of situations, a pretty bad moral compass. Instead, I would start from the simple assumption that if two beings suffer equally, their suffering is equally significant. I don't know how to back up this claim other than this: if two beings experience some unpleasant feeling in exactly the same way, it is unfair to say that one of their experiences carries more moral weight than the other.

Then all we have to do is determine how much different beings suffer. We can't know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain. Hence, the physical pain that a chicken feels is roughly comparable to the pain that a human feels. It should be possible to use neuroscience to provide a more precise comparison, but I don't know enough about that to say more.

Top animal-welfare charities such as The Humane League probably prevent about 100 days of suffering per dollar. The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.

As a side note, you mentioned comparing the value of a cow versus a human. I don't think this is a very useful comparison to make. A better comparison is the suffering of a cow versus a human. A life's value depends on how much happiness and suffering it contains.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-27T01:23:01.488Z · LW(p) · GW(p)

A life's value depends on how much happiness and suffering it contains.

I personally treat lives as valuable in and of themselves. It's why I don't kill sad people, I try to make them happier.

The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.

Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.

EDIT:

Then all we have to do is determine how much different beings suffer. We can't know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain.

Do you also support tiling the universe with orgasmium? Genuinely curious.

Replies from: MTGandP
comment by MTGandP · 2012-11-27T03:16:59.390Z · LW(p) · GW(p)

I personally treat lives as valuable in and of themselves.

Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?

It's why I don't kill sad people, I try to make them happier.

Killing people makes them have 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit because killing someone has a lot of side effects.) Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.

Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.

I don't care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that's where the evidence seems to point. What most people think about it does not come into the equation.

Do you also support tiling the universe with orgasmium?

Probably. I'm reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it's a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world's best thinkers to decide if it's worth doing. But right now I'm inclined to say that it is.

[1] Here I'm talking about animals like pigs and chickens, not animals like sea sponges.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-27T03:35:25.088Z · LW(p) · GW(p)

I personally treat lives as valuable in and of themselves.

Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?

I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

It's why I don't kill sad people, I try to make them happier.

Killing people makes them have 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit because killing someone has a lot of side effects.) Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.

Oh, yes. Nevertheless, even if it would increase net happiness, I don't kill people. Not for the sake of happiness alone and all that.

Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.

I don't care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that's where the evidence seems to point. What most people think about it does not come into the equation.

The same way, sure. But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Do you also support tiling the universe with orgasmium?

Probably. I'm reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it's a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world's best thinkers to decide if it's worth doing. But right now I'm inclined to say that it is.

Have you read "Not for the Sake of Happiness (Alone)"? Human values are complicated.

Replies from: MTGandP
comment by MTGandP · 2012-11-27T04:56:13.441Z · LW(p) · GW(p)

I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

  1. I was asking questions to try to better understand where you're coming from. Do you mean the questions were confusing?

  2. Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Why not? Do you have a good reason, or are you just going off of intuition?

Have you read "Not for the Sake of Happiness (Alone)"?

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

I'm inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I'm fairly uncertain about this and I don't have much evidence.

Replies from: nshepperd, MugaSofer
comment by nshepperd · 2012-11-27T06:22:38.760Z · LW(p) · GW(p)

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

You can probably think of a happiness-based justification for any value someone throws at you. But that's probably only because you're coming from the privileged position of being a human who already knows those values are good, and hence wants to find a reason happiness justifies them. I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.

Replies from: MTGandP, MugaSofer
comment by MTGandP · 2012-11-28T06:40:02.574Z · LW(p) · GW(p)

It's hard for me to say, because this sort of introspection is difficult, but I believe that I generally reject values when I find that they don't promote happiness.

You can probably think of a happiness-based justification for any value someone throws at you.

But some justifications are legitimate and some are rationalizations. With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot. It's not like I came up with some ad hoc justification for why they maybe provide a little bit of happiness. It's like discovery is responsible for almost all of the increases in quality of life that have taken place over the past several thousand years.

I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.

I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don't.

Replies from: nshepperd
comment by nshepperd · 2012-11-28T08:35:34.325Z · LW(p) · GW(p)

With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot.

The point is that's not sufficient. It's like saying "all good is complexity, because for example a mother's love for her child is really complex". Yes, it's complex compared to some boring things, like carving identical chair legs out of wood over and over for eternity, but compared to, say, tiling the universe with the digits of Chaitin's omega, it's nothing. And tiling the universe with Chaitin's omega would be a very boring and stupid thing to do.

You need to show that the value in question is the best way of generating happiness. Not just that it results in more than the status quo. It has to generate more happiness than, say, putting everyone on heroin forever. Because otherwise someone who really cared about happiness would just do that.

I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don't.

And the other point is that values aren't supposed to do a job. They're meant to describe what job you would like done! If you care about something that doesn't increase happiness, then self-modifying to lose that so as to make more happiness would be a mistake.

Replies from: MTGandP
comment by MTGandP · 2012-11-29T00:05:04.250Z · LW(p) · GW(p)

You need to show that the value in question is the best way of generating happiness.

You're absolutely correct. Discovery may not always be the best way of generating happiness; and if it's not, you should do something else.

And the other point is that values aren't supposed to do a job.

Not all values are terminal values. Some people value coffee because it wakes them up; they don't value coffee in itself. If they discover that coffee in fact doesn't wake them up, they should stop valuing coffee.

With the examples of discovery and creativity, I think it's obvious that they increase happiness by a lot.

The point is that's not sufficient.

What is sufficient is demonstrating that if discovery does not promote happiness then it is not valuable. As I explained in my sorting sand example, discovery that does not in any way promote happiness is not worthwhile.

comment by MugaSofer · 2012-11-27T06:45:32.856Z · LW(p) · GW(p)

Well, orgasmium, for a start.

comment by MugaSofer · 2012-11-27T05:33:13.896Z · LW(p) · GW(p)

must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.

I was asking questions to try to better understand where you're coming from. Do you mean the questions were confusing?

No, I mean I am unsure as to what my CEV would answer.

Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

Because I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But introspection suggests I don't value it as much depending on how conscious they are (probably the same as intelligence.)

Why not? Do you have a good reason, or are you just going off of intuition?

... both?

Have you read "Not for the Sake of Happiness (Alone)"?

Yes, I've read it. I'm not entirely convinced that all values reduce to happiness, but I've never seen any value that can't be reduced to happiness. That's one of the areas in ethics where I'm the most uncertain. In practice, it doesn't come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.

Fair enough. Unfortunately, the area of ethics where I'm the most uncertain is weighting creatures with different intelligence levels.

Things like discovery and creativity seem like good examples of preferences that don't reduce to happiness IIRC, although it's been a while since I thought everything reduced to happiness so I don't recall very well.

I'm inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I'm fairly uncertain about this and I don't have much evidence.

Not sure what this means.

Replies from: MTGandP
comment by MTGandP · 2012-11-27T05:59:20.504Z · LW(p) · GW(p)

Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?

Because I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But why is intelligence important? I don't see its connection to morality. I know it's commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.

If intelligence is morally significant, then it's not really that bad to torture a mentally handicapped person.

I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.

... both?

So then what is your good reason that's not directly based on intuition?

Things like discovery and creativity seem like good examples of preferences that don't reduce to happiness IIRC, although it's been a while since I thought everything reduced to happiness so I don't recall very well.

Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn't make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.

Not sure what this means.

You mentioned CEV in your previous comment, so I assume you're familiar with it. I mean that I think if you took people's coherent extrapolated volitions, they would exclusively value happiness.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-27T06:43:35.360Z · LW(p) · GW(p)

I'll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.

But why is intelligence important? I don't see its connection to morality. I know it's commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.

Well, why is pain important? I suspect empathy is mixed up here somewhere, but honestly, it doesn't feel like it reduces - bugs just are worth less. Besides, where do you draw the line if you lack a sliding scale - I assume you don't care about rocks, or sponges, or germs.

If intelligence is morally significant, then it's not really that bad to torture a mentally handicapped person.

Well ... not as bad as torturing, say, Bob, the Entirely Average Person, no. But it's risky to distinguish between humans like this because it lets in all sorts of nasty biases, so I try not to, except in exceptional cases.

I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.

I know you do. Of course, unless they're really handicapped, most animals are still much lower; and, of course, there's the worry that the intelligence is there and they just can't express it in everyday life (idiot savants and so on).

So then what is your good reason that's not directly based on intuition?

Well, it's morality; it does ultimately come down to intuition no matter what. I can come up with all sorts of reasons, but remember that they aren't my true rejection: my true rejection is the mental image of killing a man to save some cockroaches.

Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn't make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.

And yet, a world without them sounds bleak and lacking in utility.

You mentioned CEV in your previous comment, so I assume you're familiar with it. I mean that I think if you took people's coherent extrapolated volitions, they would exclusively value happiness

Oh, right.

Ah ... not sure what I can say to convince you if NFTSOH(A) didn't.

Replies from: MTGandP
comment by MTGandP · 2012-11-28T06:31:26.268Z · LW(p) · GW(p)

Well, why is pain important?

It's really abstract and difficult to explain, so I probably won't do a very good job. Peter Singer explains it pretty well in "All Animals Are Equal." Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.

Besides, where do you draw the line if you lack a sliding scale - I assume you don't care about rocks, or sponges, or germs.

There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.

And yet, a world without [discovery] sounds bleak and lacking in utility.

Well yeah. That's because discovery tends to increase happiness. But if it didn't, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which one is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn't benefit anyone, so what's the point? Discovery is only worthwhile if it increases happiness in some way.

I'm not saying that it's impossible to come up with an example of something that's not reducible to happiness, but I don't think discovery is such a thing.

[1] Unless it is capable of greater suffering, but that's not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.

Replies from: Jayson_Virissimo, MugaSofer
comment by Jayson_Virissimo · 2012-11-28T07:13:10.614Z · LW(p) · GW(p)

There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.

This sounds like a bad rule and could potentially create a sensitivity arms race. Assuming that people who practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started? This would be counter-intuitive, to say the least.

Replies from: MTGandP
comment by MTGandP · 2012-11-29T00:05:27.658Z · LW(p) · GW(p)

Assuming that people that practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started?

It means that performing some typically-harmful action on a Stoic is less harmful than performing it on a normal person. For example, suppose you have a Stoic who no longer feels negative reactions to insults. If you insult her, she doesn't mind at all. It would be morally better to insult this person than to insult a typical person.

Let me put it this way: all suffering of equal degree is equally important, and the importance of suffering is proportional to its degree.

A lot of conclusions follow from this principle, including:

  • animal suffering is important;
  • if you have to do something to one of two beings and it will cause greater suffering to being A, then, all else being equal, you should do it to being B.
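
A minimal sketch of how that proportionality principle cashes out, with invented numbers (an illustration only, not anyone's actual moral weighting):

```python
# Sketch of "the importance of suffering is proportional to its degree":
# the badness of an action is the sum of the degrees of suffering it causes,
# with no extra weighting for species or intelligence. All numbers invented.

def badness(degrees_of_suffering):
    """Total moral badness = sum of the degrees of suffering caused."""
    return sum(degrees_of_suffering)

# First bullet: animal suffering counts on the same scale as human suffering.
dog_kick = badness([7.0])      # degree of the dog's suffering
human_kick = badness([7.0])    # the same degree of suffering in a human
assert dog_kick == human_kick  # equal degree, equal importance

# Second bullet: given a forced choice, impose the action on the being it harms less.
harm_if_done_to_A = 5.0
harm_if_done_to_B = 2.0
print("do it to B" if harm_if_done_to_B < harm_if_done_to_A else "do it to A")
```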
comment by MugaSofer · 2012-11-29T21:53:43.486Z · LW(p) · GW(p)

Well, why is pain important?

It's really abstract and difficult to explain, so I probably won't do a very good job. Peter Singer explains it pretty well in "All Animals Are Equal." Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.

No, my point was that your valuing pain is itself a moral intuition. Picture a pebblesorter explaining that this pile is correct, while your pile is, obviously, incorrect.

There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.

So, say, an emotionless AI? A human with damaged pain receptors? An alien with entirely different neurochemistry analogs?

Well yeah. That's because discovery tends to increase happiness. But if it didn't, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn't benefit anyone, so what's the point? Discovery is only worthwhile if it increases happiness in some way.

No. I'm saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it's better than sitting in a whitewashed room wireheading nonetheless.

[1] Unless it is capable of greater suffering, but that's not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.

Can you taboo "suffering" for me?

Replies from: MTGandP
comment by MTGandP · 2012-11-30T02:54:45.186Z · LW(p) · GW(p)

I've avoided using the word "suffering" or its synonyms in this comment, except in one instance where I believe it is appropriate.

No, my point was that your valuing pain is itself a moral intuition.

Yes, it's an intuition. I can't prove that suffering is important.

So, say, an emotionless AI?

If the AI does not consciously prefer any state to any other state, then it has no moral worth.

A human with damaged pain receptors?

Such a human could still experience emotions, so ey would still have moral worth.

An alien with entirely different neurochemistry analogs?

Difficult to say. If it can experience states about which it has an interest in promoting or avoiding, then it has moral worth.

No. I'm saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it's better than sitting in a whitewashed room wireheading nonetheless.

Okay. I don't really get why, but I can't dispute that you hold that value. This is why preference utilitarianism can be nice.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-30T09:21:30.355Z · LW(p) · GW(p)

... oh.

You were defining pain/suffering/whatever as generic disutility? That's much more reasonable.

... so, is a hive of bees one mind or many, or sort of both at once? Does evolution get a vote, here? If you aren't discounting optimizers that lack consciousness you're gonna get some damn strange results with this.

Replies from: MTGandP
comment by MTGandP · 2012-11-30T21:51:41.715Z · LW(p) · GW(p)

so, is a hive of bees one mind or many, or sort of both at once?

Many. The unit of moral significance is the conscious mind. A group of bees is not conscious; individual bees are conscious.

(Edit: It's possible that bees are not conscious. What I meant was that if bees are conscious then they are conscious as individuals, not as a group.)

If you aren't discounting optimizers that lack consciousness you're gonna get some damn strange results with this.

A non-conscious being cannot experience disutility, therefore it has no moral relevance.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2012-12-03T09:09:46.471Z · LW(p) · GW(p)

A non-conscious being cannot experience disutility

Er... Deep Blue?

Replies from: MTGandP
comment by MTGandP · 2012-12-04T00:44:54.322Z · LW(p) · GW(p)

Deep Blue cannot experience disutility (i.e. negative states). Deep Blue can have a utility function to evaluate the state of the chess board, but that's not the same as consciously experiencing positive or negative utility.
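
A minimal sketch of the kind of board-evaluation "utility function" being pointed at here - a toy material count with an invented board encoding, not Deep Blue's actual evaluation:

```python
# A chess engine's "utility function" scores board states without experiencing
# anything. This toy version just counts material; the board encoding
# (a string of piece letters) is invented for illustration.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # kings not scored

def evaluate(board):
    """Material balance from White's perspective.
    Uppercase letters are White pieces, lowercase letters are Black pieces."""
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.upper())
        if value is not None:
            score += value if piece.isupper() else -value
    return score

# White has queen + rook, Black has only a queen: 9 + 5 - 9 = 5.
# The higher score is just a number the program maximizes; nothing is "felt".
print(evaluate("QRq"))
```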

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-04T11:10:21.884Z · LW(p) · GW(p)

Okay, I see what you mean by “experience”... but that makes “A non-conscious being cannot experience disutility” a tautology, so following it with “therefore” and a non-tautological claim raises all kinds of warning lights in my brain.

comment by MugaSofer · 2012-12-01T05:44:40.013Z · LW(p) · GW(p)

Unless you can taboo "conscious" in such a way that that made sense, I'm gonna substitute "intelligent" for "conscious" there (which is clearly what I meant, in context.)

The point with bees is that, as a "hive mind", they act as an optimizer without any individual intention.

Replies from: MTGandP
comment by MTGandP · 2012-12-01T21:55:07.055Z · LW(p) · GW(p)

I'm gonna substitute "intelligent" for "conscious" there

I don't see that you can substitute "intelligent" for "conscious". Perhaps they are correlated, but they're certainly not the same. I'm definitely more intelligent than my dog, but am I more conscious? Probably not. My dog seems to experience the world just as vividly as I do. (Knowing this for certain requires solving the hard problem of consciousness, but that's where the evidence seems to point.)

(which is clearly what I meant, in context.)

It's clear to you because you wrote it, but it wasn't clear to me.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-03T04:21:31.194Z · LW(p) · GW(p)

Well yes, that's the illusion of transparency for you. I assure you, I was using conscious as a synonym for intelligent. Were you interpreting it as "able to experience qualia"? Because that is both a tad tautological and noticeably different from the argument I've been making here.

Whatever. We're getting offtopic.

If you value optimizers' goals regardless of intelligence - whether valuing a bug's desires as much as a human's, a hivemind's goals less than its individual members', or an evolution's goals anywhere - you get results that do not appear to correlate with anything you could call human morality. If I have misinterpreted your beliefs, I would like to know how. If I have interpreted them correctly, I would like to see how you reconcile this with saving orphans by tipping over the ant farm.

Replies from: MTGandP
comment by MTGandP · 2012-12-03T05:16:30.526Z · LW(p) · GW(p)

If ants experience qualia at all, which is highly uncertain, they probably don't experience them to the same extent that humans do. Therefore, their desires are not as important. On the issue of the moral relevance of insects, the general consensus among utilitarians seems to be that we have no idea how vividly insects can experience the world, if at all, so we are in no position to rate their moral worth; and we should invest more into research on insect qualia.

I think it's pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won't get into the question of killing because it's massively more complicated.)

I would like to see how you reconcile this with saving orphans by tipping over the ant farm.

I cannot say whether this is right or wrong because we don't know enough about ant qualia, but I would guess that a single human's experience is worth the experience of at least hundreds of ants, possibly a lot more.

you get results that do not appear to correlate with anything you could call human morality.

Like what, besides the orphans-ants thing? I don't know if you've misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of "human morality" is horrendously incorrect.

Replies from: JoshuaZ, MugaSofer
comment by JoshuaZ · 2012-12-03T05:20:28.794Z · LW(p) · GW(p)

I think it's pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human.

This isn't obvious to me. And it is especially not obvious given that dogs are a species where one of the primary selection effects has been human sympathy.

Replies from: MTGandP
comment by MTGandP · 2012-12-03T05:27:57.678Z · LW(p) · GW(p)

You make a good point about human sympathy. Still, if you look at biological and neurological evidence, it appears that dogs are built in pretty much the same ways we are. They have the same senses—in fact, their senses are stronger in some cases. They have the same evolutionary reasons to react to pain. The parts of their brains responsible for pain look the same as ours. The biggest difference is probably that we have cerebral cortexes and they don't, but that part of the brain isn't especially important in responding to physical pain. Other forms of pain, yes; and I would agree that humans can feel some negative states more strongly than dogs can. But it doesn't look like physical pain is one of those states.

comment by MugaSofer · 2012-12-03T06:12:03.349Z · LW(p) · GW(p)

If ants experience qualia at all, which is highly uncertain, they probably don't experience them to the same extent that humans do. Therefore, their desires are not as important.

GOSH REALLY.

I think it's pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won't get into the question of killing because it's massively more complicated.)

Once again, you fail to provide the slightest justification for valuing dogs as much as humans; if this was "obvious" we wouldn't be arguing, would we? Dogs are intelligent enough to be worth a non-negligible amount, but if we value all pain equally you should feel the same way about, say, mice, or ... ants.

I would like to see how you reconcile this with saving orphans by tipping over the ant farm.

I cannot say whether this is right or wrong because we don't know enough about ant qualia, but I would guess that a single human's experience is worth the experience of at least hundreds of ants, possibly a lot more.

Huh? You value individual bees, yet not ants?

Like what, besides the orphans-ants thing? I don't know if you've misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of "human morality" is horrendously incorrect.

How, exactly, can human morality be "incorrect"? What are you comparing it to?

Replies from: MTGandP
comment by MTGandP · 2012-12-03T06:18:39.331Z · LW(p) · GW(p)

you fail to provide the slightest justification for valuing dogs as much as humans

See my reply here.

if we value all pain equally you should feel the same way about, say, mice, or ... ants.

Not if mice or ants don't feel as much pain as humans do. Equal pain is equally valuable, no matter the species. But unequal pain is not equally valuable.

Huh? You value individual bees, yet not ants?

I worded my comment poorly. I didn't mean to imply that bees are necessarily conscious. I've edited my comment to reflect this.

How, exactly, can human morality be "incorrect"? What are you comparing it to?

Well I'd have to get into metaethics to answer this, which I'm not very good at. I don't think such a conversation would be fruitful.

GOSH REALLY.

Yes, really. You seemed to think that I believe ants were worth as much as humans, so I explained why I don't believe that.

Replies from: MugaSofer, Jayson_Virissimo
comment by MugaSofer · 2012-12-04T13:39:45.370Z · LW(p) · GW(p)

Firstly, I thought you said we were discussing disutility, not pain?

Secondly, could we taboo consciousness? It seems to mean all things to all people in discussions like this.

Thirdly, you claimed human morality was incorrect; I was under the impression that we were analyzing human morality. If you are working to a standard different from humanity's (which I doubt), then perhaps a change in terminology is in order? If you are, in fact, a human, and as such the "morality" under discussion here is that of humans, then your statement makes no sense.

Assuming the second possibility, you're right; there is no need to get into metaethics as long as we focus on actual (human) ethics.

comment by Jayson_Virissimo · 2012-12-03T06:47:09.520Z · LW(p) · GW(p)

Not if mice or ants don't feel as much pain as humans do. Equal pain is equally valuable, no matter the species. But unequal pain is not equally valuable.

What conceivable test would verify if one organism feels more pain than another organism?

Replies from: MTGandP, juliawise, aelephant
comment by MTGandP · 2012-12-04T00:45:55.011Z · LW(p) · GW(p)

Good question. I don't know of any such test, although I'm reluctant to say that it doesn't exist. That's why it's important to do research in this area.

comment by juliawise · 2012-12-04T23:57:20.505Z · LW(p) · GW(p)

Some kind of brain scans? Probably not very useful on insects, etc, but would probably work for, say, chickens vs. chimpanzees.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-12-05T09:59:47.565Z · LW(p) · GW(p)

Some kind of brain scans? Probably not very useful on insects, etc, but would probably work for, say, chickens vs. chimpanzees.

Okay, say you had some kind of nociceptor analysis machine (or, for that matter, whatever you think "pain" will eventually reduce to). Would it count the number of discrete nociceptors or would it measure nociceptor mass? What if we encountered extra-terrestrial life that didn't have any (of whatever it is that we have reduced "pain" to)? Would they then count for nothing in your moral calculus?

To me, this whole thing feels like we are trying to multiply apples by oranges and divide by zebras. Also, it seems problematic from an institutional design perspective, due to poor incentive structure. It would reward those persons that self-modify towards being more utility-monster-like on the margin.

Replies from: Nornagest
comment by Nornagest · 2012-12-05T10:38:22.818Z · LW(p) · GW(p)

Well, there's neurologically sophisticated Earthly life with neural organization very different from mammals', come to that.

I'm not neurologist enough to give an informed account of how an octopus's brain differs from a rhesus monkey's, but I'm almost sure its version of nociception would look quite different. Though they've got an opioid receptor system, so maybe this is more basal than I thought.

comment by aelephant · 2012-12-03T07:57:26.131Z · LW(p) · GW(p)

I remember reading that crustaceans don't have the part of the brain that processes pain. I don't feel bad about throwing live crabs into boiling water.

Replies from: MugaSofer, Jayson_Virissimo
comment by MugaSofer · 2012-12-04T13:31:52.654Z · LW(p) · GW(p)

Really? I remember reading the opposite. Many times. If you're regularly boiling them alive, have you considered researching this?

Replies from: aelephant
comment by aelephant · 2012-12-04T23:47:31.657Z · LW(p) · GW(p)

I'm not regularly boiling them alive, but I researched it a little anyway. Here's a study often used to show that crustaceans DO feel pain: http://forms.mbl.edu/research/services/iacuc/pdf/pain_hermit_crabs.pdf

comment by Jayson_Virissimo · 2012-12-03T08:31:15.078Z · LW(p) · GW(p)

If true, that is interesting. On the other hand, whether or not something feels pain seems like a much easier problem to solve than how much pain something feels relative to something else.

comment by MugaSofer · 2012-11-27T00:32:22.704Z · LW(p) · GW(p)

I would value the suffering of my child as more important than the suffering of your child. And vice versa.

To be clear, you are arguing that this is a bias to be overcome, yes?

I've given it some thought but I can't imagine a way to get to an order-of-magnitude estimate that would feel reasonable to me.

Scope insensitivity?

Replies from: PeterisP
comment by PeterisP · 2012-11-27T12:11:05.469Z · LW(p) · GW(p)

No, I'm not arguing that this is a bias to overcome - if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.

I'm arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating "value of entity_X's suffering to me". They are clearly not equal, they differ by order(s) of magnitude.

"general value of entity_X's suffering" is a different, not identical measurement - but when making my decisions (such as the original discussion on what charities would be the most rational [for me] to support) I don't want to use the general values, but the values as they apply to me.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-27T17:25:41.227Z · LW(p) · GW(p)

... oh.

That seems ... kind of evil, to be honest.

Replies from: PeterisP
comment by PeterisP · 2012-11-27T20:51:10.300Z · LW(p) · GW(p)

OK, then I feel confused.

Regarding " if I have to choose wether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater" - I was under impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so and is generally considered sociopathic/evil ?

Replies from: MugaSofer
comment by MugaSofer · 2012-11-27T21:00:48.709Z · LW(p) · GW(p)

Consider: if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Replies from: Kawoomba, PeterisP, PeterisP
comment by Kawoomba · 2012-12-03T07:11:28.509Z · LW(p) · GW(p)

if you attach higher utility to your child's life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.

Not true as a general statement, not if you're maximizing your expected utility gain.

Also, "if"? One often attaches utility based on ... attachment. Do you think there's more than, say, 0.01 parents per 100 that would not value their own child over some other child? Are most all parents "evil" in that regard?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-04T13:23:57.372Z · LW(p) · GW(p)

Are most all parents "evil" in that regard?

I believe the technical term is "biased".

Replies from: Kawoomba
comment by Kawoomba · 2012-12-04T14:22:51.090Z · LW(p) · GW(p)

In the same way that I'm "biased" towards yogurt-flavored ice-cream. You can call any preference you have a "bias", but since we're here mostly dealing with cognitive biases (a different beast altogether), such an overloading of a preference-expression with a negatively connotated failure-mode should really be avoided.

What's your basis for objecting to utility functions that are "biased" (you introduced the term "evil") in the sense of favoring your own children over random other children?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-04T14:34:49.407Z · LW(p) · GW(p)

No, I'm claiming that parents don't actually have a special case in their utility function, they're just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.

Replies from: Vladimir_Nesov, Kawoomba
comment by Vladimir_Nesov · 2012-12-04T16:02:22.297Z · LW(p) · GW(p)

It seems like a possibility, but I don't think it's possible to clearly know that it's the case, and so it's an error to "claim" that it's the case ("claiming" sounds like an assertion of high degree of certainty). (You do say that it's a "reasonable hypothesis", but then what do you mean by "claiming"?)

Replies from: MugaSofer
comment by MugaSofer · 2012-12-10T17:35:52.387Z · LW(p) · GW(p)

Up until this point, I had never seen any evidence to the contrary. I'm still kinda puzzled at the amount of disagreement I'm getting ...

comment by Kawoomba · 2012-12-04T14:52:51.033Z · LW(p) · GW(p)

Clear preferences that are not part of their utility function? And which supposedly are evil, or "biased", with the negative connotations of "bias" included?

What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?

Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents' utility function?

Sorry about the incredulity, but that's the strangest apparently honestly held opinion I've read on LW in a long time. I'm probably misunderstanding your position somehow.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-04T15:04:40.851Z · LW(p) · GW(p)

Are you serious that valuing your own kids over other kids is a bias to be overcome

In a triage situation? Yes.

Replies from: Kawoomba
comment by Kawoomba · 2012-12-04T15:53:05.810Z · LW(p) · GW(p)

In a triage situation? Yes.

Even if you're restricting your assertion to special cases, let's go with that.

Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?

What makes that an "evil" bias, as opposed to an ubiquitous aspect of most parents' utility functions?

Replies from: BerryPick6, MugaSofer
comment by BerryPick6 · 2012-12-04T15:55:59.128Z · LW(p) · GW(p)

Why should I overcome my "bias" and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it's just a "shut up and multiply" kind of thing...

Replies from: Vladimir_Nesov, thomblake, Kawoomba, Kindly
comment by Vladimir_Nesov · 2012-12-04T16:06:58.905Z · LW(p) · GW(p)

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

This assumption is excluded by Kawoomba's "but which I do not care about as much", so isn't directly relevant at this point (unless you are making a distinction between "caring" and "utility", which should be more explicit).

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-04T16:12:21.875Z · LW(p) · GW(p)

I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-12-04T16:24:04.398Z · LW(p) · GW(p)

I guess I'm just not sure why Kawoomba's own utility gets special treatment over the other child's parents' utility function.

Taboo "utility function", and "Kawoomba cares about Kawoomba's utility function" would resolve into the tautologous "Kawoomba is motivated by whatever it is that motivates Kawoomba". The subtler problem is that it's not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn't (including those made by Kawoomba) may be unfounded. To the extent "utility function" refers to idealized extrapolated volition, rather than present desires, people won't already have good understanding of even their own "utility function".

Replies from: Kawoomba
comment by Kawoomba · 2012-12-04T17:59:02.481Z · LW(p) · GW(p)

The subtler problem is that it's not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn't (including those made by Kawoomba) may be unfounded.

There is no idealized extrapolated volition that is based on my current volition that would prefer someone else's child over one of my own (CEV_me, not CEV_mankind). There are certainly inconsistencies in my non-idealized utility function, but that does not mean that every statement I make about my own utility function must be suspect, merely that such suspect/contradictory statements exist.

If you prefer vanilla over strawberry ice cream, there may be cases where that preference does not transfer to your extrapolated volition due to some other contradictory preferences. However, for comparisons with a significant delta involved, the initial result that determines your decision should be preserved. (It may however be different when extrapolating to a CEV for all humankind.)

Also, you used my name with a frequency of 7/84 in your last comment <3.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-12-04T18:13:30.310Z · LW(p) · GW(p)

that does not mean that every statement I make about my own utility function must be suspect

In general, unless something is well-understood, there is good reason to suspect an error. Human values is not something that's understood particularly well.

Replies from: Kawoomba
comment by Kawoomba · 2012-12-04T18:20:34.055Z · LW(p) · GW(p)

If you value e.g. your family extremely higher than a grain of salt, would you say that there is any chance of that not being reflected in your CEV?

Any "CEV" that doesn't conserve e.g. that particular relationship would be misnamed.

comment by thomblake · 2012-12-04T16:11:39.048Z · LW(p) · GW(p)

Assuming that saving my child would give me X utility and saving the other child would give his parents X utility

If you've found a way to aggregate utility across persons, I'd like to hear it.

Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor's child, that is reflected in her utility function. What other standard are you trying to invoke?

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-04T16:13:32.527Z · LW(p) · GW(p)

Ah, this clears up things a bit for me, thank you.

comment by Kawoomba · 2012-12-04T16:06:33.669Z · LW(p) · GW(p)

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

Replies from: BerryPick6
comment by BerryPick6 · 2012-12-04T16:09:31.398Z · LW(p) · GW(p)

Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?

What reason do you have for aiming to satisfy your own utility function, or that of your family?

Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?

I'm afraid this is a little too much lingo for me. Sorry.

Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?

You'd have to taboo "evil" before I can answer this question.

Replies from: Kawoomba
comment by Kawoomba · 2012-12-04T16:51:05.461Z · LW(p) · GW(p)

What reason do you have for aiming to satisfy your own utility function

Um, it's my utility function, that which I aim to maximize and that which already incorporates my e.g. altruistic desires. Postulating "other preferences" that can overrule my utility function would be a contradiction in terms.

The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a "bias" and as part of your utility function, and who introduced the whole "evil" thing.

comment by Kindly · 2012-12-04T18:32:07.040Z · LW(p) · GW(p)

The nearest I can come to making sense of your claim is that it's some sort of imaginary Prisoner's Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.

However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.

I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children's lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it's still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you're arguably not optimizing for humans anymore.

comment by MugaSofer · 2012-12-10T18:13:47.564Z · LW(p) · GW(p)

My assertion is that all humans share utility - which is the standard assumption in ethics, and seems obviously true - and that parents are biased towards their children (for simple evopsych reasons), leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.

Replies from: Kawoomba
comment by Kawoomba · 2012-12-12T09:04:44.830Z · LW(p) · GW(p)

Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.

It makes no sense to say "my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise", it's a contradictio in terminis.

We should be very careful with ethical assumptions that seem "obviously true". Especially when they are not ("true" here as in "common"; it wouldn't make sense otherwise) - parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are "wrong". Your proposed "obviously true" ethical assumption is also based on "evopsych". You're trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe; for the vast majority of e.g. parents? Not so much.

There is no epistemological truth in terminal values.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T09:35:54.341Z · LW(p) · GW(p)

parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are "wrong".

No.

Humans regularly act against their own ethics, whether due to misinformation or bias, akrasia, or cached thoughts about morality.

... are you seriously suggesting that, say, racists, are right about what they want? How then do they change when confronted with evidence that other races are, well, people? Perhaps I have misunderstood your point.

Replies from: Nornagest, Kawoomba
comment by Nornagest · 2012-12-12T09:56:22.008Z · LW(p) · GW(p)

It seems obviously true that the moralities people implement are often internally inconsistent. It also seems obviously true that people can talk about imperatives they feel derive from one horn or the other of an inconsistent moral system, without either lying or being wrong as such.

The inconsistency might resolve itself with new information, but it's going to inform any statements we make about the moral system it exists in until that information arrives.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T10:43:10.843Z · LW(p) · GW(p)

I would advise you to read "cached thoughts" and then answer my question:

... are you seriously suggesting that, say, racists, are right about what they want? How then do they change when confronted with evidence that other races are, well, people?

comment by Kawoomba · 2012-12-12T09:43:53.986Z · LW(p) · GW(p)

... are you seriously suggesting that, say, racists, are right about what they want?

I am saying that the statement "a racist wants that which he/she wants" is tautologically true. There is no objective "right" or "wrong" when comparing utility functions, there is just "this utility function values X and Y, this other utility function values X and Z, they are compatible in respect to X, they are incompatible in respect to Y".

Certainly what we value changes all the time. But that's just change, it's not becoming "less wrong" or "wronger". Instead, it may be "more (/less) compatible with commonly shared elements of western utility functions" (which still fluctuate across time and culture, and species).

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T09:59:27.337Z · LW(p) · GW(p)

Except that humans share a utility function, which doesn't change. You can persuade someone that murder is good, but you do it by persuading them that it leads to outcomes they already considered "good" and they were mistaken about the downsides of, well, killing people. Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because wrongness is some essential property of these actions - morality is in our minds - but we can still meaningfully say "this is wrong" just as we can say "this is a chair" or "there are five apples". Eliezer's latest sequence touches on this kind of meaningfulness. Other standard stuff worth reading in this context is "The Psychological Unity of Humankind" and "Coherent Extrapolated Volition"; and, well, the Metaethics Sequence.

Replies from: Nornagest
comment by Nornagest · 2012-12-12T10:26:26.220Z · LW(p) · GW(p)

Except that humans share a utility function, which doesn't change.

Humans trivially don't share a utility function, since they have differing preferences over world-states. I'm even pretty sure that individual people don't have anything that we could call a reliable utility function, since we don't have the cognitive juice to evaluate world-states in their totality and even tractable subsets of the world end up getting evaluated differently based on all sorts of random crap including, but not limited to, presentation order and how recently you've eaten.

CEV attempts to resolve people's conflicting preferences by doing away with several human cognitive limitations, requiring reflective consistency, and applying resolution steps based on projected social interactions (at least, that's how I'm reading "grew up farther together"), but these requirements (especially the latter) are underspecified in its present form. Even if they weren't, CEV in its present form does not, nor does it try to, demonstrate that the entirety of the human moral landscape in fact coheres.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T10:41:22.535Z · LW(p) · GW(p)

Humans trivially don't share a utility function, since they have differing preferences over world-states.

Humans trivially do share a utility function, since they change their beliefs consistently in response to argument. Of course, as with all other knowledge, self-knowledge and moral reasoning are hampered by biases, cached thoughts, and simple stupidity.

CEV, and for that matter The Psychological Unity of Humankind, are relevant without being themselves arguments. Have you, in fact, read the metaethics sequence? I ask for information as to how best to proceed.

Replies from: Nornagest
comment by Nornagest · 2012-12-12T10:54:16.243Z · LW(p) · GW(p)

Humans trivially do share a utility function, since they change their beliefs consistently in response to argument.

...no offense, but I don't think that word means what you think it means.

Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does more or less reduces to the question of whether CEV is complete, which as I've said is currently unknown.) Even if true, this doesn't imply a shared utility function within any useful domain.

Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states; a utility function is one that maps those states to their preference ordering for a given agent; and in between those states and our hypothetical intrinsic associations there's layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I'd be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we'd probably find large chunks that looked quite similar.
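
A minimal illustration of "utility as a preference ordering over states" in the sense just described - the states, numbers, and agents are all invented:

```python
# A utility function maps each possible state to a number; the ranking that
# the numbers induce is the agent's preference ordering. Two hypothetical
# agents below map the same states differently, so their orderings differ,
# even though both prefer "both saved" to "status quo".

states = ["status quo", "own child saved", "other child saved", "both saved"]

agent_a = {"status quo": 0, "own child saved": 10, "other child saved": 1, "both saved": 11}
agent_b = {"status quo": 0, "own child saved": 1, "other child saved": 10, "both saved": 11}

def preference_ordering(utility):
    """Return the states ranked from most to least preferred under `utility`."""
    return sorted(states, key=lambda s: utility[s], reverse=True)

print(preference_ordering(agent_a))  # ranks "own child saved" above "other child saved"
print(preference_ordering(agent_b))  # the reverse ordering in the middle
```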

Have you, in fact, read the metaethics sequence?

Yes.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T11:02:50.655Z · LW(p) · GW(p)

Have you, in fact, read the metaethics sequence?

Yes.

Good to know.

Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does is a question that more or less reduces to the question of whether CEV is complete, which as I've said is currently unknown.) If true, this does not demonstrate a shared utility function within some domain. Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states, and between those states and our hypothetical intrinsic associations there's layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I'd be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we'd probably find large chunks that looked quite similar.

...huh?

The fact that morality is acted upon in different ways (due to your "layers" or simply mistaken beliefs about the world) doesn't change the fact that it is there, underneath, and that this is the standard we work by to declare something "good" or "bad". We aren't perfect at it, but we can make a reasonable attempt. Just like, say, mathematics, or predicting the movement of planets.

Replies from: Nornagest
comment by Nornagest · 2012-12-12T11:15:18.842Z · LW(p) · GW(p)

The fact that morality is acted upon in different ways (due to your "layers" or simply mistaken beliefs about the world) doesn't change the fact that it is there, underneath, and that this is the standard we work by to declare something "good" or "bad".

Now we're getting somewhere.

First, that's not a utility function; see the edited version of my last comment. We have a tendency around here to use "utility function" as if it describes fundamental moral impulses, but I'd imagine that's because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.

That being said, I'm afraid the idea that there's some uniform set of impulses on which all existing moralities are fundamentally based is more an article of faith than a statement of fact given the present state of knowledge. There's clearly enough unity there for some moral concepts to (e.g.) be describable in language, but that's a relatively weak criterion. Pathology gives the idea of strong consistency a lot of trouble, but even if you ignore that, there's simply not enough evidence to declare that it's consistent enough to define as a single function covering all normal people; just off the top of my head, for example, it could easily be that parts of it sum as a polynomial, or something similar, for which the coefficients vary somewhat between people or populations.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T11:33:34.509Z · LW(p) · GW(p)

First, that's not a utility function; see the edited version of my last comment. We have a tendency around here to use "utility function" as if it describes fundamental moral impulses, but I'd imagine that's because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.

Fair enough. What term would you prefer? I'll use "morality" for now.

Pathology gives the idea a lot of trouble, but even if you ignore that, there's simply not enough evidence to declare that it's consistent enough to define as a single function describing the foundational moral sentiments of all normal people.

Quite the opposite: we can see that our morality exists unchanged regardless of beliefs by the fact that there are people who actually do have different moralities. As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong.) If they genuinely didn't value the pain of animals, say, this would fail. No amount of argument will persuade Clippy that killing people is wrong.

Replies from: taelor, Nornagest
comment by taelor · 2012-12-13T09:42:25.611Z · LW(p) · GW(p)

As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong.) If they genuinely didn't value the pain of animals, say, this would fail.

You wouldn't happen to have non-anecdotal evidence that this is actually the case, would you?

Replies from: MugaSofer
comment by MugaSofer · 2012-12-13T11:33:55.506Z · LW(p) · GW(p)

What, like a study of people showed images of slaughterhouses or something? Nope. To be honest, that's kind of a terrible example. Racists work much better.

comment by Nornagest · 2012-12-12T11:48:27.446Z · LW(p) · GW(p)

Fair enough. What term would you prefer?

How about "moral architecture"?

I think I'd agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they're likely to feel pain; humiliate them, and they're likely to feel embarrassment. I doubt that the relative weightings of these traits are likely to remain identical between individuals, but if you factor that out I think we have a human commonality that I could get behind.

I suspect we'd differ in our opinion of acculturation's role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I'd be comfortable calling a human universal.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-12T12:52:42.606Z · LW(p) · GW(p)

Moral architecture sounds good.

I think I'd agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they're likely to feel pain; humiliate them, and they're likely to feel embarrassment.

I note that humans can empathise with pains they do not themselves feel.

I suspect we'd differ in our opinion of acculturation's role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I'd be comfortable calling a human universal.

Well, yeah. It's not the greatest example, I suppose. How about racism? That's usually my go-to for this sort of thing. I kill Jews because Jews are parasites that undermine civilization; you kill Nazis because they murder innocent people.

EDIT: I'm not actually a Nazi, obviously.

comment by PeterisP · 2012-11-27T21:31:38.640Z · LW(p) · GW(p)

Another situation that has some parallels and may be relevant to the discussion.

Helping starving kids is Good - that's well understood. However, my upbringing and current gut feeling says that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. i.e., that goodness of resource redistribution should depend on resource scarcity; and that hurting your in-group is forbidden even with good intentions.

It may be caused by the fact that I'm partially brought up by people that actually experienced starvation and have had their relatives starve to death (WW2 aftermath and all that), but I'd guess that their opinion is more fact-based than mine and that they definitely had put more thought into it than I have, so until/if I analyze it more, I probably should accept that prior.

comment by PeterisP · 2012-11-27T21:18:56.036Z · LW(p) · GW(p)

That is so - though it depends on the actual chances; "much higher chance of survival" is different than "higher chance of survival".

But my point is that:

a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode - I would have higher chances of survival if those close to me prefer me to a random stranger. And "belief-sets that we want our neighbors to have" are correlated with what we define as "good".

b) As far as I understand, homo sapiens do generally actually have such an attitude - evolutionary psychology research and actual observations of cases where mothers/caretakers have had to choose between kids in fires, etc.

c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved would be twins of a third party), if one was entrusted to me or I had casually accepted to watch him, I'd be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% other kids - even if I wouldn't love them, I'd still have that duty.

Replies from: MugaSofer
comment by MugaSofer · 2012-11-29T22:18:23.640Z · LW(p) · GW(p)

My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such it is a bias if, rationally, the other choice is superior.

comment by Viliam_Bur · 2012-11-11T17:40:57.484Z · LW(p) · GW(p)

This probably sounds horrible, but "saving human lives" in some contexts is an applause light. We should be able to think beyond that.

As a textbook example, saving Hitler's life in a specific moment of history of the alternate universe would create more harm than good. Regardless of how much or little money it would cost.

Even if we value all human lives as intrinsically equal, we can still ask what will be the expected consequences of saving this specific human. Is he or she more likely to help other people, or perhaps to harm them? Because that is a multiplier of my intervention, and consequences of consequences of my actions are consequences of my actions, even when I am not aware of them.

Don't just tell me that I saved a hypothetical person from malaria. Tell me whether that person is likely to live a happy life and contribute to happy lives of their neighbors, or whether I have most likely provided another soldier for the next genocide.

Even in areas with frequent wars and human rights violations, curing malaria does more good than harm. (To prevent the status quo bias: Imagine healthy people suffering from the war or genocide. Would sending tons of malaria-infected mosquitoes make the situation better or worse?) But perhaps something else, like education or government change that could reduce war, would be better in the long term, even if in the short term there are fewer "lives per dollar saved".

Of course, as is the usual problem with consequentialism, it is pretty difficult to predict the consequences of our actions.

comment by MTGandP · 2012-11-11T00:25:33.571Z · LW(p) · GW(p)

GWWC in particular does not recommend any animal welfare charities, which makes me especially reluctant to donate to them or even support them at all. It seems much too specifically focused on global poverty. From the GWWC homepage:

Extreme poverty causes much of the world’s worst suffering, but when armed with the right information you can make an enormous difference.

This seems excessively limiting given that good animal welfare charities are orders of magnitude more efficient than even the best human charities; and it becomes especially concerning when we consider the poor meat-eater problem.

Effective Animal Activism is a meta-charity that evaluates animal welfare charities. They do not accept donations and instead recommend that you give directly to their top charities.

Replies from: bryjnar, Pablo_Stafforini
comment by bryjnar · 2012-11-11T00:42:16.789Z · LW(p) · GW(p)

Just to be clear: EAA is an 80k project at the moment, but at some point it may become a fully-fledged sub-organization of CEA, like GWWC and 80k.

The segmentation by target area is deliberate: GWWC in particular is in many ways a much more conservative organization, but that correspondingly broadens its appeal to people who aren't necessarily on board with full-on consequentialism and wouldn't be much concerned about animal rights.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-11-11T05:17:19.791Z · LW(p) · GW(p)

The segmentation by target area is deliberate: GWWC in particular is in many ways a much more conservative organization, but that correspondingly broadens its appeal to people who aren't necessarily on board with full-on consequentialism and wouldn't be much concerned about animal rights.

I agree that hard-core consequentialism and a utilitarian approach to animal welfare are alien to most people. However, I don't think this supports GWWC's emphasis on human suffering. Currently, members are asked to sign a pledge "to donate 10% of their income to the charities that they believe will most effectively help people living in poverty." The pledge could instead require members "to donate 10% of their income to the charities that they believe will do good most effectively." Such a reformulation would allow people that don't think alleviating human suffering is the most effective way of doing good to take the pledge, without discouraging those who are willing to take the pledge in its current formulation.

Replies from: EricHerboso
comment by EricHerboso · 2012-11-12T00:50:15.750Z · LW(p) · GW(p)

It might be slightly deceptive (and thus not worth doing), but what about changing "people" to "persons"? Those who think about animal welfare more liberally would recognize "persons" as referring to both humans and non-humans, while those who are more conservative that GWWC is trying to reach will just automatically assume it means "people".

I would prefer this to your reformulation of "do good" because it explicitly takes other types of "doing good" out of the equation. (Unless possibly there's some reason why being more inclusive of "doing good" is worthwhile to use in such a pledge? It seems at first glance to me that specificity is important in pledges of this kind.)

Replies from: tog, wedrifid
comment by tog · 2012-11-12T17:23:04.462Z · LW(p) · GW(p)

It might be slightly deceptive (and thus not worth doing), but what about changing "people" to "persons"? Those who think about animal welfare more liberally would recognize "persons" as referring to both humans and non-humans, while those who are more conservative that GWWC is trying to reach will just automatically assume it means "people".

That'd be too deceptive - people would rightly feel you'd tricked them if they got the impression all money was going to alleviate human suffering. If GWWC were to go down this route (which I don't think it should - better for CEA to leave that to EAA), then the word 'others' would be more appropriate, though still a little deceptive.

Replies from: EricHerboso
comment by EricHerboso · 2012-11-12T19:36:41.317Z · LW(p) · GW(p)

Remember that the pledge is not to give money to GWWC; it's a pledge to give to effective charities in general. So those who want to focus on just humans will be giving only to human-based charities, while those who give to animal welfare charities will have their money spent on animal welfare.

Although I agree the pledge wording would be perhaps too deceptive, I do not agree that anyone would ever feel tricked, since they still individually choose where to send their money. Conservatives would probably give to the human welfare orgs GWWC recommends, while others would give to the animal welfare orgs EAA recommends.

Replies from: tog
comment by tog · 2012-11-12T21:06:31.013Z · LW(p) · GW(p)

Remember that the pledge is not to give money to GWWC; it's a pledge to give to effective charities in general.

It's not; the whole message of GWWC is about the strong reasons we in the relatively wealthy west have to give significant portions of our income to cost-effective global poverty charities. I completely respect those who think we have even stronger reasons to donate to cost-effective charities focused on causes like animal welfare or x-risk, but GWWC is focused on global poverty (which does earn it more mainstream credibility than, say, EAA or SingInst).

Replies from: EricHerboso
comment by EricHerboso · 2012-11-12T21:43:11.584Z · LW(p) · GW(p)

You're correct; I was confusing the 80k pledge with the GWWC pledge. I retract all previous comments made in this thread on this point. Sorry for being stubborn earlier without rechecking the source.

comment by wedrifid · 2012-11-12T00:57:59.136Z · LW(p) · GW(p)

It might be slightly deceptive (and thus not worth doing), but what about changing "people" to "persons"? Those who think about animal welfare more liberally would recognize "persons" as referring to both humans and non-humans, while those who are more conservative that GWWC is trying to reach will just automatically assume it means "people".

The usage of "people" in the context seems to be referring to actors with the means and inclination to take significant altruistic action through economic leverage. If you can find some horses or dogs who have such capabilities and interests then the change may become useful.

Replies from: EricHerboso
comment by EricHerboso · 2012-11-12T01:33:06.083Z · LW(p) · GW(p)

To clarify I meant changing the pledge from:

"to donate 10% of their income to the charities that they believe will most effectively help people living in poverty"

to:

"to donate 10% of their income to the charities that they believe will most effectively help persons living in poverty".

I don't think the usage in this context is referring to the actors with the means and inclination to take altruistic action; the context instead is on those acted upon. (Of course, this is not a very good way of saying it, especially as there is ample evidence that money given directly to the poor in developing countries might be better than developed countries giving what they incorrectly think the poor need, but this is beside the point.)

When conservative people read "persons in poverty", they will automatically think "humans living in poverty", whereas those more familiar with the use of "person" being inclusive with non-humans might instead interpret "persons living in poverty" much more liberally. (I realize this is nonstandard usage of the term, but my intent here is to allow a liberal interpretation while maintaining specificity.)

Replies from: wedrifid
comment by wedrifid · 2012-11-12T01:55:45.233Z · LW(p) · GW(p)

That being the case I agree with your previous comment. (The proposal is clever but a little on the deceptive side!)

comment by Pablo (Pablo_Stafforini) · 2012-11-11T05:06:27.358Z · LW(p) · GW(p)

GWWC in particular does not recommend any animal welfare charities, which makes me especially reluctant to donate to them or even support them at all. It seems much too specifically focused on global poverty.

I agree. As I made clear to the folks at GWWC, I am reluctant to take the pledge precisely because of their focus on human suffering.

Effective Animal Activism is a meta-charity that evaluates animal welfare charities. They do not accept donations and instead recommend that you give directly to their top charities.

They do take donations. I know this because I have personally given to them recently, and intend to give again over the coming months. (The donation is formally made to the Centre for Effective Altruism, but is earmarked for EAA.)

comment by Giles · 2012-11-11T16:43:28.624Z · LW(p) · GW(p)

I think the poor meat eater problem is a legitimate concern, and it's something that would benefit from research - we may not be able to establish the relative value of human/nonhuman life to everyone's satisfaction, but in principle we can do empirical research to find out the size of the effect that poverty reduction has on factory farming.

To me this would be a point in favour of "meta" in general, but not necessarily GWWC/80K in particular, as they don't seem currently focused on this kind of research.

A good concrete step you could take would be to get in touch with Effective Animal Activism (an 80K spinoff) and see if you can get the poor meat eater problem onto their research agenda. If there's already research in this area (I haven't looked), they may be able to point you towards it.

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T17:07:45.532Z · LW(p) · GW(p)

That's right. If there's a lot of concern, we can write up what we already know, and look into it further - we're very happy to respond to demand. This would naturally go under EAA research.

Replies from: Giles
comment by Giles · 2012-11-11T18:28:53.150Z · LW(p) · GW(p)

There are some related concerns that need to be factored into the multipliers for extending lifespans and reducing poverty, but which don't fall naturally under EAA's research:

  • Impact of extra population/animal population/consumption on environmental and other resources
  • Effect of extending a life or reducing poverty on global economic growth
  • Positive impact of increased economic growth
  • Negative impact of increased economic growth - existential risk and possibly other considerations?
  • How much of the weighting in the Disability-Adjusted Life Year (DALY) calculation comes from valuing quality-of-life factors for their own sake, and how much is a fudge factor associated with the reduced expected income/employability/social involvement that comes with disability or disease? Toby Ord makes roughly this point here

Do you know which organisation's remit these kinds of questions would fall into? Do any of them already receive mainstream attention (and if so, are the mainstream treatments likely to leave something important out of their calculations)?

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-12T01:10:36.202Z · LW(p) · GW(p)

These are all good questions! Interestingly, they are all relevant to the empirical aspect of a research grant proposal I'm writing. Anyway, our research team is shared between 80,000 Hours and GWWC. They would certainly be interested in addressing all these questions (I think it would officially come under GWWC). I know that those at GiveWell are very interested in at least some of the above questions as well; hopefully they'll write on them soon.

comment by juliawise · 2012-11-10T16:08:25.634Z · LW(p) · GW(p)

Will, I remember you saying that new 80K members tend to be interested in x-risk, so that expanding 80K could be a good way to increase x-risk funding. Is that right?

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-10T23:33:34.221Z · LW(p) · GW(p)

That's the hope! See below.

comment by beoShaffer · 2012-11-10T06:41:10.692Z · LW(p) · GW(p)

(One might ask: if the idea of meta-charity is so good, why don’t many more meta-charities exist than currently do?)

Hansonian answer: "Charity is not about helping." (Actually a quote from Gwern, but Hansonian in spirit.)

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-10T23:41:06.528Z · LW(p) · GW(p)

I wouldn't want to commit to an answer right now, but the Hansonian Hypothesis does make the right prediction in this case. If I'm directly helping, it's very clear that I have altruistic motives. But if I'm doing something much more indirect, then my motives become less clear. (E.g. if I go into finance in order to donate, I no longer look so different from people who go into finance in order to make money for themselves). So you could take the absence of meta-charity as evidence in favour of the Hansonian Hypothesis.

comment by Peter Wildeford (peter_hurford) · 2012-11-12T06:30:18.687Z · LW(p) · GW(p)

If I were to donate $1K right now, how would GWWC / 80k / etc. plan to use it? I'd also like to request the calculations.

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-12T16:30:56.530Z · LW(p) · GW(p)

Hi - the answer to this will be posted along with the responses to the other questions on Giles' discussion page. If you e-mail me (will [dot] crouch [at] givingwhatwecan.org) then I can send you the calculations.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2012-11-12T21:57:35.819Z · LW(p) · GW(p)

I look forward to it. Email sent!

comment by wdmacaskill · 2012-11-10T23:56:39.722Z · LW(p) · GW(p)

By the way, thanks for the comments! Seeing as the post is getting positive feedback, I'm going to promote it to the main blog.

comment by anholt · 2012-11-11T07:45:34.840Z · LW(p) · GW(p)

I recently sent in my membership for GWWC and just got confirmation of the larger of my two donations for the year, and this article got me thinking:

The membership form asked me (iirc) what I expected to be donating before learning about GWWC and what I expect after joining GWWC. I filled in the "before" field based on historical behavior (~2% of income). But I think that was a wrong answer on my part -- the main thing that GWWC changed for me was the idea of 10% of income as the focal point. But since I decided to join a year ago, I've encountered the 10% idea elsewhere, in only slightly less persuasive ways, so I probably would have committed to 10% pretty soon anyway. We may be overcounting the impact of GWWC because people whose donation patterns would have gone up over time anyway are not accounting for that (unless you already do in your analysis).

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T17:14:03.146Z · LW(p) · GW(p)

Thanks for this. Asking people "how much would you have pledged?" is of course only a semi-reliable way of ascertaining how much someone actually would have pledged. Some people - like yourself - might neglect the fact that they would have been convinced by the same arguments from other sources; others might be over-optimistic about how well their future self would live up to their youthful ideals. We try to be as conservative as is reasonable with our assumptions in this area: we take the data and then err on the side of caution. We assumed that 54% of the pledged donations would have happened anyway, that 25% of donations would have gone to comparably good charities, and that we have a dropout rate, amortized over time, equivalent to 50% of people dropping out immediately. It's possible that these assumptions still aren't conservative enough.
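
For illustration, here is a minimal sketch (in Python) of how those three conservative discounts could be applied to a pledge total. It assumes the discounts combine by simple multiplication, and the function name and input figure are illustrative placeholders rather than GWWC's actual model:

    # Sketch of applying the conservative discounts described above.
    # The multiplicative combination and the input figure are illustrative
    # assumptions, not GWWC's published methodology.

    def counterfactual_adjusted(pledged_total,
                                would_have_given_anyway=0.54,
                                comparably_good_elsewhere=0.25,
                                effective_dropout=0.50):
        """Pledge total remaining after the three conservative discounts."""
        remaining = pledged_total
        remaining *= 1 - would_have_given_anyway    # genuinely new donations
        remaining *= 1 - comparably_good_elsewhere  # not just displaced from comparable charities
        remaining *= 1 - effective_dropout          # survives the amortized dropout rate
        return remaining

    print(counterfactual_adjusted(1_000_000))  # ~172,500 on a hypothetical $1m of pledges

On these assumptions only about 17% of the face value of pledges gets counted, which is the sense in which the estimate errs on the side of caution.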

Replies from: Strange7, anholt
comment by Strange7 · 2012-11-11T19:46:20.081Z · LW(p) · GW(p)

Perhaps it would also be useful to work backwards? That is, figure out exactly how conservative the assumptions need to be to put the value of a donation below the break-even point.
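
A hypothetical sketch of that backwards calculation, treating all the extra conservatism as one additional discount on an assumed multiplier (both the single-discount framing and the multiplier values are placeholders, not GWWC's figures):

    # Working backwards: how large could one extra uniform discount be before a
    # donated dollar moves less than a dollar? The single-discount framing and
    # the multiplier values below are illustrative assumptions only.

    def breakeven_discount(estimated_multiplier):
        """Largest extra fractional discount before the multiplier falls to 1."""
        if estimated_multiplier <= 1:
            return 0.0  # already at or below break-even
        return 1 - 1 / estimated_multiplier

    for m in (2, 10, 60):  # placeholder multiplier estimates
        print(f"multiplier {m}: a further {breakeven_discount(m):.0%} discount reaches break-even")

The useful output here is how much slack the current estimate has before it stops being worthwhile, rather than any particular multiplier figure.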

comment by anholt · 2012-11-13T03:04:39.207Z · LW(p) · GW(p)

Excellent. That sounds pretty reasonable, and that's pretty impressive leveraging given those assumptions.

comment by Giles · 2012-11-11T16:20:40.617Z · LW(p) · GW(p)

I've emailed Will a bunch of questions about 80K/GWWC and their need for funding - I'll post the answers in the Discussion section. (I have his permission to do this, and he seemed pretty enthusiastic about making the information public)

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T17:21:39.747Z · LW(p) · GW(p)

Feel free to post the questions just now, Giles, in case there are others that people want to add.

Replies from: Giles
comment by Giles · 2012-11-11T18:55:21.589Z · LW(p) · GW(p)

Done

comment by Giles · 2012-11-10T04:33:05.404Z · LW(p) · GW(p)

(One might ask: if the idea of meta-charity is so good, why don’t many more meta-charities exist than currently do?) So you might need to see a lot more hard data (perhaps verified by independent sources) before being convinced.

This is a really interesting issue, and it applies to any exceptional giving candidate, not just to meta-charities. In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they'd already have funded whatever you're planning on funding to the point where the returns diminish to the same level as everything else.

This relates to the issue of collecting lots of hard data because rationality is partly about the ability to make the right decision given a relatively small amount of data.

My tentative conclusion is that if you have no good reason to believe you're more rational than the big money then the best thing is to invest your resources in improving your own rationality.

Replies from: CarlShulman, wdmacaskill
comment by CarlShulman · 2012-11-10T21:05:49.839Z · LW(p) · GW(p)

because rationality is partly about the ability to make the right decision given a relatively small amount of data.

And about sensibly collecting obtainable data that could make a big difference to a decision. Making correct decisions with less data is harder, and so more taxing of epistemic rationality, but that very difficulty means it's often instrumentally rational to avoid it by gathering the data.

Replies from: Giles, lukeprog
comment by Giles · 2012-11-11T17:00:42.608Z · LW(p) · GW(p)

Yep, totally agree - see this comment and this post.

I'd treat the graph of GiveWell's money moved as evidence in favour of meta (and in particular CEA) being promising, under three assumptions:

  • GW's top charities really are significantly more effective than what people would otherwise be giving to (otherwise that graph would just show the amount of money uselessly moved from one place to another)
  • CEA is doing something orthogonal to what GW are doing (otherwise they might just be needlessly competing with each other)
  • CEA is part of the same "effective altruism" growth sector that GW is part of.

In a way you could regard any charity fundraising as "meta" in some sense, but the market there is already saturated in a way that I don't think "effective giving" is. So I wouldn't expect people to be getting such huge returns from fundraising (even if they're trying a somewhat novel approach), but I wouldn't count this as strong evidence against meta.

Definitely curious about what other kinds of evidence I should be on the lookout for, or for reasons why I shouldn't take GW's big takeoff so seriously.

Replies from: CarlShulman
comment by CarlShulman · 2012-11-11T20:13:31.105Z · LW(p) · GW(p)

I'd treat the graph of GiveWell's money moved as evidence in favour of meta (and in particular CEA) being promising, under three assumptions:

Yes, that and the stats for Giving What We Can/CEA look pretty good.

CEA is doing something orthogonal to what GW are doing (otherwise they might just be needlessly competing with each other)

I think competition tends to be good! It keeps people on their toes, and provides a check on problems. Consider your other point:

GW's top charities really are significantly more effective than what people would otherwise be giving to (otherwise that graph would just show the amount of money uselessly moved from one place to another)

With competitors you could check the rate of concordance, examine the cases where they disagree, or look to see which organization identifies problems with the data first - that sort of thing.

comment by lukeprog · 2012-11-11T11:34:25.785Z · LW(p) · GW(p)

Cannot upvote this enough. Neglected Virtue of Scholarship and all that.

comment by wdmacaskill · 2012-11-10T23:49:58.771Z · LW(p) · GW(p)

In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they'd already have funded whatever you're planning on funding to the point where the returns diminish to the same level as everything else.

That's if you think that the big funders are rational and have similar goals to you. I think assuming they are rational is pretty close to the truth (though I'm not sure: charity doesn't have the same feedback mechanisms as business, because if you perform badly you don't get punished in the same way). beoShaffer suggests that they just have different goals - they are aiming to make themselves look good, rather than to do good. I think that could explain a lot of cases, but not all - e.g. it just doesn't seem plausible to me for the Gates Foundation.

So I ask myself: why doesn't Gates spend much more money on increasing revenue to good causes, through advertising etc? One answer is that he does spend such money: the Giving Pledge must be the most successful meta-charity ever. Another is that charities are restricted in how they can act by cultural norms. E.g. if they spent loads of money on advertising, then their reputation would take a big enough hit to outweigh the benefits through increased revenue.

Replies from: beoShaffer, Strange7
comment by beoShaffer · 2012-11-11T00:32:26.457Z · LW(p) · GW(p)

beoShaffer suggests that they just have different goals - they are aiming to make themselves look good, rather than do good.

Agree with the part before the dash, have a subtle but important correction to the second part. While the explicit desire to look good certainly can play a role, I think it is as or more common for giving to have a different proximate cause, but to still approximate efficient signaling (rather than efficient helping) because the underlying intuitions evolved for signaling purposes.

comment by Strange7 · 2012-11-11T19:40:58.486Z · LW(p) · GW(p)

The best way to look good to, say, exceptionally smart people and distant-future historians, is to act in almost exactly the way a genuinely good person would act.

comment by wdmacaskill · 2012-11-20T22:42:42.902Z · LW(p) · GW(p)

My response was too long to be a comment so I've posted it here. Thanks all!

comment by Jade · 2012-11-18T11:04:40.633Z · LW(p) · GW(p)

Did you know about Humanity United or other orgs for reducing human trafficking? http://www.forbes.com/sites/clareoconnor/2012/11/08/inside-ebay-billionaire-pierre-omidyars-battle-to-end-human-trafficking/

comment by someonewrongonthenet · 2012-11-12T02:08:30.981Z · LW(p) · GW(p)

I'd be shocked if this were downvoted, as LessWrong's affiliation with your charities is probably the best part of this website from a utilitarian standpoint.

So, I see that you use various sources to determine the optimal charity. Via the GWWC site I found links to GiveWell's reviews, and I notice that they post the results of their analysis next to each charity. Is your meta-analysis posted somewhere on your site as well?

If not, it should be, and more prominently featured! Your target audience are the type of people who would seek out a meta-charity; they would need to see those analyses. It's important that a given visitor can, with relatively little effort, be reasonably assured that the claims of the meta-charity are accurate.

As a user of the web-page, I'd like an accessible, concise summary of how you know that your top recommended charities do in fact have the best QALYs/dollar ratio, as well as a resource for more thorough investigation. (And apologies if this information is on the site and I just didn't find it - but if I didn't find it then it's likely others are having the same issue!)

comment by TrickBlack · 2012-11-20T09:28:11.637Z · LW(p) · GW(p)

I'm interested to hear what you think is more important in terms of making a difference - the money or the job. Some jobs (teacher, social worker) which can have quite an impact can also have low salaries - teaching in particular is under political attack in the United States. Such jobs don't allow for as much donation to charity. On the other hand, there are jobs with high salaries (say, in the business and corporate world) which make a low or potentially negative impact, but have a larger salary which they could donate to charity.

There are of course jobs which fall under both categories - the medical profession in particular can be quite well-paying while making a very positive impact. Unfortunately, not everyone who wants to make a difference can be (or wants to be) a doctor (I believe that enjoying your job is very important for various reasons, but that's a matter for another day).

So what's better - more teachers dedicated to helping their students towards the future, or more Warren Buffetts? If you had to ask each of a million people to donate to only one of your charities, which would you advocate for?

Replies from: None, Larks
comment by [deleted] · 2012-11-20T17:12:30.064Z · LW(p) · GW(p)

A Warren Buffett donating even a small fraction of his wealth to efficient charity has a positive impact several orders of magnitude above a teacher's. It would be an awesome win if we could take thousands of teachers and turn them into Buffetts. The reality, however, is that most teachers aren't capable of this. To build a good case, I suggest you sit down and do the math on the measured life gains students get from good teachers vs. the gains from a mediocre white-collar professional giving the extra money they earn beyond a teacher's salary to efficient charity.
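
The shape of that calculation might look something like the sketch below. Every number and function name is a made-up placeholder rather than a measured figure, so the output illustrates the structure of the comparison, not its conclusion:

    # Teacher vs. earning-to-give comparison; all inputs are hypothetical.

    def teacher_impact(students_per_year, years, qaly_gain_per_student):
        # Extra quality-adjusted life years attributed to a good teacher's career.
        return students_per_year * years * qaly_gain_per_student

    def donor_impact(salary_premium, years, donated_fraction, dollars_per_qaly):
        # QALYs bought by donating part of a salary premium to an effective charity.
        return salary_premium * years * donated_fraction / dollars_per_qaly

    t = teacher_impact(students_per_year=30, years=30, qaly_gain_per_student=0.1)
    d = donor_impact(salary_premium=30_000, years=30,
                     donated_fraction=0.5, dollars_per_qaly=500)
    print(f"teacher: ~{t:.0f} QALYs, donor: ~{d:.0f} QALYs")

Whether the gap comes out at one order of magnitude or several depends entirely on the inputs, which is exactly why doing the measurement matters.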

Replies from: mytyde
comment by mytyde · 2012-11-20T22:37:09.502Z · LW(p) · GW(p)

Wealth doesn't appear out of nowhere. The decreasing wages of professionals don't mean there's less wealth in the system; they mean that wealth is distributed differently, mostly towards creating more Warren Buffetts. The donation of a small sum from an accumulated fortune cannot create an impact equivalent to that fortune having been distributed in its entirety as fairly paid labor. A charity by definition requires an additional, socially superfluous layer of bureaucracy which is paid for out of donations. The charities that billionaires tend to support also don't necessarily apply their spending with any semblance of efficiency, if they even affect good policy decisions with the impact they do have. Charities cannot achieve economies of scale, nor do they have a secular source of funding (their continued existence depends on appeasing potential donors, not on efficient performance). Teachers are not slaves.

Replies from: None, Athrelon
comment by [deleted] · 2012-11-21T14:07:19.221Z · LW(p) · GW(p)

Wealth doesn't appear out of nowhere.

It isn't zero-sum either. I'm fairly certain Warren Buffett creates quite a lot of it. I'm also sure the marginal value of yet another schoolteacher pales in comparison.

The donation of a small sum of an accumulated fortune cannot create an impact equivalent to if that fortune in its entirety had been distributed in fairly paid labor.

My inner anthropologist from Jupiter is confused by this sentence: what is this "fair pay" thing? Please elaborate on it.

The donation of a small sum of an accumulated fortune cannot create an impact equivalent to if that fortune in its entirety had been distributed in fairly paid labor.

Assuming for the sake of argument that Warren Buffett is a vampire squid on the face of the world... why not? You have humans routinely buying liquor and cigarettes instead of malaria nets for their own children, or wasting dollars on negative-sum games like jostling for positional goods.

And to avoid misunderstandings, I do think you can make a good case that some people and institutions probably are vampire squids in the sense I used here.

The charities that billionaires tend to support also don't necessarily apply their spending in any semblance of efficiency, if they even affect good policy decisions with the impact they do have.

We aren't talking about what billionaires tend to support. We are using a thought experiment of primary school teacher vs. efficient-charity-donating rich dude to help you decide where on that curve you want to be.

I'm pretty sure that being a pirate in Somalia and donating the proceeds to efficient charity is probably justified by utilitarian calculations. If you can't possibly imagine this being the case, pause to consider that a criminal is just a start-up government, a local bandit who would ideally like the monopoly on violence that a real state has but just isn't good enough for it yet. The best approach for the pirate would be to just stay in port and have passing ships pay him protection money. We accept that some taxation of trade routes can produce better results than not taxing them at all. I think it is clear that most government spending has a much worse impact per dollar than efficient charity. If you disagree, why in the world aren't you donating money to, say, the US government, or writing up an argument for doing so? Model piracy by me and my merry armed band in Somalia as a tax on the trade route, then judge it as you would a government program with the same bang for the buck.

To give another controversial example, I find it plausible that selling marijuana and several other kinds of drugs (but not all) full-time and donating the money to efficient charity beats being a primary school teacher or working in a kindergarten, on utilitarian grounds.

Replies from: mytyde, Larks
comment by mytyde · 2012-12-04T21:29:56.727Z · LW(p) · GW(p)

What is the mechanism by which Warren Buffett creates wealth by himself? If you're talking about investing, couldn't a good supercomputer hypothetically do the same job for free? Anyway, Buffett doesn't make all of his own investments: most capitalists don't. They engage in joint ventures and mutual funds. Their only "contribution" to these is being the owner of the investment funds (an arbitrary title when removed from historical context). Buffett does contribute to society, but not (through some divine justice) in proportion to the compensation he is allotted.

Consider if Warren Buffett's teachers had not taught him to do math and he hadn't had the opportunity to do anything he did. What if his local librarian hadn't been able to help him find books on investment, if he hadn't happened upon mentors who could teach him business, if he had been born poor and had had to work minimum wage from a young age? Now consider that there are other potential Warren Buffetts who would thrive as much as he did given the opportunity, but who actually DO experience such setbacks.

Anyways, to assume that private investment is a social imperative is not friendly to reality. China right now has a totalitarian government which controls investments (including closely regulating foreign investment), and its economy has been exploding for decades as a result of infrastructure investment. There are plenty of models in-between China and the US which also function fine.

In the United States, we consistently overestimate the contribution of private industry in developing our infrastructure. Cars are only possible because of roads, telephones were only possible because of telephone wires, the internet & technology revolution were only possible because of massive Cold War defense department spending (the ARPANET was the prototype for the internet). It is not an exaggeration to say that the public has a far greater stake in private business than it realizes. In some cases, the privatization of public research can justifiably be seen as a transfer of wealth from taxpayers towards the fortunes of big business investors. http://en.wikipedia.org/wiki/ARPANET

comment by Larks · 2012-11-21T15:59:40.795Z · LW(p) · GW(p)

We accept some taxation of the trade routes can produce better results than not taxing it at all.

Unless you're using "can" in a very weak sense - as in "if the revenue were donated to efficient charity" - I don't think that's true, because taxes on trade routes cause additional wasteful substitution towards intra-national trade. Taxes should fall on income (or negative externalities).

Replies from: None
comment by [deleted] · 2012-11-21T16:02:57.011Z · LW(p) · GW(p)

You are taking the quote in too narrow a context. Replace pirates preying on internal or international shipping with a bunch of thugs who show up in the market and take every tenth apple for themselves. Or who rob local farmers and craftsmen and take some of their stuff. Or road warriors enacting an environmentally friendly carbon tax on fuel.

Replies from: Larks
comment by Larks · 2012-11-21T16:07:31.144Z · LW(p) · GW(p)

I don't understand what you mean. Is your point that taxes can be justified, and that sufficiently advanced piracy is indistinguishable from taxes? Or that taxes are better than pirates? Or that taxes on trade routes are better than other taxes? I agree with the first two, and was objecting to the last one.

Replies from: None, None
comment by [deleted] · 2012-11-21T16:22:50.041Z · LW(p) · GW(p)

I don't understand what you mean. Is your point that taxes can be justified, and that sufficiently advanced piracy is indistinguishable from taxes?

Yep.

Or that taxes are better than pirates?

Generally they are, because taxes tend towards efficient banditry at the Laffer maximum. A pirate spending a fraction of his income on efficient charity probably beats taxes, though. Naturally a better utilitarian solution is to give that pirate more and more power so he can better and better approximate taxation and spend more on efficient charity, until the marginal gain of efficient charity drops to that of other government spending. Now of course maybe taxes are already too high and do more harm than good, in which case the pirate should stop earlier.

Or that taxes on trade routes are better than other taxes?

I didn't mean to claim this.

comment by [deleted] · 2012-11-21T16:11:31.677Z · LW(p) · GW(p)

Efficient charity is currently a better use of wealth than most government programs. If you are a utilitarian and OK with something being taxed and that wealth then being spent on government programs, you should ceteris paribus be OK with a hypothetical pirate taking a share and giving it to optimal charity. Many people seem not to be. This suggests compartmentalization.

Also, using illegal means to obtain wealth to donate to optimal charity is a good strategy that gets very little attention. This is especially the case considering the general relative ineptitude of the criminals who get caught. It is probably true that many LWers could find their strongest comparative advantage in crime.

comment by Athrelon · 2012-11-21T14:58:02.160Z · LW(p) · GW(p)

glances at thread

Econ is the mind-killer.

Replies from: None, mytyde
comment by [deleted] · 2012-11-21T15:52:53.329Z · LW(p) · GW(p)

GLaDOS has already complained about our community being more easily mind-killed about it than it once was. This is why I recently requested a sequence on economics, and especially prediction markets.

comment by mytyde · 2013-01-15T04:17:13.885Z · LW(p) · GW(p)

Poorly informed anything is a mind-killer: http://www.youtube.com/watch?v=JroogX7zBek

comment by Larks · 2012-11-21T16:03:19.611Z · LW(p) · GW(p)

If you didn't become a teacher, someone else would. If you didn't donate to charity, no one would fill your place. Hence, you should donate to charity and not become a teacher.

As an aside, I'd like to see evidence that teachers or social workers have much impact, even ignoring replaceability concerns.

Replies from: michaeljohnston0
comment by michaeljohnston0 · 2012-11-25T07:04:14.079Z · LW(p) · GW(p)

I'd also like to see evidence that Buffetts don't have a significant social impact through the work they do. Successful companies create valued products, jobs, etc., and depend on investment. On the other hand, they may also affect income inequality and hence lead to less efficient allocations of resources in terms of quality of life. Anyone know a good starting point for reading about this?

comment by JaySwartz · 2012-11-20T01:28:19.765Z · LW(p) · GW(p)

I am compelled to point to a fundamental supply-chain issue: intermediary drag. Simply stated, the greater the number of steps, the greater the overhead expense. While aggregators have some advantage on the purchasing side, they are an added expense on the distribution side in the vast majority of cases. If they enable some form of extended access, intermediaries may have value, but the limited nature of charitable donations makes intermediaries an unlikely advantage.