On not diversifying charity

post by DanielLC · 2014-03-14T05:14:08.909Z · LW · GW · Legacy · 73 comments

Contents

  Standard Explanation:
  Corrections to Account for Reality:
    Uncertainty:
    Nonlinear Utility Function:
    Risk-Aversion:
    Discontinuity:
  A few other notes:
    Small Charities:
    Timeless Decision Theory:

A common belief within the Effective Altruism movement is that you should not diversify charity donations when your donation is small compared to the size of the charity. This is counter-intuitive, and most people disagree with it. A Mathematical Explanation of Why Charity Donations Shouldn't Be Diversified has already been written, but it uses a simplistic model. Perhaps you're uncertain about which charity is best; perhaps charities are not continuous, let alone differentiable, and any donation is worthless unless it gives the charity enough money to finally afford another project; perhaps your utility function is nonlinear; and to top it all off, perhaps, rather than accepting the standard idea of expected utility, you are risk-averse.

Standard Explanation:

If you are too lazy to follow the link, or you just want to see me rehash the same argument, here's a summary.

The utility of a donation is differentiable. That is to say, if donating one dollar gives you one utilon, donating another dollar will give you close to one utilon. Not exactly the same, but close. This means that, for small donations, the utility can be approximated as a linear function. In this case, the best way to donate is to find the charity with the highest slope, and donate everything you can to it. Since the amount you donate is small compared to the size of the charity, a first-order approximation will be fairly accurate. The good done by that strategy is close to its first-order prediction, its first-order prediction beats that of every other strategy, and every other strategy likewise does about what its prediction says, so even if this strategy is sub-optimal, it's at least very close.
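Here's a minimal numerical sketch of that argument. The concave utility curves, budgets, and donation size below are all made up for illustration; the only point is that a donation which is tiny relative to each charity's budget is best spent entirely on the charity with the highest slope, and that the first-order prediction is accurate.

```python
import math

BUDGET_A = 10_000_000.0   # what charity A already has (invented)
BUDGET_B = 5_000_000.0    # what charity B already has (invented)
MY_DONATION = 100.0       # small compared to either budget

def utility(budget_a, budget_b):
    # Invented diminishing-returns utility of the two charities' total budgets.
    return 120.0 * math.log(budget_a) + 100.0 * math.log(budget_b)

def gain_from_split(dollars_to_a):
    dollars_to_b = MY_DONATION - dollars_to_a
    return utility(BUDGET_A + dollars_to_a, BUDGET_B + dollars_to_b) - utility(BUDGET_A, BUDGET_B)

# Marginal utility per dollar ("slope") at the current budgets.
slope_a = 120.0 / BUDGET_A   # 1.2e-5 utilons per dollar
slope_b = 100.0 / BUDGET_B   # 2.0e-5 utilons per dollar, so B wins on the margin

for to_a in (0.0, 50.0, 100.0):   # all to B, half and half, all to A
    predicted = to_a * slope_a + (MY_DONATION - to_a) * slope_b
    print(f"${to_a:>3.0f} to A: actual gain {gain_from_split(to_a):.8f}, "
          f"first-order prediction {predicted:.8f}")
# "All to B" comes out best, and every actual gain matches its first-order
# prediction to many decimal places, which is why the linear reasoning is
# safe when the donation is small.
```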

Corrections to Account for Reality:

Uncertainty:

Uncertainty is simple enough. Just replace utility with expected utility. Everything will still be continuous, and the reasoning works pretty much the same.
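As a toy illustration (the 90% figure and the utility curve are invented), a probability-weighted average of functions that are each smooth in the donation is itself smooth in the donation, so nothing in the marginal argument changes:

```python
import math

P_LEGIT = 0.9   # invented probability that the charity really does what it claims

def expected_utility(donation, budget=10_000_000.0):
    u_if_legit = 120.0 * math.log(budget + donation)   # smooth in the donation
    u_if_fraud = 120.0 * math.log(budget)              # doesn't depend on it at all
    return P_LEGIT * u_if_legit + (1 - P_LEGIT) * u_if_fraud

print(expected_utility(100.0) - expected_utility(0.0))        # gain from $100
print(2 * (expected_utility(50.0) - expected_utility(0.0)))   # twice the gain from $50
# The two numbers are nearly identical: expected utility is still close to
# linear in a small donation, so the same "highest slope" rule applies.
```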

Nonlinear Utility Function:

If your utility function is nonlinear, this is fine as long as it's differentiable. Perhaps saving a million lives isn't a million times better than saving one, but saving the millionth life is about as good as saving the one after that, right? Maybe each additional person counts for a little less, but it's not as though the first million all matter the same amount and then you stop caring about anyone after that.

In this case, the effect of the charity is differentiable with respect to the donation, and the utility is differentiable with respect to the effect of the charity, so the utility is differentiable with respect to the donation.
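A toy chain-rule check of that claim, with invented functions for both the charity's effect and your utility of that effect:

```python
import math

def lives_saved(donation):
    # Invented effect of the charity: diminishing returns in the donation.
    return 2000.0 * math.log(1.0 + donation / 1000.0)

def utility(lives):
    # Invented nonlinear (slightly concave) but differentiable utility of lives.
    return lives - 1e-5 * lives ** 2

def slope(f, x, h=1e-3):
    # Numerical derivative by central differences.
    return (f(x + h) - f(x - h)) / (2 * h)

d = 50_000.0
via_chain_rule = slope(utility, lives_saved(d)) * slope(lives_saved, d)
directly       = slope(lambda x: utility(lives_saved(x)), d)
print(via_chain_rule, directly)   # the two slopes agree, as the chain rule says
```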

Risk-Aversion:

If you're risk-averse, it gets a little more complicated.

In this case, you don't use expected utility. You use something else, which I will call meta-utility. Perhaps it's expected utility minus the standard deviation of utility. Perhaps it's expected utility, but largely ignoring extreme tails. Formally, it's a function from a random variable, representing all the possibilities of what could happen, to the reals. Strictly speaking, you only need an ordering, but that's not good enough here, since it needs to be differentiable.

Differentiable is more confusing in this case. It depends on the metric you're using. The sense we'll use here is that having a sufficiently small probability of a given change, or a given probability of a sufficiently small change, counts as a small change. For example, if you only care about the median utility, your meta-utility isn't differentiable. If I flip a coin and you win a million dollars if it lands on heads, then you will count that as worth a million dollars if the coin is slightly weighted towards heads, and nothing if it's slightly weighted towards tails, no matter how close it is to being fair. But that's not realistic. You can't track probabilities that precisely. You might care less about the tails, so that only outcomes in the 40% - 60% range matter much, but you're going to pick something continuous. In fact, I think we can safely say that you're going to pick something differentiable. If I add a 0.1% chance of saving a life given some condition, it will make about the same difference as adding another 0.1% chance given the same condition. If you're risk-averse, you'd care more about a 0.1% chance of saving a life if it takes effect during the worst-case scenario than during the best case, but you'd care about as much for adding a 0.1% chance of saving one life in the worst case as for upgrading that same chance from saving one life to saving two.
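Here is a sketch of one meta-utility of the kind described above, expected utility minus the standard deviation of utility, applied to an invented background gamble. The check is that adding a 0.1% chance of one extra utilon in the bad branch, twice in a row, moves the meta-utility by nearly the same amount both times, i.e. it responds to small changes like a differentiable function.

```python
import math

def meta_utility(outcomes):
    # `outcomes` is a list of (probability, utility) pairs summing to probability 1.
    mean = sum(p * u for p, u in outcomes)
    variance = sum(p * (u - mean) ** 2 for p, u in outcomes)
    return mean - math.sqrt(variance)   # expected utility minus its standard deviation

base  = [(0.5, 100.0), (0.5, 50.0)]                    # an invented background gamble
bump1 = [(0.5, 100.0), (0.499, 50.0), (0.001, 51.0)]   # +0.1% chance of 1 more utilon in the bad branch
bump2 = [(0.5, 100.0), (0.498, 50.0), (0.002, 51.0)]   # the same small change, applied again

print(meta_utility(bump1) - meta_utility(base))
print(meta_utility(bump2) - meta_utility(bump1))
# The two increments are nearly identical, so this risk-averse meta-utility
# still responds to small changes in an approximately linear way, and the
# "highest slope" comparison between charities goes through.
```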

Once you accept that it's differentiable, the same reasoning follows as with expected utility. A differentiable function of a differentiable function is differentiable, so the meta-utility of a donation is differentiable with respect to the amount donated.

To make the reasoning more clear, here's an example:

Charity A saves one life per grand. Charity B saves 0.9 lives per grand. Charity A has ten million dollars, and Charity B has five million. One or more of these charities may be fraudulent, and not actually doing any good. You have $100, and you can decide where to donate it.

The naive view is to split the $100, since you don't want to risk spending it on something fraudulent. That makes sense if you care about how many lives you save, but not if you care about how many people die. They sound like they're the same thing, but they're not.

If you donate everything to Charity A, it has $10,000,100 and Charity B has $5,000,000. If you donate half and half, Charity A has $10,000,050 and Charity B has $5,000,050. It's a little more diversified. Not much more, but you're only donating $100. Maybe the diversification outweighs the good, maybe not. But if you decide that the extra diversification matters more than the extra good, why not donate everything to Charity B? That way, Charity A has $10,000,000, and Charity B has $5,000,100. If you were controlling all the money, you'd probably move a million or so from Charity A to Charity B, until it's well and truly diversified. Or maybe it's already pretty close to the ideal and you'd just move a few grand. You'd definitely move more than $100. There's no way it's that close to the optimum. But you only control the $100, so you just do as much as you can with that to make it more diversified, and send it all to Charity B. Maybe it turns out that Charity B is a fraud, but all is not lost, because other people donated ten million dollars to Charity A, and lots of lives were saved, just not by you.
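To put rough numbers on this, here is a sketch that assumes a 10% chance, independently, that each charity is a fraud (the example above doesn't give a probability, so that figure is invented) and compares the three allocations of your $100 by the total lives saved by everyone's money, which is the relevant quantity if you care about how many people die rather than how many lives you personally saved.

```python
from itertools import product

P_FRAUD = 0.10                    # assumed, independently for each charity
LIVES_PER_DOLLAR_A = 1.0 / 1000   # Charity A: one life per grand
LIVES_PER_DOLLAR_B = 0.9 / 1000   # Charity B: 0.9 lives per grand
OTHERS_A = 10_000_000.0           # everyone else's donations to A
OTHERS_B = 5_000_000.0            # everyone else's donations to B

def stats(my_a, my_b):
    """Mean and standard deviation of total lives saved, over fraud scenarios."""
    outcomes = []
    for a_legit, b_legit in product([1, 0], repeat=2):
        p = ((1 - P_FRAUD) if a_legit else P_FRAUD) * ((1 - P_FRAUD) if b_legit else P_FRAUD)
        lives = (a_legit * (OTHERS_A + my_a) * LIVES_PER_DOLLAR_A
                 + b_legit * (OTHERS_B + my_b) * LIVES_PER_DOLLAR_B)
        outcomes.append((p, lives))
    mean = sum(p * x for p, x in outcomes)
    sd = sum(p * (x - mean) ** 2 for p, x in outcomes) ** 0.5
    return mean, sd

for label, (a, b) in [("all $100 to A", (100, 0)),
                      ("$50 and $50  ", (50, 50)),
                      ("all $100 to B", (0, 100))]:
    mean, sd = stats(a, b)
    print(f"{label}: expected lives saved {mean:.3f}, std. dev. {sd:.3f}")
# All three rows are nearly identical: your $100 shifts the expectation by
# less than a hundredth of a life and the spread by about a hundredth. The
# expectation favours A; if you weight the spread heavily enough to override
# that, the same weighting favours sending the whole $100 to B rather than
# splitting it.
```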

Discontinuity:

The final problem to look at is that the effects of donations aren't continuous. The place I've seen this come up the most is in discussions of vegetarianism. If you don't eat meat, it's not going to make enough difference to keep the stores from ordering another crate of meat, which means exactly the same number of animals are slaughtered.

Unless, of course, you were the straw that broke the camel's back, and you did keep a store from ordering a crate of meat, and you made a huge difference.

There are times when you might be able to figure that out beforehand. If you're deciding whether or not to vote, and you're not in a battleground state, you know you're not going to cast the deciding vote, because you have a fair idea of who will win and by how much. But you have no idea at what point a store will order another crate of meat, or when a charity will be able to send another crate of mosquito nets to Africa, or something like that. If you make a graph of the number of crates a charity sends by percentile, you'll get a step function, where there's a certain chance of sending 500 crates, a certain chance of sending 501, etc. Your donation just shifts the whole thing to the left by epsilon, so it's a little more likely each shipment will be made. What actually happens isn't continuous with respect to your donation, but you're uncertain, and taking what happens as a random variable, it is continuous.
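A sketch of that smoothing effect, with an invented crate price and uniform uncertainty about how far the charity already is toward its next crate:

```python
import random

CRATE_COST = 5_000.0   # hypothetical cost of one crate of mosquito nets
random.seed(0)

def expected_extra_crates(donation, trials=200_000):
    extra = 0
    for _ in range(trials):
        # Unknown amount the charity already has toward its next crate.
        already_toward_next = random.uniform(0, CRATE_COST)
        extra += int((already_toward_next + donation) // CRATE_COST)
    return extra / trials

for d in (10, 100, 1_000):
    print(f"${d}: ~{expected_extra_crates(d):.4f} extra crates, "
          f"vs {d / CRATE_COST:.4f} from the smooth approximation")
# What actually happens is a step (zero or one whole extra crates), but in
# expectation your dollars buy crates at a steady rate, so the marginal
# reasoning still holds.
```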

A few other notes:

Small Charities:

In the case of a sufficiently small charity or a sufficiently large donation, the argument is invalid. It's not that it just takes more finesse, like the other complications I listed: the conclusion is false. If you're paying a good portion of the budget, and the marginal effects change significantly due to your donations, you should probably donate to more than one charity even if you're not risk-averse and your utility function is linear.

I would expect that the next best charity you manage to find would be worse by more than a few percent, so I really doubt it would be worth diversifying unless you personally are responsible for more than a third of the donations.

An example of this is keeping money for yourself. The thousandth dollar you spend on yourself has maybe a tenth of the effect the hundredth does, and the entire budget is donated by you. The only time you shouldn't diversify is if the marginal benefit of the last dollar you spend on yourself is still higher than what you could get by donating it to charity.
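A sketch of that rule, under an invented log utility for personal spending and an invented constant value per dollar donated (the comment thread below, on valuing yourself as a thousand strangers, is in the same spirit):

```python
INCOME = 60_000.0
CHARITY_VALUE_PER_DOLLAR = 1.0 / 3_000.0   # invented: one utilon per $3000 donated

def my_marginal_utility(spent_on_self):
    # Invented log utility for personal spending: the marginal dollar is
    # worth 10 / (dollars already spent on yourself).
    return 10.0 / spent_on_self

# Spend on yourself until the marginal dollar does less good than donating it,
# then donate everything after that. Equating the two marginal values:
#   10 / spend = CHARITY_VALUE_PER_DOLLAR
spend_on_self = min(INCOME, 10.0 / CHARITY_VALUE_PER_DOLLAR)
donate = INCOME - spend_on_self

print(f"spend ${spend_on_self:,.0f} on yourself, donate ${donate:,.0f}")
print(my_marginal_utility(spend_on_self), CHARITY_VALUE_PER_DOLLAR)  # equal at the split
# Because your own spending is 100% of this "charity's" budget, its diminishing
# returns actually bite, so a genuine split is correct here -- unlike the
# large-charity case, where your donation doesn't move the marginal value and
# one charity should get everything.
```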

Another example is avoiding animal products. Avoiding steak is much more cost-effective than avoiding milk, but once you've stopped eating meat, you're stuck with things like avoiding milk.

Timeless Decision Theory:

If other people are going to make similar decisions to you, your effective donation is larger, so the caveats about small charities apply. That being said, I don't think this is really much of an issue.

If everyone chooses independently, even if most of their choices correlate, the end result will be that the charities end up with funding levels at which some people donate to one and others donate to another. If this happens, chances are it would be worthwhile for a few people to actually split their donations, but it won't make a big difference. They might as well just donate it all to one.

I think this will only become a problem if you're donating to the top charity on GiveWell regardless of how closely they rated second place, or you're donating based purely on theory and have no idea whether that charity is capable of using more money.

73 comments

Comments sorted by top scores.

comment by CarlShulman · 2014-03-14T07:23:04.720Z · LW(p) · GW(p)

Moral pluralism or uncertainty might give a reason to construct a charity portfolio which serves multiple values, as might emerge from something like the parliamentary model.

Replies from: DanielLC
comment by DanielLC · 2014-03-14T22:53:51.656Z · LW(p) · GW(p)

I still don't think it would be a good idea to diversify.

If the parliament doesn't know the budget of each charity beforehand, they would be able to improve on the normal decisions by betting on it. For example, if Alice wants to donate to Charity A, and Bob wants to donate to Charity B, they could agree to donate half to each, but they'd be better off donating to the one that gets less money.

Replies from: Nisan
comment by Nisan · 2014-03-15T18:29:20.598Z · LW(p) · GW(p)

A parliament that can make indefinitely binding contracts turns into a VNM-rational agent. But a parliament that can't make binding contracts might always diversify.

Replies from: Squark
comment by Squark · 2014-03-22T20:58:32.253Z · LW(p) · GW(p)

If the parliament consists of UDT agents then effectively it can make binding contracts.

comment by benkuhn · 2014-03-14T16:14:27.172Z · LW(p) · GW(p)

This post has a number of misconceptions that I would like to correct.

It is a truism within the Effective Altruism movement that you should not diversify charity donations.

Not really. Timeless decision theory considerations suggest that you actually should be splitting your donations, because globally we should be splitting our options. I think many other effective altruists take this stance as well. (See below for explanation.)

Nonlinear Utility Function:

If your utility function is nonlinear, this is fine as long as it's differentiable.

Not necessarily. You are leaving out an important point here, which is that this argument only goes through when differentiability implies that on the margin your utility can be well-approximated by a linear function. That only works if the charity's budget is large relative to your contribution. If you're donating large amounts of money (say over $10k to a small org like MIRI or CEA), this may not hold and it's not necessarily irrational to donate to multiple organizations.

Risk-Aversion:

Also seconding what Squark pointed out, which is that risk-aversion is a feature of your utility function, not something that you tack onto your utility function. Wikipedia has a good description/picture, but basically, it's rational to be risk averse for gains in widgets if and only if your utility function has diminishing returns to widgets. (It's also worth pointing out, since by your phrasing I'm not sure whether you know this, that literally everyone's utility function is nonlinear in literally everything.)

Since this is the case when "widgets" are donations to a particular charity, the socially optimal global allocation of funds does not have all the funds going to the single highest-EV charity. If you follow timeless decision theory, this suggests that you should make the split that you think would be optimal if everyone else who followed TDT made that split (which accounts for a sizeable amount of donation).

Replies from: Squark
comment by Squark · 2014-03-15T11:47:34.095Z · LW(p) · GW(p)

The TDT argument is not solid (see the OP's reply in the post)

comment by Squark · 2014-03-14T07:59:07.627Z · LW(p) · GW(p)

If you're risk-averse, it gets a little more complicated. In this case, you don't use expected utility

As long as you're a rational agent, you have to use expected utility. See VNM theorem.

Replies from: solipsist, DanielLC, Lumifer, Bobertron
comment by solipsist · 2014-03-14T16:50:43.189Z · LW(p) · GW(p)

To be clear: your VNM utility function does not have to correspond directly to utilitarian utility if you are not a strict utilitarian. Even if you are a strict utilitarian, diversifying donations can still, in theory, be VNM rational. E.g.:

A trustworthy Omega appears. He informs you that if you are not personally responsible for saving 1,000 QALYs, he will destroy the earth. If you succeed, he will leave the earth alone. Under these contrived conditions, the amount of good you are responsible for is important, and you should be very risk-averse with that quantity. If there's even a 1 in a million risk that the 7 effective charities you donated to were all, by coincidence, frauds, you would be well advised to donate to an eighth (even though the eighth charity will not be as effective as the other seven).

Replies from: Squark
comment by Squark · 2014-03-14T19:07:02.430Z · LW(p) · GW(p)

Diversifying donations is not rational as long as the marginal utility per dollar generated by a charity is affected negligibly by the small sum you are donating. This assumption seems correct for a large class of utility functions under realistic conditions.

comment by DanielLC · 2014-03-14T17:51:28.866Z · LW(p) · GW(p)

There are serious problems with not using expected utility, but even if you still decide to be risk-averse, this doesn't change the conclusion that you should only donate to one charity.

comment by Lumifer · 2014-03-14T14:39:35.771Z · LW(p) · GW(p)

As long as you're a rational agent, you have to use expected utility. See VNM theorem.

That seems to be a rather narrow and not very useful definition of a "rational agent" as applied to humans.

Replies from: Squark
comment by Squark · 2014-03-14T14:45:19.552Z · LW(p) · GW(p)

I think it is the correct definition in the sense that you should behave like one.

Replies from: Lumifer
comment by Lumifer · 2014-03-14T15:02:01.775Z · LW(p) · GW(p)

Why should I behave as if my values satisfy the VNM axioms?

Rationality here is typically defined as either epistemic (make sure your mental models match reality well) or instrumental (make sure the steps you take actually lead to your goals).

Defining rationality as "you MUST have a single utility function which MUST follow VNM" doesn't strike me as a good idea.

Replies from: Squark
comment by Squark · 2014-03-14T15:29:17.452Z · LW(p) · GW(p)

Because the VNM axioms seem so intuitively obvious that violating them strongly feels like making an error. Of course I cannot prove them without introducing another set of axioms which can be questioned in turn etc. You always need to start with some assumptions.

Which VNM axiom would you reject?

Replies from: asr, Eugine_Nier, Lumifer
comment by asr · 2014-03-14T21:08:05.178Z · LW(p) · GW(p)

I would reject the completeness axiom. I often face choices where I don't know which option I prefer, but where I would not agree that I am indifferent. And I'm okay with this fact.

I also reject the transitivity axiom -- intransitive preference is an observed fact for real humans in a wide variety of settings. And you might say this is irrational, but my preference are what they are.

Replies from: Squark
comment by Squark · 2014-03-14T21:39:28.654Z · LW(p) · GW(p)

Can you give an example of situations A, B, C for which your preferences are A > B, B > C, C > A? What would you do if you need to choose between A, B, C?

Replies from: asr
comment by asr · 2014-03-15T16:02:06.750Z · LW(p) · GW(p)

Sure. I'll go to the grocery store and have three kinds of tomato sauce and I'll look at A and B, and pick B, then B and C, pick C, and C and A, and pick A. And I'll stare at them indecisively until my preferences shift. It's sort of ridiculous -- it can take something like a minute to decide. This is NOT the same as feeling indifferent, in which case I would just pick one and go.

I have similar experiences when choosing between entertainment options, transport, etc. My impression is that this is an experience that many people have.

If you google "intransitive preference" you get a bunch of references -- this one has cites to the original experiements: http://www.stanford.edu/class/symbsys170/Preference.pdf

Replies from: Squark
comment by Squark · 2014-03-15T16:48:48.934Z · LW(p) · GW(p)

It seems to me that what you're describing are not preferences but spur of the moment decisions. A preference should be thought of as in CEV: the thing you would prefer if you thought about it long enough, knew enough, were more the person you want to be etc. The mere fact you somehow decide between the sauces in the end suggests you're not describing a preference. Also I doubt that you have terminal values related to tomato sauce. More likely, your terminal values involve something like "experiencing pleasure" and your problem here is epistemic rather than "moral": you're not sure which sauce would give you more pleasure.

Replies from: asr
comment by asr · 2014-03-15T23:06:10.864Z · LW(p) · GW(p)

You are using preference to mean something other than I thought you were.

I'm not convinced that the CEV definition of preference is useful. No actual human ever has infinite time or information; we are always making decisions while we are limited computationally and informationally. You can't just define away those limits. And I'm not at all convinced that our preferences would converge even given infinite time. That's an assumption, not a theorem.

When buying pasta sauce, I have multiple incommensurable values: money, health, and taste. And in general, when you have multiple criteria, there's no non-paradoxical way to do rankings. (This is basically Arrow's theorem). And I suspect that's the cause for my lack of preference ordering.

Replies from: Squark
comment by Squark · 2014-03-16T08:06:14.174Z · LW(p) · GW(p)

No actual human ever has infinite time or information

Of course. But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.

When buying pasta sauce, I have multiple incommensurable values: money, health, and taste.

Money is not a terminal value for most people. I suspect you want money because of the things it can buy you, not as a value in itself. I think health is also instrumental. We value health because illness is unpleasant, might lead to death and generally interferes with taking actions to optimize our values. The unpleasant sensations of illness might well be commensurable with the pleasant sensations of taste. For example you would probably pass up a gourmet meal if eating it implies getting cancer.

Replies from: Lumifer
comment by Lumifer · 2014-03-16T18:16:51.380Z · LW(p) · GW(p)

But rationality means your decisions should be as close as possible to the decisions you would make if you had infinite time and information.

However you can not know what decisions you would make if you had infinite time and information. You can make guesses based on your ideas of convergence, but that's about it.

Replies from: Squark
comment by Squark · 2014-03-17T19:47:12.248Z · LW(p) · GW(p)

A Bayesian never "knows" anything. She can only compute probabilities and expectation values.

Replies from: Lumifer
comment by Lumifer · 2014-03-17T20:15:55.469Z · LW(p) · GW(p)

Can she compute probabilities and expectation values with respect to decisions she would make if she had infinite time and information?

Replies from: Squark
comment by Squark · 2014-03-19T19:49:30.298Z · LW(p) · GW(p)

I think it should be possible to compute probabilities and expectation values of absolutely anything. However to put it on a sound mathematical basis we need a theory of logical uncertainty.

Replies from: Lumifer
comment by Lumifer · 2014-03-19T20:08:07.403Z · LW(p) · GW(p)

I think it should be possible to compute probabilities and expectation values of absolutely anything.

On the basis of what do you think so? And what entity will be doing the computing?

Replies from: Squark
comment by Squark · 2014-03-21T08:09:31.684Z · LW(p) · GW(p)

I think so because conceptually a Bayesian expectation value is your "best effort" to estimate something. Since you can always do your "best effort" you can always compute the expectation value. Of course, for this to fully make sense we must take computing resource limits into account. So we need a theory of probability with limited computing resources aka a theory of logical uncertainty.

Replies from: Lumifer
comment by Lumifer · 2014-03-21T14:31:05.844Z · LW(p) · GW(p)

conceptually a Bayesian expectation value is your "best effort" to estimate something.

Not quite. Conceptually a Bayesian expectation is your attempt to rationally quantify your beliefs which may or may not involve best efforts. That requires these beliefs to exist. I don't see why it isn't possible to have no beliefs with regard to some topic.

Since you can always do your "best effort" you can always compute the expectation value.

That's not very meaningful. You can always output some number, but so what? If you have no information you have no information and your number is going to be bogus.

Replies from: Squark
comment by Squark · 2014-03-23T15:40:36.030Z · LW(p) · GW(p)

If you don't believe that the process of thought asymptotically converges to some point called "truth" (at least approximately), what does it mean to have a correct answer to any question?

Meta-remark: Whoever is downvoting all of my comments in this thread, do you really think I'm not arguing in good faith? Or are you downvoting just because you disagree? If it's the latter, do you think it's good practice or you just haven't given it thought?

Replies from: Lumifer
comment by Lumifer · 2014-03-23T19:47:58.905Z · LW(p) · GW(p)

If you don't believe that the process of thought asymptotically converges to some point called "truth" (at least approximately), what does it mean to have a correct answer to any question?

There is that thing called reality. Reality determines what constitutes a correct answer to a question (for that subset of questions which actually have "correct" answers).

I see no reason to believe that the process of thought converges at all, never mind asymptotically to 'some point called "truth"'.

Replies from: Squark
comment by Squark · 2014-03-23T19:51:59.864Z · LW(p) · GW(p)

How do you know anything about reality if not through your own thought process?

Replies from: Lumifer
comment by Lumifer · 2014-03-23T19:55:52.488Z · LW(p) · GW(p)

Through interaction with reality. Are you arguing from a brain-in-the-vat position?

Replies from: Squark
comment by Squark · 2014-03-23T20:12:51.323Z · LW(p) · GW(p)

Interaction with reality only gives you raw sensory experiences. It doesn't allow you to deduce anything. When you compute 12 x 12 and it turns out to be 144, you believe 144 is the one correct answer. Therefore you implicitly assume that tomorrow you won't somehow realize the answer is 356.

Replies from: Lumifer
comment by Lumifer · 2014-03-23T21:07:45.757Z · LW(p) · GW(p)

And what does that have to do with knowing anything about reality? Your thought process is not a criterion of whether anything is true.

Replies from: Squark
comment by Squark · 2014-03-23T21:10:42.272Z · LW(p) · GW(p)

But it is the only criterion you are able to apply.

Replies from: Lumifer
comment by Lumifer · 2014-03-23T21:57:05.253Z · LW(p) · GW(p)

Not quite. I can test whether a rock is hard by kicking it.

But this byte-sized back-and-forth doesn't look particularly useful. I don't understand where you are coming from -- to me it seems that you consider your thought processes primary and reality secondary. Truth, say you, is whatever the thought processes converge to, regardless of reality. That doesn't make sense to me.

Replies from: Squark
comment by Squark · 2014-03-25T12:26:02.367Z · LW(p) · GW(p)

When you kick the rock, all you get is a sensory experience (a quale, if you like). You interpret this experience as a sensation arising from your foot. You assume this sensation is the result of your leg undergoing something called "collision" with something called "rock". You deduce that the rock probably has a property called "hard". All of those are deductions you do using your model of reality. This model is generated from memories of previous experiences by a process of thought based on something like Occam's razor.

Replies from: Lumifer
comment by Lumifer · 2014-03-25T13:58:10.380Z · LW(p) · GW(p)

OK, and how do we get from that to 'the process of thought asymptotically converges to some point called "truth"'?

Replies from: Squark
comment by Squark · 2014-03-26T20:33:26.055Z · LW(p) · GW(p)

Since the only access to truth we might have is through our own thought, if the latter doesn't converge to truth (at least approximately) then truth is completely inaccessible.

Replies from: Lumifer
comment by Lumifer · 2014-03-27T00:18:15.509Z · LW(p) · GW(p)

if the latter doesn't converge to truth (at least approximately) then truth is completely inaccessible.

Why not? Granted that we have access to reality only through mental constructs and so any approximations to "the truth" are our own thoughts, but I don't see any problems with stating that sometimes these mental constructs adequately reflect reality (=truth) and sometimes they don't. I don't see where this whole idea of asymptotic convergence is coming from. There is no guarantee that more thinking will get you closer to the truth, but on the other hand sometimes the truth is right there, easily accessible.

Replies from: Squark
comment by Squark · 2014-03-27T09:21:58.222Z · LW(p) · GW(p)

I apologize but this discussion seems to be going nowhere.

Replies from: Lumifer
comment by Lumifer · 2014-03-27T14:36:37.399Z · LW(p) · GW(p)

Agreed.

comment by Eugine_Nier · 2014-03-18T02:13:28.074Z · LW(p) · GW(p)

So you care more about following the VNM axioms than about which utility function you are maximizing? That behavior is itself not VNM rational.

Replies from: Squark
comment by Squark · 2014-03-19T19:53:42.439Z · LW(p) · GW(p)

If you don't follow the VNM axioms you are not maximizing any utility function.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-22T06:34:03.131Z · LW(p) · GW(p)

So why do you care about maximizing any utility function?

Replies from: Squark
comment by Squark · 2014-03-23T17:26:40.769Z · LW(p) · GW(p)

What would constitute a valid answer to that question, from your point of view?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-23T17:43:30.180Z · LW(p) · GW(p)

I can't think of one. You're the one arguing for what appears to be an inconsistent position.

Replies from: Squark
comment by Squark · 2014-03-23T18:39:26.942Z · LW(p) · GW(p)

What is the inconsistency?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2014-03-27T05:27:00.912Z · LW(p) · GW(p)

Saying one should maximize a utility function, but not caring which utility function is maximized.

Replies from: Squark
comment by Squark · 2014-03-27T19:04:54.706Z · LW(p) · GW(p)

Who said I don't care which utility function is maximized?

comment by Lumifer · 2014-03-14T16:05:42.812Z · LW(p) · GW(p)

"feels like" is a notoriously bad criterion :-)

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

Haven't there been a lot of discussion about the applicability of VNM to human ethical systems? It looks like a well-trodden ground to me.

Replies from: DanielLC, Squark
comment by DanielLC · 2014-03-14T23:38:32.109Z · LW(p) · GW(p)

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

He doesn't express the entire ranking, but he does still have to choose the best option.

comment by Squark · 2014-03-14T18:52:51.601Z · LW(p) · GW(p)

"feels like" is a notoriously bad criterion :-)

What would be a good criterion? You cannot pull yourself up by your bootstraps. You need to start from something.

Before we even get to VNM axioms I would like to point out that humans do not operate in a VNM setting where a single consequentialist entity is faced with a sequence of lotteries and is able to express his preferences as one-dimensional rankings.

How would you want to operate? You mentioned instrumental rationality. I don't know how to define instrumental rationality without the VNM setting (or something similar).

Replies from: Lumifer
comment by Lumifer · 2014-03-14T19:52:29.145Z · LW(p) · GW(p)

What would be a good criterion?

Mismatch with reality.

I don't know how to define instrumental rationality without the VNM setting (or something similar)

Well, the locally canonical definition is this:

Instrumental rationality: achieving your values. Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences. On LW we sometimes refer to this as "winning".

I see nothing about VNM there.

Replies from: Squark
comment by Squark · 2014-03-14T20:02:28.048Z · LW(p) · GW(p)

Mismatch with reality.

I'm not following

Well, the locally canonical definition is this...

This is a nice motto, but how do you make a mathematical model out of it?

Replies from: Lumifer
comment by Lumifer · 2014-03-14T20:15:00.155Z · LW(p) · GW(p)

I'm not following

Well, you originally said "violating them strongly feels like making an error." I said that "feels like" is a weak point. You asked for an alternative. I suggested mismatch with reality. As in "violating X leads to results which do not agree with what we know of reality".

This is a nice motto, but how do you make a mathematical model out of it?

We were talking about how would a human qualify as a "rational agent". I see no need to make mathematical models here.

Replies from: Squark
comment by Squark · 2014-03-14T20:22:17.149Z · LW(p) · GW(p)

..."violating X leads to results which are do not agree with what we know of reality".

This only makes sense in epistemic context, not in instrumental one. How can a way of making decisions "not agree with what we know of reality"? Note that I'm making a normative statement (what one should do), not a descriptive statement ("people usually behave in such-and-such way").

We were talking about how would a human qualify as a "rational agent". I see no need to make mathematical models here.

There is always a need to make mathematical models, since before you have a mathematical model your understanding is imprecise. For example, a mathematical model allows you to prove that under certain assumptions diversifying donations is irrational.

Replies from: Lumifer
comment by Lumifer · 2014-03-14T20:58:13.854Z · LW(p) · GW(p)

How can a way of making decisions "not agree with what we know of reality"?

Ever heard of someone praying for a miracle?

There is always a need to make mathematical models since before you have a mathematical model your understanding is imprecise.

Bollocks! I guess next you'll be telling me I can not properly understand anything which is not expressed in numbers... :-P

Replies from: Squark
comment by Squark · 2014-03-14T21:32:43.712Z · LW(p) · GW(p)

Ever heard of someone praying for a miracle?

There is nothing intrinsic to the action of "praying for a miracle" which "disagrees with reality". It's only when we view this action in the context of a decision theory which says e.g. "choose the action which leads to maximal expected utility under the Solomonoff prior" can we say the action is "irrational" because, in fact, it does not lead to maximal expected utility. But in order to make this argument you need to assume a decision theory.

Replies from: Lumifer
comment by Lumifer · 2014-03-14T23:45:35.792Z · LW(p) · GW(p)

There is nothing intrinsic to the action of "praying for a miracle" which "disagrees with reality".

Given the definition of a miracle, I think there is, but anyway -- I'm willing to go out on a limb, take the shortcut, and pronounce praying for a miracle to fail instrumental rationality. Without first constructing a rigorous mathematical model of the expected utility under the Solomonoff prior. YMMV, of course.

comment by Bobertron · 2014-03-14T13:15:32.794Z · LW(p) · GW(p)

Ergo, if you're risk-averse, you aren't a rational agent. Is that correct?

Replies from: Squark
comment by Squark · 2014-03-14T13:35:17.209Z · LW(p) · GW(p)

Depends how you define "risk averse". When utility is computed in terms of another parameter, diminishing returns result in what looks like risk aversion. For example, suppose that you assign utility 1u to having 1000$, utility 3u to having 4000$ and utility 4u to having 10000$. Then, if you currently have 4000$ and someone offers you a lottery in which you have a 50% chance of losing 3000$ and a 50% chance of gaining 6000$, you will reject it (in spite of an expected gain of 1500$), since your expected utility for not participating is 3u whereas your expected utility for participating is 2.5u.
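For what it's worth, the arithmetic of that example spelled out:

```python
utility = {1_000: 1, 4_000: 3, 10_000: 4}   # dollars -> utilons, as given above

decline = utility[4_000]                                # keep the $4000: 3u for sure
accept  = 0.5 * utility[1_000] + 0.5 * utility[10_000]  # 50% at $1000, 50% at $10000
expected_dollars = 0.5 * 1_000 + 0.5 * 10_000           # $5500, i.e. +$1500 on average

print(decline, accept, expected_dollars)   # 3 > 2.5: declining maximizes expected utility
# even though accepting maximizes expected dollars -- diminishing returns alone
# produce the apparently "risk-averse" choice.
```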

comment by RomeoStevens · 2014-03-14T06:29:58.760Z · LW(p) · GW(p)

I worry that pushing on this will make smaller donors not want to donate. Putting all their eggs in one basket makes people nervous. And they are in a situation where we don't want any additional negative affect; they are already painfully giving money away.

comment by solipsist · 2014-03-14T06:16:42.335Z · LW(p) · GW(p)

Risk aversion is not unselfish. It implies a willingness to trade away expected good for greater assurance that you were responsible for good. I wouldn't fault you for that choice, but it's not effective altruism in the egoless sense.

Replies from: solipsist, DanielLC
comment by solipsist · 2014-03-14T06:46:46.197Z · LW(p) · GW(p)

I'd like to reemphasize that if you donate to multiple effective charities you are doing awesome stuff. Switching from average charities to a diversified portfolio of effective charities can make you hugely more effective -- it's like turning yourself into 10 people. Switching from a diversified portfolio of effective charities to the single most effective charity might make you maybe a few percentage points more effective. That's not nearly as important as doing whatever makes your brain enthusiastic about effective altruism. *The point I made in the parent comment is not of practical concern.

*I am making up these numbers -- don't quote me on this.

comment by DanielLC · 2014-03-14T18:00:02.953Z · LW(p) · GW(p)

Not generally. It's usually just there to counteract overconfidence bias. You want something that will never fail instead of something that will fail 1% of the time, because something that you think will never fail will only fail about 1% of the time, and something that you think will fail 1% of the time will fail around 10% of the time. It's much more than the apparent 1% advantage.

If you donate all your money to Deworm the World because you want a lot of good to still get done if SCI turns out to be a fraud, you're not being selfish. If you donate half your money to each because you personally want to be doing the good even if one of them is a fraud, then you're being selfish.

I alluded to this with the sentence:

That makes sense if you care about how many lives you save, but not if you care about how many people die.

comment by ThrustVectoring · 2014-03-14T09:00:58.477Z · LW(p) · GW(p)

There are timeless decision theory and coordination-without-communication issues that make diversifying your charitable contributions worthwhile.

In short, you're not just allocating your money when you make a contribution, but you're also choosing which strategy to use for everyone who's thinking sufficiently like you are. If the optimal overall distribution is a mix of funding different charities (say, because any specific charity has only so much low-hanging fruit that it can access), then the optimal personal donation can be mixed.

You can model this by a function that maps your charitable giving to society's charitable giving after you make your choice, but it's not at all clear what this function should look like. It's not simply tacking on your contribution, since your choice isn't made in a vacuum.

Replies from: Squark
comment by Squark · 2014-03-15T11:32:30.597Z · LW(p) · GW(p)

This is already addressed in the post (a late addition maybe?)

Replies from: ThrustVectoring
comment by ThrustVectoring · 2014-03-15T13:49:56.398Z · LW(p) · GW(p)

Yeah, it wasn't there when I posted the above. The "donate to the top charity on GiveWell" plan is a very good example of what I was talking about.

Replies from: Squark
comment by Squark · 2014-03-15T14:42:22.843Z · LW(p) · GW(p)

This plan can work if GiveWell adjust their top charity as a function of incoming donations sufficiently fast. For example, if GiveWell have precomputed the marginal utility per dollar of each charity as a function of its budget, and they have access to a continuously updated budget figure for each charity, they can create an automatically updated "top charities" page.
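A sketch of that mechanism, with invented marginal-utility curves and budgets (this is not a description of how GiveWell actually operates):

```python
def marginal_utility_per_dollar(charity, budget):
    # Invented diminishing-returns curves, one per charity; in the scheme
    # described above these would be precomputed by the evaluator.
    curves = {
        "Charity A": lambda b: 120.0 / (b + 1.0),
        "Charity B": lambda b: 100.0 / (b + 1.0),
    }
    return curves[charity](budget)

def current_top_charity(live_budgets):
    # Re-rank whenever the continuously updated budget figures change.
    return max(live_budgets,
               key=lambda c: marginal_utility_per_dollar(c, live_budgets[c]))

print(current_top_charity({"Charity A": 10_000_000.0, "Charity B": 5_000_000.0}))
# Prints "Charity B" here; as donations stream in, the budgets change and the
# ranking can flip, which is what would let "donate to the current top charity"
# approximate the optimal overall allocation without any single donor splitting.
```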

comment by Squark · 2014-03-23T21:43:23.222Z · LW(p) · GW(p)

I think that if the "a few other notes" section and the comments on moral parliaments are integrated more cleanly into the post, it will well deserve to be in main. Don't know why it only got +2.

comment by Bobertron · 2014-03-14T13:28:41.892Z · LW(p) · GW(p)

Most people set aside an amount of money they spend on charity, and an amount they spend on their own enjoyment. It seems to me that whatever reasoning is behind splitting money between charity and yourself can also support splitting money between multiple charities.

Replies from: DanielLC
comment by DanielLC · 2014-03-14T18:13:06.095Z · LW(p) · GW(p)

There is a reason that I probably should have made more clear in the article. I'll go back and fix it.

The reasoning assumes that your donation is small compared to the size of the charity. For example, donating $1,000 a year to a charity that spends $10,000,000 a year.

Keeping money for yourself can be thought of as a charity. Even if you're partially selfish and you value yourself as a thousand strangers, the basic reasoning still works the same. The reason you keep some for yourself is that it's a small charity. The amount you donate makes up 100% of its budget. As a result, it cannot be approximated as a linear function. A log function seems to work better.

I should add that there is still something here that's often overlooked. If you're spending money on yourself because you value your happiness more than others', the proper way to donate is to work out how much money you have to have before the marginal benefit to your happiness is less than the amount of happiness that would be created by donating to others, and to donate everything after that.

There are other reasons to keep money for yourself. Keeping yourself happy can improve your ability to work and by extension make money. The thought of having more money can be incentive to work. Nonetheless, I don't think you should be donating anywhere near a fixed fraction of your income. I mean, it's not going to hurt much if you decide to only donate 90% no matter how rich you get, but if you don't feel like you can spare more than 10% now, and you become as rich as Bill Gates, you shouldn't be spending 90% of your money on yourself.

Replies from: Bobertron
comment by Bobertron · 2014-03-14T21:56:15.898Z · LW(p) · GW(p)

Keeping money for yourself can be thought of as a [small] charity

Oh, interesting. I assumed the reason I keep anything beyond the bare minimum to myself is that I'm irrationally keeping my own happiness and the well-being of strangers as two separate, incomparable things. I probably prefer to see myself as irrational compared to seeing myself as selfish.

The concept I was thinking of (but didn't quite remember) when I wrote the comment was Purchase Fuzzies and Utilons Separately.