Debate between 80,000 Hours and a socialist
post by jefftk (jkaufman) · 2012-06-07T13:30:01.258Z · LW · GW · Legacy · 71 comments
The current issue of the Oxford Left Review has a debate between socialist Pete Mills and two 80,000 Hours people, Ben Todd and Sebastian Farquhar: The Ethical Careers Debate, pp. 4-9. I'm interested in it because I want to understand why people object to the ideas of 80,000 Hours. A paraphrase:
- Todd and Farquhar
- Choose your career to most improve the world. Focus on the consequences of your decisions.
- Mills
- 80,000 Hours says you must take a high-paying but world-destroying career so you can give more money away. Then "your interests are aligned with the interests of capital" and you can't use political means to improve the world, because that endangers your career.
- Todd and Farquhar
- Professional philanthropy is one option, but so are research and advocacy. Even if you go the high-paying career route, you could be a doctor. Even if you take a people-harming but well-paying job, you're just replacing someone else who would do it instead. Engels was a professional philanthropist, funding Marx's research.
- Mills
- "80k makes much of replaceability: 'the job will exist whatever you do.' This is stronger than the claim that someone else will become a banker; rather, it states that there will always be bankers, that there will always be exploitation." Engles took on too much by trying to be both a professional philanthropist and an activist which drove him to depression and illness.
- Todd and Farquhar
- Campaigning might be better than professional philanthropy, though you should consider whether you do better to get a well-paying job and fund multiple people to campaign. Replaceability means that a given job will exist whether you take it or not, but "there might be some things you could do that would cause the job to cease to exist; for instance, by campaigning against banking". "Even if you believe capitalism is one of the world's greatest problems, you shouldn't make the seductive inference that you should devote your energies to fighting it. Rather, you should work on the cause that enables you to make the biggest difference. There may be other very big problems which are more tractable."
- Mills
- "The language of probability will always fail to capture the possibility of system change. What was the expected value of the civil rights movement, or the campaign for universal suffrage, or anticolonial struggles for independence? As we have seen most recently with the Arab Spring, every revolution is impossible, until it is inevitable." I don't like that 80,000 hours uses calculations in their attempt to estimate the good you could do through various potential careers. Stop focusing on the individual when the system is the problem. [Other stuff that doesn't make sense to me.]
As a socialist, Mills really doesn't like the argument that the best way to help the world's poor is probably to work in heavily capitalist industries. He seems to be avoiding engaging with Todd and Farquhar's arguments, especially replaceability. He also really doesn't like looking at things in terms of numbers, I think because numbers suggest certainty. When I calculate that in 50 years of giving away $40K a year you save 1000 lives at $2K each, that's not saying the number is exactly 1000. It's saying 1000 is my best guess, and unless I can come up with a better guess it's the estimate I should use when choosing between this career path and other ones. He also doesn't seem to understand prediction and probability: "every revolution is impossible, until it is inevitable" may be how it feels for those living under an oppressive regime, but it's not our best probability estimate. [1]
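A minimal sketch of that arithmetic in Python (the dollar figures are the rough guesses above, not measured values):

    # Best-guess estimate, not a precise claim.
    years = 50
    donated_per_year = 40_000   # dollars per year
    cost_per_life = 2_000       # dollars, rough cost-per-life for a top charity

    print(years * donated_per_year / cost_per_life)   # 1000.0 lives: the point estimate

    # If the true cost per life were anywhere from $1K to $5K, the total would be
    # anywhere from 2000 down to 400 lives; 1000 is just the best guess.
    for cost in (1_000, 2_000, 5_000):
        print(cost, years * donated_per_year / cost)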
In a previous discussion, a friend was also misled by calculations. When I said "one can avert infant deaths for about $500 each" their response was "What do they do with the 500 dollars? That doesn't seem to make sense. Do they give the infant a $500 anti-death pill? How do you know it really takes a constant stream of $500 for each infant?" Have other people run into this? Bad calculations also tend to be distributed widely, with people saying things like "one pint of blood can save up to three lives" when the expected marginal lives saved is actually tiny. Maybe we should focus less on estimates of effectiveness in smart-giving advocacy? Is there a way to show the huge difference in effect between the best charities and most charities without using them?
Maybe I should have far more of these discussions, enough that I can collect statistics on which arguments and examples work and which don't.
(I also posted this on my blog)
[1] Which is not to say you can't have big jumps in probability estimates. I could put the chance of revolution somewhere around 5% based on historical data, but then hear news that one has just started and sounds really promising, which bumps my estimate up to 70%. But expected value calculations for jobs can work with numbers like these; it's just "impossible" and "inevitable" that break estimates.
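A small illustration of the footnote's point (a sketch; the 44:1 likelihood ratio is an invented number chosen to move 5% to roughly 70%):

    def update(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
        if prior in (0.0, 1.0):
            return prior  # certainty ignores all evidence
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    print(update(0.05, 44))  # ~0.70: a big jump on promising news, still usable in EV math
    print(update(0.0, 44))   # 0.0: "impossible" can never become "inevitable" via evidence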
71 comments
Comments sorted by top scores.
comment by pragmatist · 2012-06-07T23:27:40.438Z · LW(p) · GW(p)
Here's an attempted reconstruction of Mills' argument. I'm not endorsing this argument (although there are parts of it with which I sympathize), but I think it is a lot better than the case for Mills as you present it in your post:
If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would encourage significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.
80k strongly encourages professional philanthropism over political activism, based on an individualist analysis. Any individual's chance of making a difference as an activist is small, much smaller than his chance of making a difference as a professional philanthropist. Directed at individuals, this might be sound ethical advice. But the message has pernicious consequences when directed at the aggregate, as 80k intends.
It is possible for political activism to move society towards a fundamental systemic change that would massively reduce global injustice and suffering. However, this requires a cadre of dedicated activists. Replaceability does not hold for political activism; if one morally serious and engaged activist is lured away from activism, it depletes the cadre. Now any single activist leaving (or not joining) the cadre will not significantly affect the chances of revolution succeeding. But if there is a message in the zeitgeist that discourages political participation, instead encouraging potential revolutionaries to participate in the capitalist system, this can significantly impact the chance of revolutionary success. So 80k's message is dangerous if enough motivated and passionate young people are convinced by their argument.
It's sort of like an n-person prisoner's dilemma, where each individual's (ethically) dominant strategy is to defect (conform with the capitalist system and be a philanthropist), but the Nash equilibrium is not the Pareto optimum. This kind of analysis is not uncommon in the Marxist literature. Analytic Marxists (like Jon Elster) interpret class consciousness as a stage of development at which individuals regard their strategy in a game as representative of the strategy of everyone in their socio-economic class. This changes the game so that certain strategies which would otherwise be individually attractive but which lead to unfortunate consequences if adopted in the aggregate are rendered individually unattractive. [It's been a while since I've read this stuff, so I may be misremembering, but this is what I recall.]
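A toy version of that n-person game (a sketch with invented payoff numbers, not anything from the Marxist literature):

    N = 100   # people choosing between activism (cooperate) and banking (defect)
    B = 2.0   # public benefit each activist generates, shared equally by all N
    C = 1.0   # private cost of choosing activism over banking

    def payoff(cooperate, others_cooperating):
        k = others_cooperating + (1 if cooperate else 0)
        return B * k / N - (C if cooperate else 0)

    # Defecting is individually dominant: cooperating gains you only B/N but costs C.
    print(payoff(True, 50), payoff(False, 50))    # 0.02 vs 1.0
    # Yet everyone cooperating beats everyone defecting, so the Nash equilibrium
    # (all defect) is not the Pareto optimum:
    print(payoff(True, N - 1), payoff(False, 0))  # 1.0 vs 0.0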
↑ comment by bryjnar · 2012-06-08T10:16:11.975Z · LW(p) · GW(p)
Responding to your reconstruction: I think 80k are pretty clear about the fact that their advice is only good on the margin. If they get to a position where they can influence a significant fraction of the workers in some sector, then I expect their advice would change.
↑ comment by jefftk (jkaufman) · 2012-06-08T13:25:33.438Z · LW(p) · GW(p)
Thank you for writing this. I think I understand Mills' view better now.
↑ comment by CronoDAS · 2012-06-08T18:46:55.103Z · LW(p) · GW(p)
If a friend asked me whether she should vote in the upcoming Presidential election, I would advise her not to. It would be an inconvenience, and the chance of her vote making a difference to the outcome in my state is minuscule. From a consequentialist point of view, there is a good argument that it would be (mildly) unethical for her to vote, given the non-negligible cost and the negligible benefit. So if I were her personal ethical adviser, I would advise her not to vote. This analysis applies not just to my friend, but to most people in my state. So I might conclude that I would encourage significant good if I launched a large-scale state-wide media blitz discouraging voter turn-out. But this would be a bad idea! What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.
The fewer people that vote, the more influential each vote is. Everyone else, stay home on Election Day! ;)
↑ comment by Eneasz · 2012-06-08T17:58:29.599Z · LW(p) · GW(p)
What is sound ethical advice directed at an individual is irresponsible when directed at the aggregate.
Didn't you just re-state the prisoner's dilemma? This is the first fundamental principle of human morality. So when you say:
So if I were her personal ethical adviser, I would advise her not to vote.
I can only assume that you are an astoundingly poor ethical adviser. That is not ethics, it is simple self-interest. There is a difference.
It reminds me of people who two-box and keep insisting that two-boxing is the optimal, rational choice. If two-boxing is ideal, why don't you have a million dollars? Or, alternatively, if adopting your advice is ethical, why do you live in such a fucked-up society? Rationalists should win. It's not the ethical choice if choosing it results in tons of overall disutility.
ETA: Overall I agree with your comment; it's well written and I upvoted it. I just object to the losing choice being presented as the right one.
↑ comment by jefftk (jkaufman) · 2012-06-08T19:49:43.340Z · LW(p) · GW(p)
I can advise someone against voting now even if I would advise them otherwise once fewer people were doing it.
Consider a travel advisor. They suggest you visit remote location X because the people there like foreigners but it's not too touristy. To one person, this is good advice. To enough people, it is bad advice, because once they get there they will find that it is actually quite touristy.
The reason that "sound ethical advice directed at an individual is irresponsible when directed at the aggregate" has some truth to it is that it's very hard to carefully explain the complexity of how in the current circumstance something (not voting, professional philanthropy) is the right choice for one more person to do but that if a bunch more people do it then other choices do better.
↑ comment by Viliam_Bur · 2012-06-08T19:07:10.204Z · LW(p) · GW(p)
Didn't you just re-state the prisoner's dilemma?
The prisoner's dilemma for N players is more complex than for 2 players.
For the iterated 2-player dilemma, you cooperate when the other player cooperates, and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.
When you have a 100,000,000-player prisoner's dilemma, where 60,000,000 players defect and 40,000,000 cooperate, what exactly are you supposed to do? To make it even more difficult, cooperation has non-zero costs (you have to do some research on political candidates), and it's not even obvious whether the expected payoff exceeds them.
↑ comment by APMason · 2012-06-08T23:52:00.097Z · LW(p) · GW(p)
For the iterated 2-player dilemma, you cooperate when the other player cooperates, and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.
Actually, you only cooperate if the other player would defect if you didn't cooperate. If they cooperate no matter what, defect.
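A quick illustration of APMason's point, using the standard prisoner's dilemma payoffs (the specific numbers are my choice):

    T, R, P, S = 5, 3, 1, 0   # temptation, mutual reward, mutual punishment, sucker

    def score(mine, theirs):
        table = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
        return sum(table[m, t] for m, t in zip(mine, theirs))

    rounds = 10
    # Against an unconditional cooperator, defection is the best response:
    print(score("D" * rounds, "C" * rounds))  # 50
    print(score("C" * rounds, "C" * rounds))  # 30
    # Against tit-for-tat, always defecting earns the temptation payoff only once:
    tft_reply = "C" + "D" * (rounds - 1)      # tit-for-tat's moves against a defector
    print(score("D" * rounds, tft_reply))     # 14, worse than 30 from mutual cooperation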
comment by Oligopsony · 2012-06-07T16:48:14.766Z · LW(p) · GW(p)
With respect to why some viscerally reject the idea, I think many see charity as a sort of morally repugnant paternalism that demeans its supposed beneficiaries. (I can sympathize with this, although it seems like a rather less pressing consideration than famine and plague.)
You might actually be able to cut ideologies up - or at least the instinctive attitudes that tend to precede them - according to how comfortable they are with charity and what they see it as encompassing: liberals think charity is great; socialists find charity uncomfortable and think it would be best if the poor took rather than passively received; libertarians either also find charity uncomfortable but extend that feeling to any system that socialists might hope to establish, or think charity is great but that the social democratic stuff liberals like isn't charity.
It might also be possible to view this unease as stemming from formally representing charity as purchasing status. I give you some money, I feel great, you feel crummy (but eat.) It's a bit like prostitution: one doesn't have to deny that both parties are on net better off from any given transaction to hold that something exploitative is going on. For socialists and some libertarians, a world sustained by charity (whatever that is) is intolerable and people should instead take what is theirs (whatever that is.) Others think charity is great because - to put it, well, very uncharitably - it lets them be the johns. (One of Aristotle's arguments against socialism is that if we owned all things in common, he wouldn't be able to grow in generosity by lending slaves to his friends.)
I would guess that it is much easier for people to recategorize what falls into the "charity" bucket than to flip their valence on the bucket itself.
↑ comment by Viliam_Bur · 2012-06-08T08:35:17.140Z · LW(p) · GW(p)
I think the problem with charity reflects an ethical question: what exactly does it mean that something is "good", and if something is "good" what should be the consequences for our behavior?
The traditional answer is that it is proper to reward doing "good" things socially, but they should not be enforced legally. One will be celebrated as a hero for saving people from a burning house, but one will not be charged with murder for not saving people from a burning house.
On the other hand, doing "bad" things should be punished not only socially but also legally. Stealing things from others is punished not only by losing friends, but also by prison.
What is the source of this asymmetry? Why is "bad" not the opposite of "good", with all the consequences? This is especially important for utilitarians, because if we convert everything to utilons, in the end we have a choice between an action A which creates a world-state with X utilons and an action B which creates a world-state with Y utilons. Knowing that X is greater than Y, should we treat action A as "good", or action B as "bad"?
My guess is that we have some baseline that we consider the standard behavior (minding your own business, neither helping others nor harming them). A "good" action is a change from this baseline towards more utilons; a "bad" action is a change from this baseline towards fewer utilons. Not lowering this baseline is considered more important than raising it. It makes sense to have a long-term Schelling point.
The problem is that if you change this baseline, you have redefined the boundary between "good" and "bad". And people disagree about where exactly this baseline should be. If two groups disagree about the baseline, they have a moral disagreement even if they use the same utility function. They disagree about whether choosing the worse B instead of the better A should be punished.
For example people are socially rewarded for giving money to charity, but they are not punished for not giving to charity, because the baseline is "not giving to charity". On the other hand, people are punished for not paying taxes, because the baseline is "paying taxes". Both concepts mean "giving up personal money to improve the society", but the reactions are different, because the baseline is different.
Giving money to poor people creates some utility, and the question is: where is the baseline? For some people the baseline is "keeping what you have" or "keeping most of what you have, but not all, especially if you have more than your neighbors". For socialists the baseline is "doing the best thing possible", because this makes sense for a utilitarian. I guess, for a socialist, voluntary charity is a textbook example of compartmentalization. ("If you think giving money to poor people is the right thing to do, because it creates utility, why not make it a law for everyone, and create a lot more utility? And why not give as much as possible, to create as much utility as possible?") For a non-socialist, this kind of thinking seems like a huge conjunction fallacy; also, while we value the well-being of others, we usually value our own well-being more, so it makes sense to contribute only to the most urgent causes.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-17T09:18:09.484Z · LW(p) · GW(p)
The traditional answer is that it is proper to reward doing "good" things socially, but they should not be enforced legally. One will be celebrated as a hero for saving people from a burning house, but one will not be charged with murder for not saving people from a burning house.
You're conflating two different questions here:
1. What interval of quantified goodness (utility) should the Law actively promote, by distributing punishments or rewards to agents? What are the least good good deeds the Law should care about, and what are the most good good deeds?
2. Restricting our attention to deeds the Law actively promotes or discourages, how ungood does an act have to be before the Law should discourage it via positive punishment, as opposed to just discouraging it by withholding a reward or by rewarding a somewhat-less-bad alternative action?
You start off speaking as though you're answering the first question -- when should the state be indifferent to supererogation? -- but then you only list punishment (and extremely harsh punishment, at that!) as the mechanism by which Laws can incentivize behavior. This is confusing. Whether the Law should encourage people (e.g., with economic incentives) to save their neighbors from burning houses is quite a different question from whether the Law should punish people who don't save their neighbors, and that in turn is quite a different question from whether such a punishment should be as harsh as that for, say, manslaughter! A $100 fine is also a punishment. (And a $100 reward is also an incentive.)
If two groups disagree about the baseline, they have moral disagreement even if they use the same utility function. They disagree about whether choosing worse B instead of better A should be punished.
I don't agree with this. If two rational and informed people disagree about whether enacting a certain punishment is a good idea, then they don't have the same utility function -- assuming they have utility functions at all.
I think the core problem is that you're conceiving the Law as a utilometer. You input the goodness or badness of an act's consequences. (Or its act-type's foreseeable consequences.) The Law, programmed with a certain baseline, calculates how far those consequences fall below the baseline, and assigns a punishment proportional to the distance below. (If it is at or above the baseline, the punishment is 0.) The Law acts as a sort of karmic justice system, mirroring the world's distribution of utility. (We could have a similar system that rewards things for going above the baseline, but never mind that.)
In contrast, I think just about any consistent consequentialist will want to think of the Law as a non-map tool. The Law isn't a way of measuring an act's badness and outputting a proportional punishment; it's a lever for getting people to behave better and thereby making the world a more fun place to live in. Questions 1 and 2 above are wrong questions, because the ideal set of Laws almost certainly won't consistently respond to acts in proportion to the acts' foreseeable harm. Rather, the ideal set of Laws will respond to acts in whichever way leads to the best outcome. If act A is worse than act B, but people end up overall much better off if we use a harsher punishment against B than against A, then we should use the harsher punishment against B. (Assuming we have to punish both acts at all.)
So no Schelling point is needed. The facts of our psychology should determine how useful it is to rely on punishment vs. reward in different scenarios. It should also determine how useful it is to rely on material rewards vs. social or internal ones in different contexts. Laws are (ideally) a way of making the right thing happen more often, not a way of keeping tabs on exactly how right or wrong individual actions are.
↑ comment by jefftk (jkaufman) · 2012-06-07T17:26:19.418Z · LW(p) · GW(p)
This makes sense to me, but then wouldn't Mills be arguing against the charity component instead of the career component?
↑ comment by Oligopsony · 2012-06-07T17:41:07.590Z · LW(p) · GW(p)
Possibly. Or possibly he's deciding to go after the weaker claim, or is personally too cowardly to accept the lifestyle consequences of full-on consequentialism, or you should accept at face value his arguments that even on consequentialist grounds high-paying finance jobs are likely to destroy as much as they create. I'm mostly speculating based on my experiences among the kommie krowd and what I like to imagine (though don't we all) is a developed sympathetic understanding of other tribes as well. This shouldn't be read as a strong claim, or even really a claim at all, about Mills specifically. (From your summary it sounds like you found yourself confused by Mills' arguments, so either they're hopelessly confused, or you might benefit from giving them another go, or there's simply too much inferential distance at this moment.)
comment by pragmatist · 2012-06-07T14:28:02.888Z · LW(p) · GW(p)
Downvoted for the extremely tendentious paraphrase. I'm generally in favor of more discussion of politics on this site, but I think it's a topic we need to be extra careful about. This is not the way to do it.
Also, it's "Engels", not "Engles".
↑ comment by jefftk (jkaufman) · 2012-06-07T17:38:53.428Z · LW(p) · GW(p)
"Extremely tendentious" is not what I want. The ideas of 80k make a lot of sense to me and a lot of what Mills was arguing did not, but I tried to paraphrase them as accurately as I could, or leave quotes in when I couldn't. [1] Which parts do you think badly represent their sources?
[1] For example, "The language of probability will always fail to capture the possibility of system change. What was the expected value of the civil rights movement, or the campaign for universal suffrage, or anticolonial struggles for independence? As we have seen most recently with the Arab Spring, every revolution is impossible, until it is inevitable." was originally [misunderstands probability], but I tried to be fairer to him and avoid my own biases by using his own words.
↑ comment by pragmatist · 2012-06-07T20:47:04.790Z · LW(p) · GW(p)
I'm sure your intention was to present an unbiased summary. Unfortunately, this is very difficult to do when you strongly identify with one side of a dispute. It also doesn't help that Mills is not a very clear writer. I've noticed that when I read an argument for a conclusion I do not agree with, and the argument doesn't seem to make much sense, my default is to assume it must be a bad argument, and to attribute the lack of sense to the author's confusion rather than my own. On the other hand, when the conclusion is one with which I agree, and especially if it's a conclusion I think is underappreciated or nonobvious, an unconscious principle of charity comes into play. If I can't make sense of an argument, I think I must be missing something and try harder to interpret what the author is saying.
This is probably a reasonably effective heuristic in general. There's only so much time I can spend trying to parse arguments, and in the absence of other information, using the conclusion as a filter to determine how much credibility (and therefore time) I should assign to the source isn't a terrible strategy. When I'm trying to provide a fair paraphrase of someone's argument though, the heuristic needs to be actively suppressed. I need to ignore the signals that the person isn't all that smart or well-informed and engage with the argument under the working assumption that the person is very smart, so that an inability to understand is an indication of a failure on my part. Only if concentrated effort is insufficient to produce an interpretation that I think makes sense do I conclude that the argument is genuinely bad.
If you think a debate is worth reporting on (for purposes other than mockery of one side), then it is worth engaging in this manner. Part of what makes your paraphrase tendentious is that I get the sense you are so convinced that Mills is out of his depth here (which might well be true) that you haven't tried to read his arguments with care to see if there's an important point you might be missing. I've posted my own attempt at a charitable reading of Mills' argument elsewhere in the thread, but I think CuSithBell and Khoth have pointed out important lacunae in your presentation. Just including the points they articulate would make Mills come across as much less of an analytic incompetent than he does in your post.
↑ comment by [deleted] · 2012-06-07T17:58:23.676Z · LW(p) · GW(p)
"I refuse to accept replaceability because it conflicts with my politics" is hardly a fair representation of his point, for a start.
I think his point here is along these lines: although someone else will become a banker if you don't, nobody will become a political activist in your place if you do (and for various reasons it's extremely hard to be both a banker and a socialist activist). And if you're a successful political activist, you increase the chance that society will be reformed so that there aren't a load of bankers.
↑ comment by jefftk (jkaufman) · 2012-06-07T18:17:53.979Z · LW(p) · GW(p)
From the source:
80k makes much of replaceability: “the job will exist whatever you do.” This is stronger than the claim that someone else will become a banker; rather, it states that there will always be bankers, that there will always be exploitation.
Mills doesn't argue against replaceability; he says that he can't accept replaceability because it implies there will always be bankers and exploitation.
↑ comment by [deleted] · 2012-06-07T18:35:39.542Z · LW(p) · GW(p)
His actual quote is saying that replaceability goes away if the whole system can be changed. Your original paraphrase makes it sound like he has an ideological precommitment to the idea that if you don't become a banker, nobody else will.
↑ comment by jefftk (jkaufman) · 2012-06-07T19:04:35.192Z · LW(p) · GW(p)
Ok; I'll replace the paraphrase with the quote.
↑ comment by CuSithBell · 2012-06-07T17:50:05.316Z · LW(p) · GW(p)
Regarding your example, I think what Mills is saying is probably a fair point - or rather, it's probably a gesture towards a fair point, muddied by rhetorical constraints and perhaps misunderstanding of probability. It is very difficult to actually get good numbers to predict things outside of our past experience, and so probability as used by humans to decide policy is likely to have significant biases.
comment by [deleted] · 2012-06-07T14:18:22.662Z · LW(p) · GW(p)
When I calculate that in 50 years of giving away $40K a year you save 1000 lives at $2K each, that's not saying the number is exactly 400. It's saying 1000 is my best guess...
Bold added myself. Should that be 1000?
↑ comment by jefftk (jkaufman) · 2012-06-07T17:24:37.163Z · LW(p) · GW(p)
Fixed. Thanks!
comment by MinibearRex · 2012-06-09T05:35:54.626Z · LW(p) · GW(p)
I hate to go off on a tangent, but:
Bad calculations also tend to be distributed widely, with people saying things like "one pint of blood can save up to three lives" when the expected marginal lives saved is actually tiny.
Just in the past week I was trying to figure out the math behind that statistic. I couldn't find actual studies on the topic that would let me calculate the expected utility of donating blood. Do you happen to know said information?
↑ comment by jefftk (jkaufman) · 2012-06-09T21:08:30.487Z · LW(p) · GW(p)
I bet it's that 1/3 of a pint is pretty much the minimum amount of blood needed for a lifesaving transfusion (The Red Cross says the average transfusion size is three pints).
The expected utility of giving blood is currently very small, at least in the USA, because people are not being refused transfusions due to lack of blood. If that started happening blood drives would expand hugely and you'd know about it.
We do have shortages of other organs, such as kidneys, and with those you have maybe a 50% chance of saving someone's life once you offer to donate one and they find a match for you. If you're able to start a chain of kidney swaps that wouldn't happen otherwise, you may be able to get above one expected life saved per kidney.
↑ comment by jefftk (jkaufman) · 2012-06-17T04:15:06.801Z · LW(p) · GW(p)
Upon further thought, 50% may be too high for kidney donation. I was estimating that you'd only be giving your kidney to someone who would die otherwise (there are many more people who need kidney transplants than there are kidneys available, so the replaceability effect should be absent) and that the transplant had a 50% chance of working. While people who get a donated kidney do live longer than ones who stay on dialysis (and get to stop having to go in for dialysis), they tend to only live an extra 10-15 years.
As the lives of people with major kidney problems are worse than those of people without, maybe count each year as only 75% when quality-weighting? So 50% chance of working × 10-15 years × 75% gives a very rough estimate of 4-6 QALYs per kidney donation. Still much better than blood, but with deworming at around $100/QALY, donating $1K does more good than donating a kidney.
↑ comment by jefftk (jkaufman) · 2012-06-17T04:45:16.677Z · LW(p) · GW(p)
This is wrong: the 10-15 year estimate already takes into account rejections and other transplant failures. So I'm off by 50% and we get 8-12 QALYs.
The data is also from a study on kidneys from cadavers; live donated ones might be better.
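Collecting the thread's arithmetic in one place (a sketch; the 50%, 10-15 years, 75%, and $100/QALY figures are the rough estimates above):

    p_success = 0.5          # the original, mistaken extra discount for transplant failure
    years_gained = (10, 15)  # extra life expectancy vs. staying on dialysis
    quality = 0.75           # rough quality weight for life with kidney disease

    original = [p_success * y * quality for y in years_gained]  # ~3.8-5.6 -> "4-6 QALYs"
    corrected = [y * quality for y in years_gained]             # ~7.5-11.3 -> "8-12 QALYs"
    print(original, corrected)

    # Comparison point: at ~$100/QALY for deworming, a $1K donation buys ~10 QALYs,
    # roughly the same range as the corrected estimate for donating a kidney.
    print(1_000 / 100)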
↑ comment by gwern · 2012-06-17T22:00:51.114Z · LW(p) · GW(p)
Live ones apparently are better; I'd heard this recently, and it seems to be right according to a few pages I checked, although they don't cite specific studies: http://kidney-beans.blogspot.com/2009/08/living-kidney-donation-vs-cadaver.html or http://kidney.niddk.nih.gov/kudiseases/pubs/transplant/
↑ comment by MinibearRex · 2012-06-10T04:55:51.097Z · LW(p) · GW(p)
The expected utility of giving blood is currently very small, at least in the USA, because people are not being refused transfusions due to lack of blood. If that started happening blood drives would expand hugely and you'd know about it.
I have heard statistics (from sources that aren't the Red Cross) that the supply of blood in the US is nearly always quite low. Running out rarely happens, although I did hear that a lack of adequate blood reserves caused a problem on 9/11. My impression is that the Red Cross typically has enough reserves for a few average days, but if something major happens, that supply can get used up pretty quickly.
↑ comment by TimS · 2012-06-10T00:31:39.648Z · LW(p) · GW(p)
Somewhere I heard that a single donation unit of blood can be made into three distinct products that could be given to three distinct patients. I assume that this possibility is the source of the statistic. Wikipedia is suggestive but not definitive.
Even if that's right, it's not a very useful statistic, because it ignores the fairly common occurrence that more than one unit of a given product will be needed for a single patient.
comment by Pablo (Pablo_Stafforini) · 2014-06-25T01:48:49.223Z · LW(p) · GW(p)
The current issue of the Oxford Left Review has a debate between socialist Pete Mills and two 80,000 hours people, Ben Todd and Sebastian Farquhar: The Ethical Careers Debate, p4-9
Link to the article (the one in the post is dead)
comment by DanielVarga · 2012-06-07T19:54:52.356Z · LW(p) · GW(p)
I find the replaceability assumption very problematic, too. If this wasn't LW, I would simply state the obvious and say that all sorts of evil stuff can be justified by replaceability. But this is LW, so I'll say that replaceability is not true for reflective decision theories.
↑ comment by Douglas_Knight · 2012-06-07T20:36:43.761Z · LW(p) · GW(p)
The other potential bankers aren't using reflective decision theories. It's really that simple.
Added: Actually, it's even simpler: the other potential bankers have different goals. But the point about whether other people are using reflective decision theories is sometimes relevant.
↑ comment by Zack_M_Davis · 2012-06-07T22:18:52.452Z · LW(p) · GW(p)
If this wasn't LW, I would simply state the obvious and say that all sorts of evil stuff can be justified by replaceability.
I'm not sure how to parse this. One possible interpretation is, "If the replaceability thesis were true, then it would follow that people should do evil things. But since people shouldn't do evil things, it follows by modus tollens that the replaceability thesis is false." This kind of argument could be correct depending on how the details were fleshed out, but I certainly would not call it obvious.
Another interpretation is, "Unscrupulous clever arguers could use the replaceability thesis to persuade people to do evil things." This is more obvious, but it doesn't seem very relevant; sufficiently bad reasoning can be used to argue for any conclusion from any set of premises.
↑ comment by DanielVarga · 2012-06-07T22:51:29.686Z · LW(p) · GW(p)
I wasn't trying to say anything deep, really. If the replaceability argument works for investment bankers, then it works for henchmen of an oppressive regime, too. In my country, many people actually used the replaceability argument, without the fancy name. And in hindsight people in my country agree that they shouldn't have used the argument. So yeah, maybe it's the modus tollens. But maybe it's simpler than that: maybe these people misjudged how completely replaceable they were. In the eighties, more and more people dared to say no to the Hungarian secret service, with fewer and fewer consequences.
By the way, the apparently yet-unpublished part 2 of jkaufman's link will deal with this issue.
↑ comment by bryjnar · 2012-06-08T10:19:53.737Z · LW(p) · GW(p)
Well, it kind of does apply to henchmen of an oppressive regime. The classic example is Oskar Schindler: he ran munitions factories for the Nazis in order to help him smuggle Jews out of Germany (and he ran them at under capacity). Schindler is generally regarded as a hero, but that seems to be trading on precisely something like the replaceability argument. If he hadn't done the job, someone else would have, and not only would they not have saved anybody, they would have run the factories better.
Flip the argument around for "being a banker" (or your doubtful career of choice) and it's hard to see what changes.
↑ comment by DanielVarga · 2012-06-08T14:43:11.783Z · LW(p) · GW(p)
Sure, I never meant to imply that the issue is clear-cut. Many of the people revealed to be informers argued that they only reported the most innocent things about the people they were tasked to spy on. Tens of thousands of books are written about such moral dilemmas. When people decide that Schindler is a hero, they seem to use a litmus test that is similar but definitely not identical to replaceability. They ask: Did he do more than what can reasonably be expected from him under his circumstances? I don't think focusing on the replaceability part of this very complex question helps clear things up.
↑ comment by bryjnar · 2012-06-09T08:58:52.390Z · LW(p) · GW(p)
Okay, that's pretty fair. I can only really claim that a replaceability argument could be used to argue that Schindler was a hero; there may be other ways of thinking about it, and those may be the ways people actually do think!
That said, I've found that example does sometimes make people reconsider their opinion of replaceability arguments, so it certainly appeals to something in the folk morality.
↑ comment by Paul Crowley (ciphergoth) · 2012-06-09T08:45:25.872Z · LW(p) · GW(p)
Replaceability is also not total. If you decide to be a henchman, on average you slightly increase henchman quality and reduce henchman salary. So refusing to be a henchman does cost the evil regime something.
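One way to make that concrete (a toy linear supply-and-demand model; all numbers invented):

    def equilibrium(supply_intercept, supply_slope=1.0,
                    demand_intercept=100.0, demand_slope=1.0):
        # supply: wage = supply_intercept + supply_slope * q
        # demand: wage = demand_intercept - demand_slope * q
        q = (demand_intercept - supply_intercept) / (supply_slope + demand_slope)
        return q, demand_intercept - demand_slope * q

    print(equilibrium(10.0))  # baseline henchman market: quantity 45.0 at wage 55.0
    print(equilibrium(10.5))  # one refusal shifts supply up a little: 44.75 at 55.25
    # The regime gets slightly fewer henchmen at a slightly higher wage, so each
    # refusal imposes a real (if small) cost on it.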
↑ comment by jefftk (jkaufman) · 2012-06-08T16:53:56.243Z · LW(p) · GW(p)
the apparently yet-unpublished part 2 of jkaufman's link will deal with this issue
I'm confused.
↑ comment by DanielVarga · 2012-06-08T18:29:18.122Z · LW(p) · GW(p)
The essay you linked to acknowledges the existence of the coordination problems I am talking about, and promises a Part 2 where it deals with them. This Part 2 is not yet published.
↑ comment by jefftk (jkaufman) · 2012-06-08T19:41:24.929Z · LW(p) · GW(p)
I see. You meant the link in this post, not one of the links in the top level post (which was also me).
↑ comment by amcknight · 2012-06-07T21:22:54.407Z · LW(p) · GW(p)
Can you elaborate for me, please? I don't know what you mean (even though this is LW).
↑ comment by DanielVarga · 2012-06-07T22:18:38.955Z · LW(p) · GW(p)
As Douglas_Knight shows, my comment wasn't really well thought out. However, the idea is that a reflective decision theory agent considers the implications of the fact that whatever her decision is, similar agents will reach a similar decision. This makes them cooperate in Prisoner's Dilemma / Tragedy of the Commons situations where "if all of us behaved so selfishly, we would be in big trouble". The thing is sometimes called superrationality.
↑ comment by jefftk (jkaufman) · 2012-06-07T20:59:53.689Z · LW(p) · GW(p)
A more detailed consequentialist argument for replaceability: The replaceability effect: working in unethical industries part 1.
comment by Shmi (shminux) · 2012-06-07T16:35:03.694Z · LW(p) · GW(p)
This is the first I've heard of 80,000 Hours. Their site gives me an instant negative vibe, and it's not just the abundance of weird pink on it. But I have trouble pinpointing quite what it is.
↑ comment by Richard_Kennaway · 2012-06-07T20:47:56.956Z · LW(p) · GW(p)
I notice that in their list of high-impact careers, not one of them involves actually doing the work that all this charity pays for. The grunt work is beneath them and the audience they're aiming at. A lot of the careers consist of telling other people what to do: managers, policy advisors, grant writers, sitting on funding bodies. The closest any of them get to boots on the ground is scientific research and development.
Now, I can see the argument for this. If your abilities lie in the direction of a lucrative profession, you should do that and give most of the proceeds to charity. The lawyer and the soup kitchen. But take this a step further (as the 80,000ers do themselves). Wouldn't it be even more effective to persuade other people to do this? If you get 10 people to make substantial donations, that's more effective than just doing it yourself. Or in their words, "There are also many opportunities for forming chains of these activities. For instance, you could campaign for more people to become professional philanthropists, who could spend money paying for more campaigners."
But why stop there? The more money people have, the more they can give, so you should concentrate on persuading the seriously wealthy to donate. And to move in their circles, you will have to cultivate a certain degree of prosperity yourself, or you'll never get access. Just an expense of the job, which will pay off with even more money raised for charity.
But then again, governments command vastly more wealth and power than almost any individual, so that is where you could have a truly great impact. Better still, go for the governments of governments, the supra-national organisations. Of course, it will be such a chore to maintain a pied-à-terre in every major capital, charter private jets for your travelling, and dine at the most expensive restaurants with senior politicians and businessmen, but one could probably put up with it.
Now, the higher up this pyramid you go, the smaller it gets, so there will only be room for a few giga-effective careers at the top. But never mind, it's your duty to climb as far up as you can, and if you do replace someone rather than adding yourself, make sure that it's because you can do an even more effective job than they were doing.
And it's all for the sake of the poor, and the more successful you are, the less you'll ever see of them.
Yes, I can see the argument.
Onion, if you want to write a satirical piece on this theme, go right ahead.
BTW, a couple of the names on the list of authors of their blog are LessWrong regulars, although I'm not sure Eliezer should be listed there: the only post attributed to him is actually a repost by someone else of something he posted to LW.
↑ comment by bryjnar · 2012-06-08T10:26:41.928Z · LW(p) · GW(p)
You know, this is sort of WAD (working as designed). It's much easier to get people to do good if it happens to nearly coincide with something they wanted to do anyway. If you have someone who was already planning to become a banker, then it's much easier to persuade them to keep doing that, but give away money, than it is to persuade them to become a penniless activist. As it happens, this may be hugely effective, and so a massive win from a consequentialist point of view.
I like to think of it more on the positive front: as a white, male, privileged Westerner with a high-status education you basically have economic superpowers, so you can quite easily do a lot of good by doing pretty much what you were going to do anyway. Obviously, most of this is due to your circumstances, but it's still a great opportunity.
↑ comment by Vladimir_Nesov · 2012-06-08T00:31:58.485Z · LW(p) · GW(p)
The amusement or absurdity value should be irrelevant in evaluating such decisions. I feel really angry when I consider remarks like these (not angry at someone or some action in particular, but more vaguely about the human status quo). The kind of spectacle where tickets are purchased in dead child currency.
↑ comment by Richard_Kennaway · 2012-06-08T06:14:27.614Z · LW(p) · GW(p)
I can see that my response to 80,000 Hours could be just as self-serving as they can be seen as being, but see my further response to satt.
↑ comment by Raemon · 2012-06-07T23:47:21.138Z · LW(p) · GW(p)
This has been my concern. I'm not involved with 80k but I travel in Effective Altruism circles, which extend beyond 80k and include most of their memes.
What is incredibly frustrating is that none of this actually proves anything. It is still true that a wealthy banker is probably able to do more good than a single aid worker. Clearly we DO need to make sure there's an object-level impact somewhere. But for the near future, unless their memes overtake the bulk of the philanthropy world, it is likely that the methods 80k advocates are sound.
Still, the whole thing smells really off to me, and your post sums up exactly why. It is awfully convenient for a movement consisting mostly of upper-middle-class college grads that their "effective" tools for goodness award them the status and wealth that they'd otherwise feel entitled to.
↑ comment by drethelin · 2012-06-08T07:31:54.830Z · LW(p) · GW(p)
Alternately, humans are badly made and care more about status and wealth than about the poor and sick. They won't listen if you tell them to sacrifice themselves, but they might listen if you tell them to gain status and also help the poor at the same time. The mark of a strong system, in my mind, is one that functions despite the perverse desires of the participants. If 80k can harness people's desire for money and status to their desire to do charity, I think it can go really far.
↑ comment by RomeoStevens · 2012-06-08T07:12:34.355Z · LW(p) · GW(p)
This seems like a feature, since it means it's attractive to a MUCH larger subset of the populace than self-sacrifice is.
↑ comment by amcknight · 2012-06-07T21:20:46.389Z · LW(p) · GW(p)
Is this actually meant to be an argument against 80k hours' style of effective altruism or are you just joking around?
↑ comment by Richard_Kennaway · 2012-06-07T22:15:48.259Z · LW(p) · GW(p)
I am not joking around, but neither am I arguing that they should shut up shop and go save the world some other way. I have not concluded a view on whether what they are doing is worthwhile, and my posting is simply to voice a concern. If you think it's an unfounded one, go ahead and say why.
I don't know. But I think there's a valid point here, and an Onion piece begging to be written about a bunch of Oxford philosophy students urging people to save the world by earning pots of money as bankers (a target they've painted on themselves with their own press release), and I can't help imagining what Mencius Moldbug would have to say about them.
↑ comment by juliawise · 2012-06-08T00:27:56.038Z · LW(p) · GW(p)
I don't think I understand your concern. It's that people who go into high-earning careers will lose touch with "real people" (although the people these folks want to help are usually future people or in the developing world, and thus people they would never have met anyway)?
↑ comment by satt · 2012-06-08T03:59:48.288Z · LW(p) · GW(p)
I'm not Richard_Kennaway, but I read him as basically applying a be-wary-of-convenient-clever-arguments-for-doing-something-you'd-probably-want-to-do-anyway heuristic. I see where he's coming from. The 80,000 Hours argument for getting riches & status instead of becoming (say) an aid worker or a doctor does smell suspiciously self-serving, at least to my nose. However — and it's a big "However" — their argument does appear to be correct, so I try to ignore the smell.
↑ comment by Richard_Kennaway · 2012-06-08T05:55:00.097Z · LW(p) · GW(p)
Yes, that's exactly what was in my mind, and Raemon expressed it also.
their argument does appear to be correct, so I try to ignore the smell.
I don't think that's the right way to resolve the conflict. One person's taking beliefs seriously is another's toxic decompartmentalisation, after all. Why should the smell yield to the argument instead of vice versa? Especially when you notice that the part telling you that is the part making the argument, and the cognitive nose is inarticulate. No, what is needed is to resolve the contradiction, not to suppress one side of it and pretend you are no longer confused.
And meanwhile, go and be a banker, or whatever the right answer presently seems to be, because there is no such thing as inaction while you resolve the conundrum: to do nothing is itself to do something. If you later conclude it's a false path, at least the money will give you the flexibility to switch to whatever you decide you should have been doing instead.
↑ comment by satt · 2012-06-09T04:58:21.884Z · LW(p) · GW(p)
Why should the smell yield to the argument instead of vice versa?
That's the $64,000(-a-year) question, and I don't have an answer I'm happy with for the general case. Here's roughly what I think for this specific situation.
As you say, my nose can't describe what it smells. It might be a genuine problem or a false alarm. To find out which, I have to consciously poke around for an overlooked counterargument or a weak spot in the original argument, something to corroborate the bad smell. I did that here and couldn't find a killer gap in 80,000 Hours' arguments, nor a strong counterargument for why I should disregard them.
For simplicity, consider a stripped down version of the decision problem where I have only two options: becoming a rich banker vs. getting a normal job paying the median salary.
Suppose I disapprove of the banker option for whatever reason. If I hold my nose and become a banker anyway, it seems very likely to me that (1) I would nonetheless prefer that to having someone with different values in my place instead, and (2) that even if taking a banking job worked against my values or goals, I could compensate for that by hiring other people to further them.
I had thought that reference class forecasting might warn against the get-rich-and-give strategy: people with more income give a smaller percentage of it to charity, so by entering banking one might opt into a less generous reference class. But quick Googling reveals that people with higher incomes give more in absolute terms, at least in the UK, the US, and Canada.
Putting aside the chance of my being wrong, what about the disutility of being wrong? Well, I agree with your final paragraph, so that doesn't seem to weigh heavily against the 80,000 Hours point of view either.
All in all my nose seems to have overreacted on this one. Maybe it raised the alarm because 80,000 Hours' conclusion failed a quick universalizability test, namely "would this still be the best choice if everyone else in the same boat made it too?" But that test itself seems to fail here.
I doubt my thoughts on this are bulletproof; there's a good chance I'm missing part of the puzzle or just plain wrong on some fundamental issue. Maybe I've built a convenient, clever meta-argument for arguing myself into something I'd probably want to do anyway! Still, ultimately I can devote only so much thought (and self-distrust) to this. I have to make a judgement call, and this is the best one I can make, whatever the risk of motivated cognition.
↑ comment by Richard_Kennaway · 2012-06-08T06:21:07.215Z · LW(p) · GW(p)
That's a part of it. One reason for the lawyer to now and then put in a shift at the soup kitchen is to keep his feet on the ground and observe the actual effect of what he's donating to. Some managers put in a shift on the shop floor now and then for the same reason. Maybe 80,000ers should consider spending their vacations out in the field?
↑ comment by juliawise · 2012-06-10T20:56:39.819Z · LW(p) · GW(p)
Maybe 80,000ers should consider spending their vacations out in the field?
I agree. I'm writing from Ecuador right now. Seeing serious poverty first-hand does hit me in a different place than reading about it. But I still think donating to efficient charities is the best way to help these people - not me volunteering or moving here.
I think most of the 80K Hours founders are/were philosophy grad students. So they weren't especially likely to wind up as either on-the-ground nonprofit workers or high-flying financiers. And I gather many of them had an ugh field around money, so trying to earn more of it (and being read by other people as someone who loves money) is more of a sacrifice than it might seem.
↑ comment by jefftk (jkaufman) · 2012-06-08T18:14:04.456Z · LW(p) · GW(p)
Some of the links you make aren't sound (lots of people are already trying to get the seriously wealthy to donate, so that might not be where you can have the greatest impact, and there's no good reason to think you would be more effective than the people who currently run the IMF and World Bank), but the overall idea seems good to me: look for where you can most improve the world and go there.
↑ comment by jefftk (jkaufman) · 2012-06-07T17:25:02.494Z · LW(p) · GW(p)
If you do pinpoint it, I would be curious.
↑ comment by Viliam_Bur · 2012-06-08T08:52:30.555Z · LW(p) · GW(p)
Assuming that we run on corrupted hardware, how much should we trust explanations like "I should get a lot of money and power, because I am a good person, so this will help the whole society"?
Also, if I tried to convert you to a money-making cult, such as Amway, I would start by describing the good things you could do after you become super-rich. Not because we are at LW where we signal that we care about saving the world, but because this is the standard recruitment tactic.
(This does not prove that "80,000 hours" is a bad thing. I just explain how it pattern-matches and creates a negative vibe.)
EDIT: Also, be wary of convenient clever arguments for doing something you'd probably want to do anyway.
comment by katydee · 2012-06-07T22:10:34.442Z · LW(p) · GW(p)
Linking to the article seems like it would be significantly better than this sort of paraphrase. I'm not sure whether you can get authorization to do so, but I would find that a lot more useful, especially for controversial political issues like this.
↑ comment by jefftk (jkaufman) · 2012-06-07T23:52:11.601Z · LW(p) · GW(p)
It's linked in the original post: http://oxfordleftreview.files.wordpress.com/2012/06/olr7_web.pdf
The summary was an attempt to get more people to read and consider the ideas by making it all a lot shorter. Mostly seems to have been a bad idea, primarily because I didn't do a good job of keeping my biases out of it.