Earning to Give vs. Altruistic Career Choice Revisited
post by JonahS (JonahSinick) · 2013-06-02T02:55:23.414Z · LW · GW
A commonly voiced sentiment in the effective altruist community is that the best way to do the most good is generally to make as much money as possible, with a view toward donating to the most cost-effective charities. This is often referred to as “earning to give.” In the article To save the world, don’t get a job at a charity; go work on Wall Street, William MacAskill wrote:
Top undergraduates who want to “make a difference” are encouraged to forgo the allure of Wall Street and work in the charity sector ... while researching ethical career choice, I concluded that it’s in fact better to earn a lot of money and donate a good chunk of it to the most cost-effective charities, a path that I call “earning to give.” ... In general, the charitable sector is people-rich but money-poor. Adding another person to the labor pool just isn’t as valuable as providing more money, so that more workers can be hired.
In private correspondence, MacAskill clarified that he wasn’t arguing that “earning to give” is the best way to do good, only that it’s often better than working at a given nonprofit. In a recent comment MacAskill wrote
I think there's too much emphasis on “earning to give” as the *best* option rather than as the *baseline* option
and raised a number of counter-considerations against “earning to give.” Despite this, the idea that “earning to give” is optimal has caught on in the effective altruist community, and so it’s important to discuss it.
Over the past three years, I myself have shifted from the position that “earning to give” is philanthropically optimal, to the position that it’s generally the case that one can do more good by choosing a career with high direct social value than by choosing a lucrative career with a view toward donating as much as possible.
In this post I’ll outline some arguments in favor of this view.
Responses to MacAskill’s Considerations
In the article To save the world, don’t get a job at a charity; go work on Wall Street, MacAskill gives three considerations in favor of “earning to give.” I respond to these considerations below. What I write should be read as a response to the article, rather than to MacAskill’s views.
Variance in cost-effectiveness of charities
MacAskill wrote:
… charities vary tremendously in the amount of good they do with the money they receive. For example, it costs about $40,000 to train and provide a guide dog for one person, but it costs less than $25 to cure one person of sight-destroying trachoma. For the cost of improving the life of one person with blindness, you can cure 1,000 people of it…it’s unlikely that you can work for only the very best charities. In contrast, if you earn to give, you can donate anywhere, preferably to the most cost-effective charities, and change your donations as often as you like.
GiveWell has spent about five years looking for the best giving opportunities in global health, and its current #1 ranked charity is Against Malaria Foundation (AMF). GiveWell estimates that AMF saves an infant’s life for ~$2,300, not counting other benefits. These other benefits notwithstanding, AMF’s cost per DALY saved is much higher than the cost per DALY implied by the figure cited for curing sight-destroying trachoma.
GiveWell may have missed giving opportunities in global health that are much more cost-effective than AMF is, but given the amount of time, energy and attention that GiveWell spent on its search, one should have a strong prior against the possibility that one can easily find a better giving opportunity in global health. So a plausible estimate of the cost-effectiveness of donating to the best charity that delivers direct global health interventions is much lower than the above quotation suggests.
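To make the comparison concrete, here is a back-of-envelope sketch; the DALY conversion factors are illustrative assumptions of mine, not figures from GiveWell or from the quoted article:

```python
# Back-of-envelope comparison of implied cost per DALY, using the figures quoted
# above plus assumed DALY conversions (the conversion factors are illustrative
# assumptions, not GiveWell or MacAskill figures).

trachoma_cost_per_cure = 25      # quoted: dollars per cure of sight-destroying trachoma
amf_cost_per_life = 2300         # GiveWell's rough estimate: dollars per infant life saved
dalys_per_blindness_averted = 25 # assumption
dalys_per_infant_life = 30       # assumption

trachoma_cost_per_daly = trachoma_cost_per_cure / dalys_per_blindness_averted
amf_cost_per_daly = amf_cost_per_life / dalys_per_infant_life

print(f"Implied trachoma cost per DALY: ~${trachoma_cost_per_daly:.0f}")  # ~$1
print(f"Implied AMF cost per DALY:      ~${amf_cost_per_daly:.0f}")       # ~$77
print(f"Ratio: ~{amf_cost_per_daly / trachoma_cost_per_daly:.0f}x")       # ~77x
```

Under these assumptions, the best giving opportunity GiveWell has actually identified is tens of times less cost-effective than the trachoma figure suggests is attainable.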
Furthermore, the phenomenon of the optimizer’s curse suggests that all charities with a robust case for fairly high cost-effectiveness are closer in cost-effectiveness to AMF than explicit cost-effectiveness calculations indicate. This narrows the variance in cost-effectiveness amongst charities.
So the advantage of being able to choose a charity to support and change at any time is smaller than the above quotation suggests.
Discrepancy in earnings
MacAskill wrote:
Annual salaries in banking or investment start at $80,000 and grow to over $500,000 if you do well. A lifetime salary of over $10 million is typical. Careers in nonprofits start at about $40,000, and don’t typically exceed $100,000, even for executive directors ... By entering finance and donating 50% of your lifetime earnings, you could pay for two nonprofit workers in your place—while still living on double what you would have if you’d chosen that route.
The assumption “if you do well” is a very strong one. Only about 1% of Americans make ~$500k/year. There are some people who have a strong comparative advantage in finance, for whom “earning to give” may be especially compelling. But people who are able to make ~$500k/year in finance without a large comparative advantage in finance have very strong transferable skills. Such people are significantly more capable than the average non-profit worker, and can plausibly have a bigger impact than 2 or 3 such workers by working directly on something with high social value.
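For what it’s worth, the arithmetic behind the quoted claim does roughly check out, conditional on actually reaching those earnings. A minimal sketch under assumed figures (a 40-year career and ~$60K/yr average total cost per nonprofit worker are my assumptions; the $10M in lifetime earnings is the quoted figure):

```python
# Rough check of the quoted "two nonprofit workers" claim, under assumed figures.
career_years = 40                 # assumption
lifetime_earnings = 10_000_000    # quoted "typical" lifetime finance salary
donated = 0.5 * lifetime_earnings # donating 50%, as in the quote
nonprofit_worker_cost = 60_000    # assumption: average annual cost of a nonprofit worker

workers_funded = donated / (nonprofit_worker_cost * career_years)
retained_per_year = (lifetime_earnings - donated) / career_years

print(f"Nonprofit workers funded for a full career: ~{workers_funded:.1f}")  # ~2.1
print(f"Income retained per year: ~${retained_per_year:,.0f}")               # ~$125,000
```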
Replaceability
MacAskill wrote:
…“making a difference” requires doing something that wouldn’t have happened anyway…The competition for not-for-profit jobs is fierce, and if someone else takes the job instead of you, he or she likely won’t be much worse at it than you would have been. So the difference you make by taking the job is only the difference between the good you would do, and the good that the other person would have done.
I would guess that there are some highly cost-effective humanitarian interventions that are sufficiently easy to implement that the implementers are easily replaceable. I could easily imagine that this is the case for vaccination efforts.
But funding opportunities for these interventions can be thought of as “low hanging fruit.” Broad market efficiency suggests that such interventions will be funded. And indeed, GiveWell has found that straightforward immunization efforts are already largely funded, to the point that GiveWell has been unable to find giving opportunities for individual donors in this area.
This suggests that at the margin, very high value humanitarian efforts require highly skilled and highly motivated laborers.
Highly skilled laborers are a relatively small subset of laborers, so there are fewer people available to do these sorts of jobs than other jobs. Doing a hard, non-routine job well requires high motivation. The collection of people who are sufficiently highly motivated to do a hard job with high social value that doesn’t pay well, and who could otherwise be making much more money, largely consists of people who are trying to have a significant positive social impact.
So suppose that you’re a highly skilled laborer deciding whether to “earn to give” or take a job with high social value that requires high skills and motivation. If you don’t take the job with high social value, your counterfactual replacement is likely to be one of the following:
1. Substantially less capable than you on account of having low skills, or low altruistic motivation.
2. A highly skilled person with high motivation, who would be doing something else with high social value if you had taken the job, and who can’t do this because they have to do the job that you would have done.
3. Nonexistent.
So the replaceability consideration carries less weight than it might seem.
Admittedly there’s a counter-consideration — broad market efficiency cuts both ways, and one could imagine that the low hanging fruit in working directly on projects with high social value is also plucked, which would push in favor of “earning to give.” I have a fairly strong intuition that “if you don’t fund it, somebody else will” is more true than “if you don’t do it, somebody else will,” so that this counter-consideration is outweighed. It’s important to note that many projects of high social value are the first of their kind, and that finding somebody else to execute such a project is highly nontrivial. I think it’s also relevant that 114 billionaires have signed the Giving Pledge, committing to giving 50+% of their wealth away in their lifetimes.
In any case, there isn’t a clear-cut, unconditional argument that favors “earning to give”: whether “earning to give” is the best option very much depends on nuanced empirical considerations rather than a general abstract argument.
Other important considerations that favor an altruistic career
There are additional important considerations that favor pursuing a career with high social value over “earning to give”:
Asymmetric implications of the existence of small probability failure modes
In Robustness of Cost-Effectiveness Estimates and Philanthropy, I described how a large collection of small probability failure modes conspires to substantially reduce the expected value of a funding opportunity. The same issue applies to choosing a narrow career goal with a view toward directly having a high positive social impact. But a worker has more capacity than a donor does to learn whether small probability failure modes prevail in practice, and can switch to a different job if he or she finds that such a failure mode prevails.
Here’s an example. Suppose that you go to medical school with a view toward the possibility of performing cleft palate surgeries in the developing world. It’s probably the case that the opportunity isn’t as promising as it seems. But if you try it, then you’ll be able to see how effective the intervention is firsthand. If it’s highly effective, then you can keep doing it. If it’s not highly effective, then you can explore other possibilities, such as
- Starting your own surgery organization.
- Switching to doing a different kind of surgery in the developing world, such as cataract removal.
- Working in a poor community in the developed world (which could have a bigger impact than working in the developing world owing to flow-through effects).
- Working for a biotech company.
- Getting involved in clinical medical research.
- Other things that haven't occurred to me.
By experimenting, one can hope to home in on a job that has both high ostensible cost-effectiveness and a relatively small mass of small probability failure modes.
Altruistic careers extend beyond the nonprofit world
Even on the assumption that “earning to give” is better than working at a nonprofit, it doesn’t follow that “earning to give” optimizes social impact. There are ways to have a positive social impact in the for-profit world, in scientific research, and in the government.
Historical Precedent
For the most part, the people who have had the biggest positive impact on the world haven’t had their impact by “earning to give.”
There are a few possible exceptions, such as Bill Gates and Warren Buffett, whose philanthropic activities could be having a huge impact (though it’s hard to tell from the outside) and could well outstrip the value that they contributed through their labor. But they appear to have an unusually high ratio of wealth to direct positive impact of their work, and so appear to be unrepresentative.
Steve Jobs’ highest net worth was on the order of $10 billion, whereas Bill Gates’ highest net worth was on the order of $100 billion. I don’t think that Bill Gates contributed 10x as much as Steve Jobs to technology, and I don’t think that Jobs could have had a bigger social impact by donating than through his work (which had massive positive flow-through effects). I acknowledge that Jobs is a cherry-picked example, but I think that the general principle still holds.
Mainstream consensus
Few people think that “earning to give” is the best way to make the world a better place. This could be attributable to irrationality or to low altruism, but in my experience there are many people who care about global welfare (or at least about welfare within a specific cause), and many who are highly intelligent. In light of the existence of illusory superiority, one should be wary of holding an implicit view that one knows more about how to make the world a better place than the vast majority of the population.
Steelmanning wealth maximization
It’s worth highlighting some factors that favor choosing a career with a view toward maximizing wealth in some situations:
- Comparative advantage — Some people are unusually good at making money relative to doing other things. Such people may do better to “earn to give” than to try to choose a job that has a direct positive impact (which they’re relatively bad at).
- The market mechanism — In the for-profit world, maximizing wealth is often correlated with maximizing positive social impact, and so can be used as a proxy goal for maximizing positive social impact.
- Connections and personal growth — People with high earnings are generally more capable and more knowledgeable than people in other contexts, and tend to be well connected, so positioning oneself among such people can increase one’s prospects of soaring to greater heights. Jeff Bezos started his career in finance, and later created Amazon, which has had massive positive social impact (both direct, and via flow-through effects).
- Unusual values — If one cares about causes that very few people care about, then it could be difficult to find funding for work on them, so “earning to give” could be necessary. I don’t believe this to be the case, but it’s a consideration that's been raised by others, and so is worth mentioning.
Closing summary
There are many arguments against the claim that “earning to give” is generally the best way to maximize one’s positive social impact, and I believe that one can generally do more good by choosing a job where one does as much good as possible through one’s work. However, for some people in unusual situations, “earning to give” may be the best option.
Note: I formerly worked as a research analyst at GiveWell. All views expressed here are my own.
Acknowledgements: I thank Nick Beckstead, ModusPonies and Will Crouch for helpful feedback on an earlier version of this article.
153 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T17:48:46.090Z · LW(p) · GW(p)
The top considerations that come into play when I advise someone whether to earn-to-give or work directly on x-risk look like this:
1) Does this person have a large comparative advantage at the direct problem domain? Top-rank math talent can probably do better at MIRI than at a hedge fund, since there are many mathematical talents competing to go into hedge funds and no guarantee of a good job, and the talent we need for inventing new basic math does not translate directly into writing the best QT machine learning programs the fastest.
2) Is this person going to be able to stay motivated if they go off on their own to earn-to-give, without staying plugged into the community? Alternatively, if the person's possible advantage is at a task that requires a lot of self-direction, will they be able to stay on track without requiring constant labor to keep them on track, since that kind of independent job is much harder to stick at than a 9-to-5 office job with supervision and feedback and cash bonuses?
Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary. For any particular person wondering how they should help, this implies a strong prior bias toward earning-to-give. There are others competing to have the best advantage for the nonprofit's exact task, and also there are thousands of job opportunities out there that are competing to be the maximally-earning use of your exact talents - best-fits to direct-task-labor vs. earning-to-give should logically be rare, and they are.
The next-largest issue is motivation, and here again there are two sides to the story. The law student who goes in wanting to be an environmentalist (sigh) and comes out of law school accepting the internship with the highest-paying firm is a common anecdote, though now that I come to write it down, I don't particularly know of any gathered data. Earning to give can impose improbability in the form of the likelihood that the person will actually give. Conversely, a lot of the most important work at the most efficient altruistic organizations is work that requires self-direction, which is also demanding of motivation.
I should pause here to remark that if you constrain yourself to 'straightforward' altruistic efforts in which the work done is clearly understandable and repeatable and everyone agrees on how wonderful it is, you will of course be constraining yourself very far away from the most efficient altruism - just like a grant committee that only wants to fund scientific research with a 100% chance of paying off in publications and prestige, or a VC that only wanted to fund companies that were certain to be defensible-appearing decisions, or someone who constrained their investments to assets that had almost no risk of going down. You will end up doing things that are nearly certain never to appear to future historians as a decisive factor in the history of Earth-originating intelligent life; this requires tolerance for not just risk but scary ambiguity. But if you want to work on things that might actually be decisive, you will end up in mostly uncharted territory doing highly self-directed work, and many people cannot do this. Just as many other people cannot sustain altruism without being surrounded by other altruists, but this can possibly be purchased elsewhere via living on the West or East Coast and hanging around with others who are earning-to-give or working directly.
These are the top considerations when someone asks me whether they should work directly or earn to support others working directly - the low prior, whether the exact fit of talent is great enough to overcome that prior, and whether the person can sustain motivation / self-direct.
Replies from: MichaelVassar, John_Maxwell_IV, JonahSinick, JonahSinick↑ comment by MichaelVassar · 2013-05-29T13:18:02.711Z · LW(p) · GW(p)
My main comment on this is that if self-direction is as important as it appears to be, it would seem to me that 'become self directed' really should be everyone's first priority if they can think of any way to do that. My second comment is that it seems to me that if one is self-directed and seeks appropriate mentorship, the expected value of pursuing a conventional career is very low compared to that of pursuing an entrepreneurial career. Conversely, mentorship or advice that doesn't account for the critical factor of how self-directed someone is, as well as a few other critical factors such as the disposition to explore options, respond to empirical feedback from the market, etc, is likely to be worse than useless.
Replies from: None↑ comment by [deleted] · 2013-06-02T21:29:36.998Z · LW(p) · GW(p)
My second comment is that it seems to me that if one is self-directed and seeks appropriate mentorship, the expected value of pursuing a conventional career is very low compared to that of pursuing an entrepreneurial career.
Can you expand on this? How does one seek appropriate mentorship?
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-29T07:10:04.333Z · LW(p) · GW(p)
Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary.
Isn't $36K/yr the modal MIRI salary? That doesn't feel like it should be too hard on a $100K/yr software developer salary considering that charitable donations are tax-deductible up to 50% of your income (supposedly). If one donated $36K/yr out of their software developer salary, at $64K/yr they'd still be earning much more than a typical nonprofit employee (heck, much more than a typical college graduate), and if they were to pretend they were working at a nonprofit and subsist on $36K/yr themselves, they could probably subsidize 2 people at the modal MIRI salary (after several years' worth of promotions/raises).
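A minimal sketch of that arithmetic, pre-tax and setting aside the deduction limits and employer-cost issues raised in the replies below:

```python
# Minimal sketch of the comparison above, pre-tax and ignoring deduction limits
# and the employer-cost issues raised in the replies.
dev_salary = 100_000   # assumed software developer salary
miri_salary = 36_000   # modal MIRI salary cited above

# Option 1: donate one MIRI salary and live on the remainder.
remainder_after_one_salary = dev_salary - miri_salary  # $64,000
# Option 2: live on a MIRI salary yourself and donate the remainder.
donated = dev_salary - miri_salary                     # $64,000
salaries_covered = donated / miri_salary               # ~1.8, approaching 2 with raises

print(remainder_after_one_salary, donated, round(salaries_covered, 2))
```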
Replies from: shminux, wubbles, Lumifer, katydee↑ comment by Shmi (shminux) · 2013-05-29T07:44:30.867Z · LW(p) · GW(p)
$36k/yr salary works out to be about $50k-$70k gross expense (including benefits, insurance, taxes etc) for a regular employer, not sure how much it is for a non-profit like MIRI.
↑ comment by wubbles · 2013-05-30T00:19:19.962Z · LW(p) · GW(p)
10% is the charitable giving limit. There is another thing to be asked about, and that is the impact of the job. If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs. If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings, and at Vanguard I wouldn't be making $100 K a year. Someone working as a tobacco farmer to raise money for cancer research has some misplaced priorities.
Replies from: ESRogs, Lumifer, CarlShulman, Osuniev, ESRogs, Eugine_Nier↑ comment by ESRogs · 2013-05-30T01:18:54.768Z · LW(p) · GW(p)
Where is that 10% number coming from? Looks to me like the limit is at least 20% in the US, and up to 50% for some organizations.
(BTW, can someone from MIRI or anyone else tell us if they're a 50% organization?)
EDIT: and by the way, that's just the limit on what's tax-deductible. There's no legal limit on how much you can actually give.
Replies from: malo, John_Maxwell_IV↑ comment by Malo (malo) · 2013-06-04T03:06:03.529Z · LW(p) · GW(p)
MIRI is a 50% organization.
See IRS Exempt Organizations Select Check and click the “Deductibility Status”
Replies from: lukeprog, ESRogs↑ comment by lukeprog · 2013-08-27T22:16:08.138Z · LW(p) · GW(p)
Malo knows this, but I'll say it publicly:
In general, we suspect there are few people for whom it's healthy to actually be giving away 50% of their income.
Replies from: somervta↑ comment by somervta · 2013-08-28T05:07:53.690Z · LW(p) · GW(p)
I understand why you said this, but most people interested in this are interested in the transition from 10% to >10% (say, 20), not in 10% to 50%. I presume you would estimate a higher number for whom this is healthy?
Replies from: lukeprog↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-30T08:36:18.905Z · LW(p) · GW(p)
I guess we also have to worry about state and maybe even city-specific tax laws too, huh?
↑ comment by Lumifer · 2013-08-28T15:51:55.945Z · LW(p) · GW(p)
If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs.
You could always go work for the IRS. It employs a lot of tax lawyers.
But there's a bigger issue: you think that the work of a (privately employed) tax lawyer intrinsically harms the ability of the US government to spend? That belief has LOTS of issues. I'll start with two: One, why do you think the capability of the US government to spend is an unalloyed good thing? And two, do you happen to know the volume (say, in feet of shelf space) of the current tax laws, regulations, and rulings? I'd recommend you find out and then think about whether any moderately complicated business can comply with them without the help of a tax lawyer.
If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings
Sigh. First, Vanguard is not part of Wall Street. Second... you really should not believe everything the popular media keeps feeding you.
Replies from: wubbles, private_messaging↑ comment by wubbles · 2013-08-29T01:42:52.518Z · LW(p) · GW(p)
By "Wall Street" I'm including the Buy Side as well as the Sell Side. The big buyside firms like Fidelity and Charles Schwab sell products that most people shouldn't buy. Insurance probably has a better case to buy some actively managed products, or some exotic derivatives, but I don't know why it can't do it itself.
To the extent that finance reallocates risk it can provide a positive utility benefit. However, the very productive businesses have questionable utility. Promoting active trading, picking hot funds, etc., all eat into the returns clients can expect. Justify the existence of Charles Schwab's S&P 500 index fund, with an expense ratio twice that of Vanguard's. The most profitable divisions of investment banks tend to be the ones with the least competition, and hence the most questionable social benefit.
I'm aware Dodge and Cox is in SF, and Vanguard in Valley Forge, Blackrock in Princeton, etc. However, they are all on "the Street".
The IRS doesn't pay well: for government pay one might as well work for NASA and accomplish something fun.
↑ comment by private_messaging · 2013-08-28T17:08:26.771Z · LW(p) · GW(p)
Tax lawyers cannot decrease the taxes taken by the US government in the long run, because the US government gets to make the law adjusting for the existence of tax lawyering. This is why I have absolutely no qualms about employing a tax lawyer in the US.
↑ comment by CarlShulman · 2013-08-29T02:50:00.624Z · LW(p) · GW(p)
10% is the charitable giving limit.
Not in the U.S. (note these are in pre-tax earnings, so they translate into less in foregone consumption than they do in donations made).
There are limits to how much you can deduct, but they're very high.
For most people, the limits on charitable contributions don't apply. Only if you contribute more than 20% of your adjusted gross income to charity is it necessary to be concerned about donation limits. If the contribution is made to a public charity, the deduction is limited to 50% of your contribution base. For example, if you have an adjusted gross income of $100,000, your deduction limit for that year is $50,000.
Regarding this:
If I worked on Wall Street anywhere but Vanguard I would be bilking people out of their life savings, and at Vanguard I wouldn't be making $100 K a year. Someone working as a tobacco farmer to raise money for cancer research has some misplaced priorities.
See this essay:
Goldman has 32,000 employees. An upper bound for the harm caused by the marginal employee is thus the total harm caused divided by 32,000. For the harm to outweigh the good, Goldman would therefore have to be killing at least 3.2 million young people each year, or doing something else that is similarly harmful. That would mean that Goldman Sachs would need to be responsible for around 5% of all deaths in the world. Bear in mind that Goldman Sachs only makes up 22% of American investment banking, and 3% of the American financial industry - if the rest of finance is similarly bad, then it would imply that finance is doing something as bad as causing all the deaths in the world.
Let’s consider the American financial industry in general. Upcoming Giving What We Can research estimates that it would take $200 billion a year to move everyone in the world above the $1.25 poverty line. That figure will only be $74 billion in 2030. The employees of the financial sector could do this if they transferred (e.g. via GiveDirectly) 30-75% of their salaries to those in extreme global poverty (depending on what date you want to achieve the goal by). In other words, if everyone in finance were Earning to Give, it would be possible to end extreme global poverty within the next twenty years. Harm would only dominate if the financial sector is doing something roughly as bad as single handedly causing all global poverty.
↑ comment by Osuniev · 2013-08-28T22:54:48.476Z · LW(p) · GW(p)
THIS. Although I'm unsure about the particulars you mention here, being a European, people and effective altruists need to realize that your job is INSIDE the world you live in. Estimating how much good you're producing is not just about how much money/time you're giving to effective charities, but also how much your way of life is helping/damaging the world.
Replies from: ygert↑ comment by ygert · 2013-08-29T02:59:30.192Z · LW(p) · GW(p)
I'm not convinced. The amount of saved lives, QALYs, or whatever you are counting that the US government welfare program gets per dollar is (or seems to be to me) quite a bit less than the amount that, say, the AMF could get with that money. I don't know how many dollars per QALY US government welfare manages to get, but I wouldn't be surprised if it were on the order of $1000-$10000 per QALY. And that's not even counting the fact that even if the US government had that bit more money from you not being a tax lawyer, that money would not all go to welfare and other such efficient (relative to what else the government spends money on) projects. I would imagine a fair portion would go to, say, bombing Syria, or hiring an extra parking-meter enforcer, or such inefficient stuff that gets an even worse $/QALY result.
And that is still not to mention the fact that some of that money would go to, say, funding the NSA to spy on your phone calls and read your email, or to the TSA to harass, strip-search, and detain you, which are net negatives.
And even that is not counting that MIRI may end up having a QALY/$ result far, far higher than anything the AMF or whoever could ever hope of possibly getting.
I'm not saying you're flat-out wrong, and it is something to take into consideration when figuring out the altruistic impact of your job, but taking into account these objections, it seems highly unlikely that the marginal dollar from the government goes far enough to weigh very heavily in one's analysis.
Replies from: Sithlord_Bayesian, Osuniev↑ comment by Sithlord_Bayesian · 2013-08-29T21:36:49.088Z · LW(p) · GW(p)
On the topic of how much it takes to save a QALY in the US:
"Most, but not all, decision makers in the United States will conclude that interventions that cost less than $50,000 to $60,000 per QALY gained are reasonably efficient. An example is screening for hypertension, which costs $27,519 per life-year gained in 40-year-old men.3, 8 For interventions that cost $60,000 to approximately $175,000 per QALY, certain decision makers may find the interventions sufficiently efficient; most others will not agree."
-from http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1497852/
The first paragraph of this gives more on the cost of QALYs in the US. So, kidney dialysis is an intervention that is paid for by the government in the US, and it comes in at more than $100,000 per QALY saved.
Since marginal funding generally goes to pay for interventions which are no more effective than those already being paid for, I wouldn't expect the cost of a marginal QALY to be below (say) $50,000.
↑ comment by Osuniev · 2013-08-29T17:53:49.683Z · LW(p) · GW(p)
I'm not sure if you were answering my comment or wubbles's one. What I was saying was that you need to take into account the negative impact your job and way of life have on the world.
I agree that the US government probably is terrible at using tax money to better the world.
↑ comment by ESRogs · 2013-05-30T01:31:55.778Z · LW(p) · GW(p)
If I worked on Wall Street ... I would be bilking people out of their life savings
Do you actually think that a finance professional who donated a significant portion of their income to effective charity would be doing more harm than good? Even given that you can save the life of a child in the developing world for on the order of $2000?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-06-03T12:24:24.916Z · LW(p) · GW(p)
I don't think the problem is finance professionals in general-- it's finance professionals in particularly corrupt parts of the industry.
Figuring out in advance that a job is doing particularly corrupt work seems to be something that people are very bad at-- I don't know whether it's mostly that it would be hard for a neutral observer, or that people don't want the problems of dealing with the consequences to their own lives if they find that their job is destructive.
Replies from: ESRogs↑ comment by ESRogs · 2013-06-05T04:22:11.620Z · LW(p) · GW(p)
Hmm, I was thinking the assumption (which I don't necessarily entirely agree with) was that finance professionals were simply earning money without providing any benefit to society, and so a net negative. It sounds like your comment assumes that some of them are actually actively doing harm (though perhaps unintentionally), beyond just taking their own paycheck's worth out of the productive economy. Is that your understanding?
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2013-06-05T05:38:44.902Z · LW(p) · GW(p)
The mortgage crisis was a result of banks being able to sell mortgages to other banks. This meant that the bank making the loan could make money just by the mortgage being initiated-- the first bank no longer had a strong interest in the loan being repaid.
There were some other pieces to the situation that I don't have clear in my mind at the moment, but I think there were incentives for the mortgage to actually not be repaid and the house to be taken by a bank.
One piece that I am clear on is that there were people who decided it wasn't worth it for banks to keep accurate track of who owned which mortgage, or what had been paid, or what had been agreed to.
This is stealing people's houses. It's a degree of damage which it's hard to imagine being covered by charity.
The other side of the story is that not every bank behaved like that-- not all of finance is fraudulent.
Replies from: Lumifer, ESRogs↑ comment by Lumifer · 2013-08-28T15:43:35.791Z · LW(p) · GW(p)
The mortgage crisis was a result of banks being able to sell mortgages to other banks.
This is not true. In fact, this is probably not even wrong...
Replies from: EHeller↑ comment by EHeller · 2013-08-28T16:34:41.635Z · LW(p) · GW(p)
It's at least somewhat true, if perhaps not well stated: packaged mortgages and derivatives based on packaged mortgages (mortgages sold as investment vehicles to other banks and funds) played a very large role in the crisis.
Without "selling mortgages to other banks" the popping of the housing bubble wouldn't have turned into the liquidity crunch that started in 2008.
Replies from: Lumifer↑ comment by Lumifer · 2013-08-28T17:34:16.400Z · LW(p) · GW(p)
So were mortgages by themselves. Without the widespread availability of mortgages "the popping of the housing bubble wouldn't have turned into the liquidity crunch" too. Or, for that matter, without the fact that the "standard" mortgage is a 30-year fixed -- not, say, a 1/1 ARM.
But anyway, the reason for the contagion from mortgages to liquidity wasn't the ability to sell mortgages. It was the mispricing of mortgage derivatives, specifically the widespread belief that certain tranches of collateralized mortgage obligations (CMOs) were effectively risk-free.
If you want to dig deeper, the real cause was the global asset bubble helped by the too-loose monetary policy in the mid-2000s.
Financial economics is complicated. Snap judgements from the popular press rarely have much relationship to reality.
Replies from: EHeller↑ comment by EHeller · 2013-08-28T20:17:56.664Z · LW(p) · GW(p)
So were mortgages by themselves. Without the widespread availability of mortgages "the popping of the housing bubble wouldn't have turned into the liquidity crunch" too.
Well, sure. And also without houses themselves...
My point is that the statement wasn't false on its face. Repackaging and reselling was A proximate cause of the liquidity crunch; it's not the only cause, but it's a part of what happened.
↑ comment by ESRogs · 2013-06-05T23:38:01.077Z · LW(p) · GW(p)
This is stealing people's houses. It's a degree of damage which it's hard to imagine being covered by charity.
Is it really? I suppose this depends on how many houses any individual is responsible for and how much money they capture per house. I guess that second part is the real issue -- any individual who would be giving to charity probably only captures a fraction of what they earn for their firm.
But if you could capture the whole value of a predatory mortgage and convert it into developing world lives saved, it's not hard to imagine the numbers adding up. (One American family goes bankrupt and 20 Malawian children who otherwise would have died in childhood don't? On the face of it that looks like a pretty positive net outcome.)
If you can do outsized damage significantly beyond what you can capture as income though, then I suppose it gets a bit tougher to justify.
Replies from: Desrtopa↑ comment by Desrtopa · 2013-06-06T00:51:21.301Z · LW(p) · GW(p)
(One American family goes bankrupt and 20 Malawian children who otherwise would have died in childhood don't? On the face of it that looks like a pretty positive net outcome.)
If we're talking about donations on the scale of the activities that went into the mortgage crisis, I think you'd start to suffer seriously diminishing returns.
Even if you didn't, there are other problems you'd run into, such as the limited ability of the Malawian (or other impoverished African) society and economy to accommodate such a sudden spike in children surviving to adulthood. The lives that you save from extermination at the hands of malaria or other preventable causes are probably mostly going to be relatively lousy or short due to other causes, pending much further investment.
Replies from: ESRogs↑ comment by ESRogs · 2013-06-06T18:41:04.278Z · LW(p) · GW(p)
As I understood it, the hypothetical was a single individual deciding to work in finance and donate a large portion of their income to efficient charity. In that case I don't think the diminishing returns are so much of an issue.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2013-06-15T04:06:56.835Z · LW(p) · GW(p)
I would worry more about negative flow-through effects of a decline in trust and basic decency in society. I think those are much more clear than flow-through effects of positive giving. I'm not sure if this outweighs the 20-to-1 ratio.
↑ comment by Eugine_Nier · 2013-08-28T04:44:57.428Z · LW(p) · GW(p)
If I were to be a tax lawyer, I would be directly harming the ability of the US government to spend on social welfare programs.
Government social welfare spending is notoriously inefficient. So if your client is at all generous with his money you're coming out ahead. Heck even if he doesn't give to charity but does use the money to invest in productive enterprises, you're probably coming out ahead. And that's before taking into account how you spend your money.
↑ comment by Lumifer · 2013-08-28T01:02:31.861Z · LW(p) · GW(p)
The cost (to an organization) of an employee is more than just his salary, often considerably more. There is health insurance and other benefits, payroll taxes, infrastructure support (e.g. a computer, a desk to put it on, a room to put the desk in), etc.
↑ comment by katydee · 2013-05-30T01:31:39.627Z · LW(p) · GW(p)
One potentially relevant note for anyone considering this is that 100 - 36 = 64, not 74.
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-30T08:33:25.100Z · LW(p) · GW(p)
Thanks, fixed. I appreciate the correction... no need to retract your comment! :)
Replies from: katydee↑ comment by katydee · 2013-05-30T09:22:56.630Z · LW(p) · GW(p)
I know the comment was probably fine, but overall it seemed like it could be read as unnecessarily snarky and hence lower the tone-- PMing you the correction would have been a better move.
All in all I think that the standard for discussion here on LessWrong could be increased a lot if people stopped giving "wiseass" replies to things, were more forgiving of minor errors (while still pointing them out), and so on.
Be the change you want to see on LessWrong!
Replies from: somervta↑ comment by JonahS (JonahSinick) · 2013-05-28T19:44:42.413Z · LW(p) · GW(p)
Every full-time employee at a nonprofit requires at least 10 unusually generous donors or 1 exceptionally generous donor to pay their salary.
If you define "generous" by "amount of capital" then this is tautologically true. But by this standard, extraordinarily wealthy people are capable of being exceptionally exceptionally exceptionally generous. I'd recur to my remark about the Giving Pledge. I believe that the projects of highest humanitarian value will generally get funded.
I should pause here to remark [...] but this can possibly be purchased elsewhere via living on the West or East Coast and hanging around with others who are earning-to-give or working directly.
In principle this could fall under the "unusual values" consideration that I raise above. But I don't think that the sociological phenomenon that you seem to be implying exists prevails in practice. I think that there are a lot of funders who are not risk-averse, and indeed, many who are actively attracted to high-risk projects.
Replies from: Eliezer_Yudkowsky, JonahSinick↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T20:37:22.203Z · LW(p) · GW(p)
Well, if James Simons wanted to retire from Renaissance and work on FAI full-time, it would not be entirely obvious to me that this was a bad move, but only if Simons had enough in the bank to also pay as much other top-flight math talent as could reasonably be used, and was already so paying, such that there was no marginal return to his further earning power relative to existing funds.
This situation has not yet arisen. Unfortunately.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-28T21:12:38.106Z · LW(p) · GW(p)
I think that James Simons is an example of someone with an unusually strong comparative advantage at making money. But this wouldn't necessarily have been clear a priori: if you put yourself in Simons' shoes in 1980, the expected earnings of going into finance would be much lower than his actual earnings turned out to be. So it's not clear that he would have done better to "earn to give" than to do something of direct humanitarian value (though maybe it was clear from the outset that his comparative advantage was in finance).
↑ comment by JonahS (JonahSinick) · 2013-05-28T20:07:10.004Z · LW(p) · GW(p)
Edit: [Moved comment to a different place]
↑ comment by JonahS (JonahSinick) · 2013-05-28T20:28:53.557Z · LW(p) · GW(p)
I'll also highlight another point implicit in my post: even if one assumes that there's not enough funding in the nonprofit world for the projects of highest value, there may be such funding available in other contexts (for-profit, academic and government). This makes the argument for earning to give weaker.
I recognize that I haven't addressed the specific subject of Friendly AI research, and will do so in future posts.
Replies from: Eliezer_Yudkowsky, MichaelVassar↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T20:32:29.131Z · LW(p) · GW(p)
I understand if your priorities aren't our priorities. My concrete example reflex was firing, that's all.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-28T20:36:01.497Z · LW(p) · GW(p)
I think that there's substantial overlap between my values and MIRI staff's values, and that the difference regarding the relative value of "earning to give" is epistemic rather than normative. But obviously there's a great deal more that needs to be said about the epistemic side, with reference to the concrete example of Friendly AI.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T20:43:20.708Z · LW(p) · GW(p)
I can imagine someone thinking that FHI was a better use of money than MIRI, or CFAR, or CSER, or the Foresight Institute, or brain-scanning neuroscience, or rapid-response vaccines, or any number of startups, but considering AMF as being in the running at all seems to require either a value difference or really really different epistemics about what affects the fate of future galaxies.
Replies from: Benja, JonahSinick↑ comment by Benya (Benja) · 2013-05-28T22:43:43.858Z · LW(p) · GW(p)
Realistic amounts of difference in epistemics + the "humans best stick to the mainline probability" heuristic seem enough (where by "realistic" I mean "of the degree actually found in the world"). I.e., I honestly believe that there are many people out there who would care the hell about the fate of future galaxies if they alieved that they had any non-vanishing chance of significantly influencing that fate (and to choose the intervention that influences it in the desired direction).
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T23:10:16.587Z · LW(p) · GW(p)
If you're one of 10^11 sentients to be born on Ancient Earth with a golden opportunity to influence a roughly 10^80-sized future, what exactly is a 'vanishing chance'... eh, let's all save it until later.
Replies from: Benja, Mitchell_Porter, shminux↑ comment by Benya (Benja) · 2013-05-28T23:56:57.492Z · LW(p) · GW(p)
I meant that the alieved probability is small in absolute terms, not that it is small compared to the payoff. That's why I mentioned the "stick to the mainline probability" heuristic. I really do believe that there are many people who, if they alieved that they (or a group effort they could join) could change the probability of a 10^80-sized future by 10%, would really care; but who do not alieve that the probability is large enough to even register, as a probability; and whose brains will not attempt to multiply a not-even-registering probability with a humongous payoff. (By "alieving a probability" I simply mean processing the scenario the way one's brain processes things it assigns that amount of credence, not a conscious statement about percentages.)
This is meant as a statement about people's actual reasoning processes, not about what would be reasonable (though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI; in any case it seems to me that the more important unreasonableness is requesting mountains of evidence before alieving a non-vanishing probability for weird-sounding things).
[ETA: I find it hard to put a number on the not-even-registering probability the sort of person I have in mind might actually alieve, but I think a fair comparison is, say, the "LHC will create black holes" thing -- I think people will tend to process both in a similar way, and this does not mean that they would shrug it off if somebody counterfactually actually did drop a mountain of evidence about either possibility on their head.]
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T22:48:00.533Z · LW(p) · GW(p)
though I did think that you didn't feel that multiplying a very small success probability with a very large payoff was a good reason to donate to MIRI
Because on a planet like this one, there ought to be some medium-probable way for you and a cohort of like-minded people to do something about x-risk, and if a particular path seems low probability, you should look for one that's at least medium-probability instead.
Replies from: Benja↑ comment by Benya (Benja) · 2013-05-29T22:57:04.536Z · LW(p) · GW(p)
Ok, fair enough. (I had misunderstood you on that particular point, sorry.)
↑ comment by Mitchell_Porter · 2013-05-31T01:45:32.583Z · LW(p) · GW(p)
If there was ever a reliable indicator that you're wrong about something, it is the belief that you are special to the order of 1 in 10^70.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T03:36:01.080Z · LW(p) · GW(p)
So do you believe in the Simulation Hypothesis or the Doomsday Argument, then? All attempts to cash out that refusal-to-believe end in one or the other, inevitably.
Replies from: Mitchell_Porter, komponisto, shminux↑ comment by Mitchell_Porter · 2013-05-31T14:28:59.420Z · LW(p) · GW(p)
From where I stand, it's more like arcane meta-arguments about probability are motivating a refusal-to-doubt the assumptions of a prized scenario.
Yes, I am a priori skeptical of anything which says I am that special. I know there are weird counterarguments (SIA) and I never got to the bottom of that debate. But meta issues aside, why should the "10^80 scenario" be the rational default estimation of Earth's significance in the universe?
The 10^80 scenario assumes that it's physically possible to conquer the universe and that nothing would try to stop such a conquest, both enormous assumptions... astronomically naive and optimistic, about the cosmic prospects that await an Earth which doesn't destroy itself.
Replies from: Eliezer_Yudkowsky, shminux↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T17:40:52.339Z · LW(p) · GW(p)
Okay, so that's the Doomsday Argument then: Since being able to conquer the universe implies we're 10^70 special, we must not be able to conquer the universe.
Calling the converse of this an arcane meta-argument about probability hardly seems fair. You can make a case for Doomsday but it's not non-arcane.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-05-31T21:53:06.335Z · LW(p) · GW(p)
Perhaps this is hairsplitting but the principle I am employing is not arcane: it is that I should doubt theories which imply astronomically improbable things. The only unusual step is to realize that theories with vast future populations have such an implication.
I am unable to state what the SIA counterargument is.
Replies from: Luke_A_Somers, Eliezer_Yudkowsky↑ comment by Luke_A_Somers · 2013-06-02T15:08:41.573Z · LW(p) · GW(p)
In the theory that there are astronomically large numbers of people, it is a certainty that some of them came first. The probability that YOU are one of those people is equal to the probability that YOU are any one of those other people. However, it does define a certain small narrow equivalence class that you happen to be a member of.
It's a bit like the difference between theorizing that: A) given that you bought a ticket, you'll win the lottery, and B) given that the lottery folks gave you a large sum, that you had the winning ticket.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-06-02T22:48:54.827Z · LW(p) · GW(p)
That's not the "SIA counterargument", which is what I want to hear (in a compact form, that makes it sound straightforward). You're just saying "accept the evidence that something ultra-improbable happened to you, because it had to happen to someone".
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2013-06-03T11:17:17.772Z · LW(p) · GW(p)
I was only replying to the first paragraph, really. Even under the SSA there's no real problem here. I don't see how the SIA makes matters worse.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T22:22:13.137Z · LW(p) · GW(p)
The only unusual step is to realize that theories with vast future populations have such an implication.
Right. That's arcane. Mundane theories have no need to measure the population of the universe.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2013-05-31T22:44:23.686Z · LW(p) · GW(p)
But it's still a simple idea once you grasp it. I was hoping you could state the counterargument with comparable simplicity. What is the counterargument at the level of principles, which neutralizes this one?
↑ comment by Shmi (shminux) · 2013-05-31T15:12:14.050Z · LW(p) · GW(p)
I largely agree with your skepticism. I would go even farther and say that even if the 10^80 scenario happens, what we do now can only influence it by random chance, because the uncertainty in the calculations of the consequences of our actions in the near term on the far future overwhelms the calculations themselves. That said, we should still do what we think is best in the near term (defined by our estimates of the uncertainty being reasonably small), just not invoke the 10^80 leverage argument. This can probably be formalized, by assuming that the prediction error grows exponentially with some relevant parameter, like time or the number of choices investigated, and calculating the exponent from historical data.
↑ comment by komponisto · 2013-05-31T04:33:23.800Z · LW(p) · GW(p)
Doomsday for me, I think. Especially when you consider that it doesn't mean doomsday is literally imminent, just "imminent" relative to the kind of timescale that would be expected to create populations on the order of 10^80.
In other words, it fits with the default human assumption that civilization will basically continue as it is for another few centuries or millennia before being wiped out by some great catastrophe.
↑ comment by Shmi (shminux) · 2013-05-31T04:30:14.818Z · LW(p) · GW(p)
Do you mind elaborating on this inevitability? It seems like there ought to be other assumptions involved. For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one. Or that they will artificially limit the number of individuals. Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts. All of these result in the rather small total number of individuals existing at any point in time.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T17:45:34.218Z · LW(p) · GW(p)
For example, I can easily imagine that humans will never be able to colonize even this one galaxy, or even any solar system other than this one.
Counts as Doomsday, also doesn't work because this solar system could support vast numbers of uploads for vast amounts of time (by comparison to previous population).
Or that they will artificially limit the number of individuals.
This is a potential reply to both Doomsday and SA but only if you think that 'random individual' has more force than a similar argument from 'random observer-moment', i.e. to the second you reply, "What do you mean, why am I near the beginning of a billion-year life rather than the middle? Anyone would think that near the beginning!" (And then you have to not translate that argument back into a beginning-civilization saying the same thing.)
Or maybe the only consistent CEV is that of a single superintelligence of which human minds will be tiny parts.
...whereupon we wonder something about total 'experience mass', and, if that argument doesn't go through, why the original Doomsday Argument / SH should either.
Replies from: shminux, army1987↑ comment by Shmi (shminux) · 2013-05-31T20:48:22.312Z · LW(p) · GW(p)
Thanks, I'll chew on that a bit. I don't understand the argument in the second and third paragraphs. Also, it's not clear to me whether by "counts as doomsday" you mean the standard doomsday with the probability estimates attached, or some generalized doomsday, with no clear timeline or total number of people estimated.
Anyway, the feeling I get from your reply is that I'm missing some basic background stuff here I need to go through first, not the usual "this guy is talking out of his ass" impression when someone invokes anthropics in an argument.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T21:40:35.299Z · LW(p) · GW(p)
No, this is talking-out-of-our-ass anthropics, it's just that the anthropic part comes in when you start arguing "No, you can't really be in a position of that much influence", not when you're shrugging "Sure, why shouldn't you have that much influence?" Like, if you're not arriving at your probability estimate for "Humans will never leave the solar system" just by looking at the costs of interstellar travel, and are factoring in how unique we'd have to be, this is where the talking-out-of-our-ass anthropics comes in.
Though it should be clearly stated that, as always, "We don't need to talk out of our ass!" is also talking out of your ass, and not necessarily a nicer ass.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-31T21:57:03.973Z · LW(p) · GW(p)
it's just that the anthropic part comes in when you start arguing "No, you can't really be in a position of that much influence", not when you're shrugging "Sure, why shouldn't you have that much influence?"
Or when you (the generic you) start arguing "Yes, I am indeed in a position of that much influence", as opposed to "There is an unknown chance of me being in such a position, which I cannot give a ballpark estimate for without talking out of my ass, so I won't"?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T22:21:25.871Z · LW(p) · GW(p)
When you try to say that there's something particularly unknown about having lots of influence, you're using anthropics.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-05-31T22:36:32.224Z · LW(p) · GW(p)
Huh. I don't understand how refusing to speculate about anthropics counts as anthropics. I guess that's what you meant by
Though it should be clearly stated that, as always, "We don't need to talk out of our ass!" is also talking out of your ass, and not necessarily a nicer ass.
I wonder if your definition of anthropics matches mine. I assume that any statement of the sort
All other things equal, an observer should reason as if they are randomly selected from the set of
is anthropics. I do not see how refusing to reason based on some arbitrary set of observers counts as anthropics.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-31T23:18:55.637Z · LW(p) · GW(p)
Right. So if you just take everything at face value - the observed laws of physics, the situation we seem to find ourselves in, our default causal model of civilization - and say, "Hm, looks like we're collectively in a position to influence the future of the galaxy," that's non-anthropics. If you reply "But that's super improbable a priori!" that's anthropics. If you counter-reply "I don't believe in all this anthropic stuff!" that's also an implicit theory of anthropics. If you treat the possibility as more "unknown" than it would be otherwise, that's anthropics.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-06-01T00:19:35.882Z · LW(p) · GW(p)
OK, I think I understand your point now. I still feel uneasy about projections like you influencing 10^80 people in some far future, mainly because I think they do not account for the unknown unknowns and so are lost in the noise and ought to be ignored, but I don't have a calculation to back up this uneasiness at the moment.
Replies from: Arkanj3l↑ comment by Arkanj3l · 2013-06-01T15:52:04.340Z · LW(p) · GW(p)
Does he?
Replies from: shminux↑ comment by Shmi (shminux) · 2013-06-01T20:31:22.916Z · LW(p) · GW(p)
Does he what?
↑ comment by A1987dM (army1987) · 2013-05-31T20:07:38.047Z · LW(p) · GW(p)
if you think that 'random individual' has more force than a similar argument from 'random observer-moment'
I've had a vague idea as to why the random observer-moment argument might not be as strong as the random individual one, though I'm not very confident it makes much sense. (But neither argument sounds anywhere near obviously wrong to me.)
↑ comment by Shmi (shminux) · 2013-05-29T08:26:07.798Z · LW(p) · GW(p)
I wonder if this argument can be made precise enough to have its premises and all the intermediate assumptions examined. I remain skeptical of any forecast that far into the future. You presumably mean your confidence in the UFAI x-risk within the next 20-100 years as the minimum hurdle to overcome, with the eternal FAI paradise to follow.
↑ comment by JonahS (JonahSinick) · 2013-05-28T20:52:04.535Z · LW(p) · GW(p)
My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example, rather than to compare it with efforts to improve the far future of humanity.
I think that working in global health in a reflective and goal-directed way is probably better for improving global health than "earning to give" to AMF. Similarly, I think that working directly on things that bear on the long-term future of humanity is probably a better way of improving the far future of humanity than "earning to give" to efforts along these lines.
I'll discuss particular opportunities to impact the far future of humanity later on.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T22:25:36.116Z · LW(p) · GW(p)
My reason for mentioning AMF and global health is that doing so provides a concrete, pretty robustly researched example
That depends on what you want to know, doesn't it? As far as I know the impact of AMF on x-risk, astronomical waste, and total utilons integrated over the future of the galaxies, is very poorly researched and not at all concrete. Perhaps some other fact about AMF is concrete and robustly researched, but is it the fact I need for my decision-making?
(Yes, let's talk about this later on. I'm sorry to be bothersome but talking about AMF in the same breath as x-risk just seems really odd. The key issues are going to be very different when you're trying to do something so near-term, established, without scary ambiguity, etc. as AMF.)
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-29T00:27:40.151Z · LW(p) · GW(p)
I'm somewhat confused by the direction that this discussion has taken. I might be missing something, but I believe that the points related to AMF that I've made are:
GiveWell's explicit cost-effectiveness estimate for AMF is much higher than the cost per DALY saved implied by the figure that MacAskill cited.
GiveWell's explicit estimates for the cost-effectiveness of the best giving opportunities in the field of direct global health interventions have steadily gotten lower, and by conservation of expected evidence, one can expect this trend to continue.
The degree of regression to the mean observed in practice suggests that there's less variance amongst the cost-effectiveness of giving opportunities than may initially appear to be the case.
By choosing an altruistic career path, one can cut down on the number of small probability failure modes associated with what one does.
I don't remember mentioning AMF and x-risk reduction together at all. I recognize that it's in principle possible that the "earning to give" route is better for x-risk reduction than it is for improving global health, but I believe the analogy between the two domains is sufficiently strong that my remarks on AMF have relevance (on a meta-level, not on an object level).
Replies from: Eliezer_Yudkowsky, ESRogs↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T01:36:04.154Z · LW(p) · GW(p)
Yeah, I also have the feeling that I'm questioning you improperly in some fashion. I'm mostly driven by a sense that AMF is very disanalogous to the choices that face somebody trying to optimize x-risk charity (or rather total utilons over all future time, but x-risk seems to be the word we use for that nowadays). It seems though that we're trying to have a discussion in an ad-hoc fashion that should be tabled and delayed for explicit discussion in a future post, as you say.
Replies from: loup-vaillant↑ comment by loup-vaillant · 2013-05-29T12:48:24.395Z · LW(p) · GW(p)
If I may list some differences I perceive between AMF and MIRI:
- AMF's impact is quite certain. MIRI's impact feels more like a long shot, or even a pipe dream.
- AMF's impact is sizeable. MIRI's potential impact is astronomic.
- AMF's impact is immediate. MIRI's impact is long term only.
- AMF has photos of children. MIRI has science fiction.
- In mainstream circles, donating to AMF gets you pats on the back, while donating to MIRI gets you funny looks.
Near mode thinking will most likely direct one to AMF. MIRI probably requires one to shut up and multiply. Which is probably why I'm currently giving a little money to Greenpeace, despite being increasingly certain that it's far, far from the best choice.
Replies from: elharo↑ comment by elharo · 2013-05-29T13:59:43.377Z · LW(p) · GW(p)
One more difference:
AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.
MIRI appears to me to have a chance to be massively net negative for humanity. I.e. if AI of the level they predict is actually possible, MIRI might end up creating or assisting in the creation of UFAI that would not otherwise be created, or perhaps not created as soon.
Replies from: Eliezer_Yudkowsky, wedrifid, nshepperd↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T17:20:38.525Z · LW(p) · GW(p)
But what if AMF saves a child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions?
If you try hard enough, you can tell a story where any effort to accomplish X somehow turns out to accomplish ~X, but one must distinguish possibility from the balance of probability.
Replies from: elharo↑ comment by elharo · 2013-05-29T21:51:43.680Z · LW(p) · GW(p)
Yes, and the story where the child who grows up to be a biotechnologist and goes on to weaponize malaria and spread it to millions doesn't pass the balance of probability test. The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI. I do indeed consider it far more likely than not that there will never be the all-powerful AI you fear. And by that standard donations to MIRI are simply ineffective compared to donations to AMF.
However if I'm wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI's work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI's work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so. I'm not sure that's more likely than that MIRI will one day create a FAI, but you can't just multiply by the value of a very positive and very speculative outcome without including the possibility of a very negative and very speculative outcome.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-05-30T10:42:55.578Z · LW(p) · GW(p)
The story that MIRI creates a dangerous AI fails to pass the balance of probability test only to the extent that one believes it is improbable that anyone can create such an AI.
[...]
However if I'm wrong about that and powerful FOOMing UFAIs are in fact possible, then I need to consider whether MIRI's work is wise. If AIs do FOOM, there seems to me to be a very real possibility that MIRI's work will either create a UFAI while trying to create a FAI, or alternatively enable others to do so.
If you increase the probability of uFAI enough for MIRI to be the ones who kill everyone, the probability of someone else doing it goes up even more.
Replies from: elharo↑ comment by elharo · 2013-05-30T11:32:27.610Z · LW(p) · GW(p)
Maybe. I'm not sure about that, though. MIRI is the only person or organization I'm aware of that seems to want to create a world-controlling AI; and it's the world-controlling part that I find especially dangerous. That could send MIRI's AI in directions others won't go. Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
Replies from: loup-vaillant, Richard_Kennaway↑ comment by loup-vaillant · 2013-05-30T18:48:54.203Z · LW(p) · GW(p)
MIRI's stated goal is more meta:
The Machine Intelligence Research Institute exists to ensure that the creation of smarter-than-human intelligence benefits society.
They are well aware of the dangers of creating a uFAI, and you can be certain they will be really careful before they push a button that has the slightest chance of launching the ultimate ending (good or bad). Even then, they may very well decide that "being really careful" is not enough.
Are there other organizations attempting to develop AIs to control the world?
It probably doesn't matter, as any uFAI is likely to emerge by mistake:
Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.
Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal. For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.
↑ comment by Richard_Kennaway · 2013-05-30T12:43:27.979Z · LW(p) · GW(p)
Are there other organizations attempting to develop AIs to control the world? Is anyone else trying to build a benevolent dictator?
Is MIRI attempting to develop any sort of AI? I understood the current focus of its research to be the logic of Friendly AGI, i.e. given the ability to create a superintelligent entity, how do you build one that we would like to have created? This need not involve working on developing one.
↑ comment by wedrifid · 2013-05-30T08:16:04.168Z · LW(p) · GW(p)
AMF's impact is very likely to be net positive for the world under all reasonable hypotheses.
That seems like a bizarre belief to hold. Or perhaps just overwhelmingly shortsighted. There are certainly reasonable hypotheses in which more people alive right now result in worse outcomes a single generation down the line, without even considering extinction level threats and opportunities. The world isn't nearly easy enough to model and optimize for us to be that certain a disruptive influence on that scale will be a net positive under all reasonable hypotheses.
Replies from: elharo, Kawoomba↑ comment by elharo · 2013-05-30T10:27:34.797Z · LW(p) · GW(p)
Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive. Can you really defend a situation in which it is preferable to have living people today die from malaria?
The problem with MIRI-hypothesized AI (beyond its implausibility) is that we don't get to sum over all possible results. We get one result. Even if the chance of a good result is 80%, the chance of a disastrous result is still way too high for comfort.
Replies from: wedrifid↑ comment by wedrifid · 2013-05-31T09:24:20.652Z · LW(p) · GW(p)
Would you care to cite any such reasonable hypotheses? I.e. under what assumptions do you think that saving a random poor person's life is likely to be a net negative? Sum over the number of lives saved and even if one person grows up to be a serial killer, the total is still way positive.
Most obviously it could cause an increase in world GDP without a commensurate acceleration in various risk prevention mechanisms. Species can evolve themselves to extinction and in a similar way humans could easily develop themselves to extinction if they are not careful or lucky. Messing around with various aspects of the human population would influence this... in one direction or another. It's damn hard to predict.
Having a heuristic "short term lives saved == good" is useful. It massively simplifies calculations, and if you have no information either way about side effects of the influence then it works well enough. But it would be a significant epistemic error to confuse a heuristic for operating under uncertainty with genuine confidence about the unpredictable (or difficult-to-predict) system in which you are operating.
Can you really defend a situation in which it is preferable to have living people today die from malaria?
What is socially defensible is not the same thing as what is accurate. But that isn't the point here. All else being equal I would prefer AMF to have an extra million dollars to spend than to not have that extra million dollars. The expected value is positive. What I criticise is "very likely under all reasonable hypotheses" which is just way off. I do not have the epistemic resources to arrive at that confidence and I believe that you are arriving at that conclusion in error, not because of additional knowledge or probabilistic computational resources.
↑ comment by Kawoomba · 2013-05-30T08:27:03.132Z · LW(p) · GW(p)
In fact, I'd expect AMF to have a net-negative impact (and a large one at that) a few decades down the line, unless there are unrealistic, unprecedented, imperialistic-in-scope, gigantic efforts to educate and provide for the dozen then-adult children (and their dozen children) a saved-from-malaria child can typically have.
Here's Tom Friedman in his recent "Tell Me How This Ends" column:
I’ve been traveling to Yemen, Syria and Turkey to film a documentary on how environmental stresses contributed to the Arab awakening. As I looked back on the trip, it occurred to me that three of our main characters — the leaders of the two Yemeni [different countries, same dynamic] villages that have been fighting over a single water well and the leader of the Free Syrian Army in Raqqa Province, whose cotton farm was wiped out by drought — have 36 children among them: 10, 10 and 16.
It is why you can’t come away from a journey like this without wondering not just who will rule in these countries but how will anyone rule in these countries?
Replies from: elharo↑ comment by elharo · 2013-05-30T10:37:32.696Z · LW(p) · GW(p)
Do you really want to propose that it is better to let children in poor countries die of disease now than to save them, because they might have more children later? My prior on this is that you're trolling, but if you really believe that and are willing to state it that baldly, then it might be worth having a serious conversation about population.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-30T11:00:39.918Z · LW(p) · GW(p)
I'm not trolling. It's a very touchy subject for sure. I would certainly highly prefer a world in which AMF succeeds if it is coupled with the necessary, massive changes to deal with the consequences of AMF succeeding.
A world in which just AMF succeeds, but in which the changes to deal with the 5 or 6 additional persons for every child surviving malaria do not happen, is heading towards even greater disaster. The birth rate is not a "might have more children"; it's a probabilistic certainty, without the aforementioned new pseudo-imperialism.
However, the task of nation-building and uplifting civil-war ravaged tribal societies is a task that dwarfs AMF (plenty of recent examples), or even the worldwide charity budget. Yet without it, what's gonna happen, other than mass famines and other catastrophes?
I'm not talking about general Malthusian dynamics, but about countries whose population far exceeds the natural resources to support it, and which often do not offer the political environment, the infrastructure or the skills to exploit and develop what resources they have, other than trade them to the Chinese to prop up the ruling classes.
I'd expect a world in which AMF succeeds, leading to predictable tragedies on a more massive scale down the line, to be worse off than a world without AMF, with tragedies on a smaller scale. (To reiterate: a world with AMF succeeding and a long-term perspective for the survivors would be much better still.)
I'd rather contribute not to charities which promise short-term benefits with probable long-term calamities, but to, e.g., education projects and the development of stable civil institutions in such countries. (The picture gets fuzzier because eliminating certain disruptive diseases also has such positive externalities, though to a smaller degree.)
Replies from: None, blogospheroid↑ comment by [deleted] · 2013-05-30T11:18:12.514Z · LW(p) · GW(p)
This ignores the social-scientific consensus that reducing infant mortality leads to reductions in family sizes. The moral dilemma you're worried about doesn't exist.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-30T11:29:26.609Z · LW(p) · GW(p)
Citations needed. The relevant time horizons here are only 2-3 generations; do you suggest that societal norms will adapt faster than that (Edit: without accompanying larger efforts to build civil institutions)? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.
Replies from: satt↑ comment by satt · 2013-05-30T23:54:07.499Z · LW(p) · GW(p)
Citations needed.
The phenomenon HaydnB refers to is the demographic transition, the theory of which is perhaps the best-established theory in the field of demography. Here are two highly-cited reviews of the topic.
The relevant time horizons here are only 2-3 generations, do you suggest that societal norms will adapt faster than that? The population explosion in, say, Bangladesh (1951: 42 million, 2011: 142 million) seems to suggest otherwise.
HaydnB's referring to family size, you're referring to population, and it's quite possible for the second to increase even as the first drops. This appears to be what happened in Bangladesh. I have not found any data stretching back to 1951 for completed family size in Bangladesh, but here is a paper that plots the total fertility rate from 1963 to 1996: it dropped from just under 8 to about 3½. I did find family size data going back to 1951 for neighbouring India: it fell from 6.0 in 1951 to 3.3 in 1997, with a concurrent decrease in infant mortality.
So I'm not HaydnB, but I have to answer your question with a "yes": fertility norms can change, and have changed, greatly in the course of 2-3 generations. Bangladesh's population, incidentally, is due to top out in about 40 years at ~200 million, only 40% higher than its current population.
Replies from: Kawoomba↑ comment by Kawoomba · 2013-05-31T09:55:32.121Z · LW(p) · GW(p)
From the first review:
During the transition, first mortality and then fertility declined, causing population growth rates first to accelerate and then to slow again, moving toward low fertility, long life and an old population.
From the second review:
It is true, however, that mortality reductions in poor countries and the consequent rapid growth of population may impede capital formation and other aspects of development. (Goes on to call the consequences mostly positive.)
Like Democratic Peace Theory, the demographic transition has historically been modeled on the now-developed countries. At least that is where we get the latter "stages" from. Countries in which the reduction in mortality was achieved from within the country, a token of the relative strength of some aspects of its civil society. Not countries in which mortality reduction would be a solely external influence, transplanted from a more developed society into a tribal society.
but I have to answer your question with a "yes": fertility norms can change, and have changed
Note that the question was whether societal norms will adapt faster than that, not whether they can and have in e.g. European countries. Especially if - and that's the whole point of the dilemma - there are stark interventions (AMF) only in infant and disease mortality, without the much more difficult and costly interventions in nation building.
Will reducing infant / disease mortality alone thrust a country into a more developed status? Rather the contrary, since even the sources agree that the immediate effect would be even more of the already catastrophic population growth. Once you're over the brink, a silver lining at the horizon isn't as relevant.
As with the Bangladesh example, "only 40% higher than its current population" (and Bangladesh is comparatively developed anyway): if that figure translated to Sub-Saharan populations (which it doesn't), that would already be a catastrophe right there.
The question is, without nation building, would such countries be equipped to deal with just a 40% population rise over 40 years, let alone the one that's actually prognosticated?
HaydnB doesn't see the dilemma, since he seems to say that taking a tribal society and then externally implementing mortality reductions without accompanying large-scale nation-building will still reduce family sizes drastically, to the point that there are no larger-scale catastrophes, even without other measures.
Replies from: satt↑ comment by satt · 2013-05-31T22:02:08.160Z · LW(p) · GW(p)
[quotations from reviews about population growth, emphasizing rapid/accelerating population growth]
These are consistent with what I wrote. Moreover, the world has already passed through the phase of accelerating population growth. The world's population was increasing most rapidly 20-50 years ago (the exact period depends on whether one considers relative or absolute growth rates).
Like Democratic Peace Theory, the demographic transition has historically been modeled after the now developed countries. [...] Not countries in which mortality reduction would be a solely external influence, transplanted from a more developed society into a tribal society.
True enough, but mostly a moot point nowadays, because we're no longer just predicting a fertility decline based on history; we're watching it happen before our eyes. The global total fertility rate (not just mortality) has been in freefall for 50 years and even sub-Saharan Africa has had a steadily falling TFR since 1980.
Note that the question was whether societal norms will adapt faster than that, not whether they can and have in e.g. European countries.
Right, but the fact that they can change, have changed, and continue to change (in two large, poor, and very much non-European countries) is good evidence they'll carry on changing. If medical interventions and other forms of non-institutional aid haven't arrested the TFR decline so far, why would they arrest it in future?
Will reducing infant / disease mortality alone thrust a country into a more developed status? Rather the contrary, since even the sources agree that the immediate effect would be even more of the already catastrophic population growth.
The long-run effect matters more than the immediate effect (which ended decades ago).
The question is, without nation building, would such countries be equipped to deal with just a 40% population rise over 40 years, let alone the one that's actually prognosticated?
The question I was addressing was the narrower one of whether reducing infant mortality reduces family sizes. Correlational evidence suggests (though does not prove) it does, maybe with a lag of a few years. I know of no empirical evidence that reductions in infant mortality increase family size in the long run, although they might in the short run.
Still, I might as well comment quickly on the broader question. As far as I know, the First World already focuses on stark interventions (like mass vaccination) more than nation building, and has done since decolonization. This has been accompanied by large declines in infant mortality, TFRs & family sizes, alongside massive population growth. It's unclear to me why carrying on along this course will unleash disaster, not least because the societies you're talking about are surely less "tribal" now than they were 10 or 20 or 50 years ago.
I don't want to come off as Dr. Pangloss here. It's quite possible global disaster awaits. But if it does happen, I'd be very surprised if it were because of the mechanism you're proposing.
↑ comment by blogospheroid · 2013-06-07T07:52:18.025Z · LW(p) · GW(p)
If development of newer institutions is what you are interested in, you can choose to contribute to charter cities or seasteading. That would be an intermediate risk-reward option between a low-risk option like AMF and a high-risk, high-reward one like MIRI/FHI.
↑ comment by nshepperd · 2013-05-30T14:25:21.966Z · LW(p) · GW(p)
I'll grant that MIRI could accelerate the creation of AGI, if their efforts to educate people about UFAI risks are particularly ineffective. But as far as UFAI creation at all is concerned, there are any number of very smart idiots in the world who would love to be on the news as "the first person to program an artificial general intelligence". Or to be the first person to use a general AI to beat the stock market, as soon as enough parts of the puzzle have been worked out to make one by pasting together published math results. (Maybe a slightly more self-aware variation of AIXI-mc would do the trick.)
In my view, AGI is more or less inevitable, and MIRI is seemingly the only group publicly interested in making it safe.
↑ comment by ESRogs · 2013-05-30T02:15:08.682Z · LW(p) · GW(p)
by conservation of expected evidence, one can expect this trend to continue
Not really related to the current discussion, but I want to make sure I understand the above statement. Is this assuming that the trend has not already been taken into account in forming the estimates?
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-30T04:33:16.131Z · LW(p) · GW(p)
Yes — the cost-effectiveness estimate has been adjusted every time a new issue has arisen, but on a case-by-case basis, without an attempt to extrapolate based on the historical trend.
↑ comment by MichaelVassar · 2013-05-29T13:19:01.545Z · LW(p) · GW(p)
I tend to think that if one can make a for-profit entity, that's the best sort of vehicle to pursue most tasks, though occasionally, churches or governments have some value too.
comment by CarlShulman · 2013-05-28T06:00:02.116Z · LW(p) · GW(p)
But a worker has more capacity than a donor does to learn whether small probability failure modes prevail in practice, and can switch to a different job if he or she finds that such a failure mode prevails.
This part seems exactly wrong. When GiveWell or Giving What We Can change their recommendations based on new data or arguments and explain their reasoning, the donations switch rapidly and en masse. EA donations have very little inertia.
Building an organization in a specific field, accumulating field-specific human capital (experience, CV, education), these involve putting years of effort into a particular project or vision. If you later find out that cancer biology was a bad move and you think that renewable energy is more important, your years doing a PhD in that area are now substantially wasted. Careers have very high inertia and investment in cause-specific capital, while earning power is flexible and donations can be highly responsive to new inputs.
I acknowledge that Jobs is a cherry picked example, but I think that the general principle still holds.
It is highly cherry-picked from two directions. Jobs gave up most of his Apple stock so that he captured a relatively small share of Apple's recent rise, and he is generally believed to have had more irreplaceable impact on his company than virtually all CEOs (although still Apple stock did not plummet with his death).
Replies from: Eliezer_Yudkowsky, JonahSinick, Nick_Beckstead↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-28T17:02:31.397Z · LW(p) · GW(p)
Jobs's death was known to be on the way. It would be surprising if the stock plummeted enough at that point to produce a predictable profit for someone shorting it.
↑ comment by JonahS (JonahSinick) · 2013-05-28T06:23:03.821Z · LW(p) · GW(p)
This seems exactly wrong. When GiveWell or Giving What We Can change their recommendations based on new data or arguments and explain their reasoning, the donations switch rapidly and en masse. EA donations have very little inertia.
MacAskill mentioned this in his original article. My response was that cost-effectiveness doesn't vary as much as it initially appears to (though I recognize that my discussion is specific to global health).
Building an organization in a specific field, accumulating field-specific human capital (experience, CV, education), these involve putting years of effort into a particular project or vision. If you later find out that cancer biology was a bad move and you think that renewable energy is more important, your years doing a PhD in that area are now substantially wasted. Careers have very high inertia and investment in cause-specific capital, while earning power is flexible and donations can be highly responsive to new inputs.
I view this as more of an argument in favor of building transferable skills (rather than highly specialized skills) than an argument in favor of earning to give.
It is highly cherry-picked from two directions. Jobs gave up most of his Apple stock so that he captured a relatively small share of Apple's recent rise, and he is generally believed to have had more irreplaceable impact on his company than virtually all CEOs (although still Apple stock did not plummet with his death).
I don't mind having cherry-picked the example – I chose it to get people thinking rather than with the intent of weaving a tight argument.
↑ comment by Nick_Beckstead · 2013-05-28T12:37:31.947Z · LW(p) · GW(p)
I'd add that it isn't obvious whether people working "in the field" are more attuned to small probability failure modes than, say, GiveWell. One reason is that these people only tend to know about small probability failure modes within their own field, and certain very closely related fields. So they don't have a strong basis for comparison. In addition, workers may only know about the low probability failure modes within their own part of the operation, so they may have less of a sense than charity evaluators of how it all hangs together.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-28T16:19:00.121Z · LW(p) · GW(p)
I agree with this point as stated, but think that by thinking about how it all hangs together (or by listening to those who have) before choosing a career trajectory, and by choosing a career that leaves sufficiently many options open, one can "have one's cake and eat it too" — getting both the epistemic benefits from being on the ground and the epistemic benefits from looking at things in a broader way.
Replies from: fburnaby
comment by diegocaleiro · 2013-05-29T00:46:38.199Z · LW(p) · GW(p)
There were more than two hundred applicants in GWWC last time they opened places for a position (or two) where you have no security and hold nearly no income. That is a hundred probably well-connected, smart people in the effective altruist community fighting for the one tiny spot and the little money that was available for them. (Source: personal conversation)
This seems to me to be evidence in favour of earning to give...
Replies from: nielbowerman, JonahSinick↑ comment by nielbowerman · 2013-06-05T13:02:29.802Z · LW(p) · GW(p)
This isn't quite right, and sorry if I had misinformed you about this, Diego. I don't have the numbers to hand (I can find them if this information becomes central to the argument), but it was almost certainly fewer than 200, and I think more like 100.
One relevant data point is that neither Giving What We Can nor 80,000 Hours hired permanent staff in that recruitment round despite wanting to, though they did hire temporary staff and interns, some of whom may take permanent roles in the future.
Disclaimer: I was involved in the recruitment round at the Centre for Effective Altruism, which includes both Giving What We Can and 80,000 Hours.
↑ comment by JonahS (JonahSinick) · 2013-05-29T01:08:15.131Z · LW(p) · GW(p)
- Applying to a job isn't the same as being willing to take it.
- Being willing to take a job isn't the same as being willing to stay at it for a long time.
- It's unclear how much the applicants could make in earnings outside of GWWC. Being smart and well connected within the effective altruist community doesn't necessarily transfer to having high earning power. So the expected donations from these people might not be so high, even if they were to try to maximize income.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T01:39:36.967Z · LW(p) · GW(p)
I don't think 1-3 combined can modify the conclusion that most of these applicants should be earning to give to support the one selected applicant, creating a prior of 200:1. The only realistic way this could be false is if the premise has been misremembered, or if people are vastly more willing to work for GWWC than to earn money and give it to GWWC (the motivational issue mentioned before).
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-29T02:05:04.964Z · LW(p) · GW(p)
But there's not a dichotomy "work at GWWC" vs. "earn to give" – the 200 people can do other work of direct social value. You seem to be making the assumption that differences in comparative advantage (ones that aren't picked up by the market mechanism, but that are nevertheless useful for having a positive social impact) are sufficiently small that one should ignore them, or the assumption that having someone work at GWWC is far more valuable than having someone work somewhere else, or some combination of these things, or another assumption that I'm not picking up on.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-29T03:22:22.923Z · LW(p) · GW(p)
or making assumption that having someone work at GWWC is far more valuable than having someone work somewhere else
Ah, right, I'm thinking in MIRIan terms where you can't go off and do comparable direct work somewhere else.
comment by NancyLebovitz · 2013-05-31T11:17:14.936Z · LW(p) · GW(p)
I recommend an adjustment for the possibility of causing harm while maximizing income. There are people in finance who did more damage than they could make up with charity.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-01T16:22:20.056Z · LW(p) · GW(p)
Somebody made the same remark on Facebook and I wrote:
As I've written elsewhere, the cost-effectiveness figures in the Washington Post article are wildly inaccurate, but if nothing else, one can give to GiveDirectly http://en.wikipedia.org/wiki/GiveDirectly. If someone in finance makes $500k/yr and donates $300k/yr to GiveDirectly, they're giving 270 African families about a year's worth of their income, every year. The harm that the finance work does would have to be really big to outweigh that.
There are people who did more damage than they could make up with charity (doing harm way out of proportion with their earnings), but I think that they're rare.
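A quick back-of-the-envelope check of the arithmetic in the quoted reply, using only the figures given there (the implied income per recipient family is derived from those figures, not independently sourced):

```python
# Rough sanity check of the figures quoted above (illustrative only).
donation = 300_000   # $/yr donated to GiveDirectly (figure from the quote)
families = 270       # families receiving ~a year's worth of income (figure from the quote)

transfer_per_family = donation / families
print(f"Implied annual income per recipient family: ${transfer_per_family:,.0f}")
# -> roughly $1,100 per family per year, the scale of income the quote implies
```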
comment by Decius · 2013-05-29T03:02:36.264Z · LW(p) · GW(p)
Isn't the most effective way to leverage large amounts of money politics? Would it be possible for the effective altruist movement to create or subvert a political party and influence enough money at the government level to be cost-effective?
Replies from: None, John_Maxwell_IV↑ comment by [deleted] · 2013-05-30T11:22:46.342Z · LW(p) · GW(p)
The EA community is beginning to look into this in a serious way - trying to go beyond simply describing political advocacy as 'high reward, high risk'. There should be quite a few blog posts coming out about this topic in the next few weeks and months.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2013-05-29T07:02:42.554Z · LW(p) · GW(p)
Lobbying seems more standard. I've been wondering if there were any "lobbyist-as-a-service" firms for a while now.
Replies from: nielbowerman, Decius↑ comment by nielbowerman · 2013-06-05T13:11:20.073Z · LW(p) · GW(p)
One of the more promising routes that I've seen working well here is people who have put themselves inside organisations with large budgets and helped decide where that money goes. For example, if you are concerned with global poverty you could become a programme manager at the World Bank and quite plausibly move $100m to more effective causes. If you cared about x-risk, an option would be to work for DARPA or IARPA and move $10m's to more effective research, and to help prioritise the research so that technologies are developed in an order that we think is less likely to cause x-risk. Another example would be to locate yourself within a large grant-making foundation.
The big downside with this approach is that the funds are usually less fungible than personal funds. How to weigh this against earning to give depends on your beliefs on the relative value of different activities that you would or wouldn't be able to fund. Through this approach you are typically able to control larger amounts of money than you would through earning to give.
comment by [deleted] · 2013-05-28T22:37:50.422Z · LW(p) · GW(p)
This is a very interesting piece, Jonah. These are all good considerations, and it could well be that for particular individuals, a career of the type you describe does do more good. However, I still think that there are (at least) two good reasons for continuing to treat Earning to Give (E2G?) as the baseline.
Firstly, it's somewhat calculable: you can work out average earnings in various fields, compare this with estimates of "how much to save a year of healthy life" (or how many research-hours you can purchase, etc.), and arrive at an estimate of how much good you can do with your career. Then the challenge is to show that you're likely to do better than that in another career.
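To make the "somewhat calculable" point concrete, here is a minimal sketch of that kind of estimate; all of the numbers (earnings, donation rate, cost per healthy year of life, career length) are hypothetical placeholders, not figures from this discussion:

```python
# Hypothetical baseline estimate for earning to give (all numbers are placeholders).
annual_earnings = 150_000      # assumed average earnings in some lucrative field ($/yr)
donation_fraction = 0.3        # assumed fraction of earnings donated
cost_per_healthy_year = 100    # assumed cost ($) to buy one year of healthy life via a top charity
career_years = 40              # assumed career length

annual_donation = annual_earnings * donation_fraction
healthy_years_per_year = annual_donation / cost_per_healthy_year
career_total = healthy_years_per_year * career_years
print(f"~{healthy_years_per_year:,.0f} healthy years of life bought per year of work")
print(f"~{career_total:,.0f} healthy years over a career")
# The challenge for another career is then to show its direct impact beats this baseline.
```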
Secondly, it's challenging. Because it's slightly counter-intuitive, it can act as an interesting and provocative prompt into a discussion of career choice. It begins to lead people away from the "ethical careers are the best" mindset, and gets them to question the good that various careers can do.
I'd just like to add that it's great to see more people thinking about and writing on career choice - it's the best way for us to make progress.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-29T00:44:14.129Z · LW(p) · GW(p)
Thanks for the kind words.
My post is mostly a response to the position "except in exceptional cases, the best way to do good is by donating as much as possible," which is different from using "earning to give" as a baseline.
I find your position reasonable, but I worry that the salience of the "earning to give" meme and ambiguity aversion may conspire to bias people in favor of earning to give, simply because it's calculable. There is an argument for restricting one's scope to activities where outputs are calculable, but it's possible to go too far in that direction.
Replies from: None↑ comment by [deleted] · 2013-05-30T11:29:39.831Z · LW(p) · GW(p)
I think that it is a very real worry, and there has perhaps been too much emphasis put on "earning to give", especially in conjunction with "to a cost-effective public health charity". (Although to an extent this emphasis has been important for movement-growing, and so is justifiable.) Thankfully 80,000 Hours (et al.) have launched research programs on other career options and other aims: animal welfare, x-risk, political advocacy, research, other non-profits, etc.
Replies from: nielbowerman↑ comment by nielbowerman · 2013-06-05T13:34:28.402Z · LW(p) · GW(p)
I find it interesting that 80,000 Hours has become so associated with earning to give in people's minds. We have always stressed that it is only one possible option, but I suppose the idea was sticky.
For example, even in Dylan Matthews's recent Washington Post article about earning to give that went viral, he says:
To be clear, neither MacAskill nor Ord nor their organizations believe that what they call “earning to give” is necessarily the best choice for all or even most people.
Not everyone is cut out to spend 80,000 hours trading derivatives. They emphasize that, say, policy work, advocacy and scientific research are other careers that could save a large number of lives. Indeed, Ord and MacAskill plan to keep up their advocacy rather than earning to give.
Yet in all of the follow up articles and discussion that this has prompted in the media, this nuance seems to have been missed.
This, in addition to Less Wrong posts such as this one, has reiterated to me that only the most memorable parts of a message are kept as memes evolve, while the more nuanced components, such as earning to give not being the only option, are lost.
Full disclosure: I work for 80,000 Hours.
comment by [deleted] · 2013-06-02T22:59:11.026Z · LW(p) · GW(p)
RE: lobbying as EA, there seems to be serious low-hanging fruit in EA outreach/advocacy:
See http://blog.againstmalaria.com/post/2013/05/30/The-impact-of-Peter-Singers-recent-TED-talk.aspx
Quick caveats: (1) Singer is a big name and is more likely to convince people because of name-recognition. (2) Singer is a much better public speaker than most EA advocates.
But working in EA advocacy need not necessarily mean going on a speaking tour. It could mean, for instance, organizing a Peter Singer speaking tour.
comment by benkuhn · 2013-05-28T21:33:57.670Z · LW(p) · GW(p)
EDIT: I see that CarlShulman brought up essentially the same points, which I somehow managed to miss while posting this. I've left up the original comment for posterity but feel free to ignore.
a worker has more capacity than a donor does to learn whether small probability failure modes prevail in practice, and can switch to a different job if he or she finds that such a failure mode prevails.
I have some questions about this.
First, at least for the small-probability failure modes that you gave as examples in your previous article, an individual worker would not be in a much better position to assess them than a donor (or at least, would be in no better position to assess them than a donor's inputs, like GiveWell or other third parties). Can you give some examples where workers would be in a better position?
Second, donors seem to be in a much better position than workers to react to failure. If you're doing something that hasn't been done before--which seems to be the place where direct work is most obviously better than earning to give--then you need to spend a lot of time figuring out things that nobody else has figured out yet. This means that the costs of switching to a different cause are quite high. On the other hand, a donor can simply change their beneficiary organization, which is much easier.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-29T01:11:39.337Z · LW(p) · GW(p)
I thought that my response to Carl addressed your second question rather than your first, and was planning to try to address your first. If you'd like more thoughts on the first question I can give them, though I think that my reasoning on this point can be inferred from a close reading of and reflection on my recent blog posts.
I'd be happy to correspond about these things: feel free to email me at jsinick (at) gmail (dot) com
comment by Vaniver · 2013-05-28T16:14:18.218Z · LW(p) · GW(p)
It may be fruitful to consider the competition angle. When someone working at a charity advises you to become a donor rather than to join them in asking for donations, they're asking you to be one of their customers, not one of their competitors.
Replies from: JonahSinick, ThisSpaceAvailable↑ comment by JonahS (JonahSinick) · 2013-05-28T16:19:44.962Z · LW(p) · GW(p)
This is partially true, but I think that it's a very small motivation in practice.
↑ comment by ThisSpaceAvailable · 2013-06-07T21:31:50.143Z · LW(p) · GW(p)
Your second sentence is quite unclear.
Replies from: Vaniver↑ comment by Vaniver · 2013-06-07T22:20:52.790Z · LW(p) · GW(p)
Suppose there are two classes: donors and doers. Doers compete with other doers for donor funds, and donors compete with other donors and non-donors to generate those funds in the first place. When a doer says "if you really want to help, become a donor, not a doer!", they're advocating for a shift that will increase the average available funds per doer. Is that clearer, or should I try again?
comment by michael_b · 2015-02-01T14:29:25.523Z · LW(p) · GW(p)
There's a bit of a false dichotomy here between "earning to give" and "altruistic career". I'll talk about one alternative, which we'll need to go macro to see. I will also implicitly complain that "earning to give" allows companies to deflect on charitable giving in a way that satisfies individual actors but may result in less charitable giving overall.
Working on Wall Street and practicing an 'earning to give' plan may not be a great way to maximize giving to high-ROI aligned charities.
I work at an HFT firm of order 1,000 employees. The firm itself makes no charitable donations. When employees ask about charity, the firm points at their large bonuses and says that if a cause is important to you, go ahead and donate your (huge) bonus to that cause. The firm occasionally invites representatives from GiveWell and similar organizations to come talk to us about pledging to donate a portion of our income to high-ROI charities. The bonuses can be big, and if you do some multiplication you really can come away with the impression that you and your coworkers are collectively contributing enormous value to charity.
Is this a fantasy, though? I can't know how everyone spends their bonuses, but projecting from intimate discussions with my peer group has led me to believe that most employees, if they do donate, donate token amounts of $500-5,000, usually to more "name brand" causes like MSF or EFF. I'm aware of one person who donates their entire bonus to charity and a handful of others who have taken the GiveWell pledge.
(Don't take this as an endorsement of GiveWell or anything, I'm simply holding them up as a symbol of the idea)
I don't mean to overstate the progressiveness of the company either. There are also the more familiar Hollywood renditions of Wall Street employees who appear to spend their bonuses on sweet apartments and fancy cars before they've even been paid.
Obviously this is anecdotal. I'm only describing a practice inside of one Wall Street firm. This may not be an accurate picture; it's frowned upon to discuss compensation with one another so it's hard to know the amounts at stake. There could be a handful of extremely high earners that secretly plow all of their wealth into high-ROI charities, which would more than make up for all of the modest earners who save their bonuses or take very nice holidays in the tropics.
(btw, I'm not making an absolute value judgment on how people spend their bonuses, only speaking about how to maximize value to charity)
Wait, why are we talking about what the entire company does when we're trying to figure out what individual actors should do? Here's why.
All in, if you can't make a more valuable altruistic career choice, it's not strictly true that earning to give on Wall Street is the only reasonable alternative. If you could find an organization whose policy is to donate, say, 10% of all profits to charity, that number may be much larger than the discretionary charitable gifts made by employees at an equivalently sized Wall Street firm. If you're concerned that the company with the 10% policy is donating to the wrong charities, you could consider joining that firm and campaigning to allocate more of the firm's giving to charities aligned with GiveWell.
comment by OneBox · 2013-06-04T15:27:28.670Z · LW(p) · GW(p)
Well thought through, good work! Though I wonder if you have any insight into what (intuition?) generates the conclusion:
For the most part, the people who have had the biggest positive impact on the world haven’t had their impact by “earning to give."
Throughout human history it's probably true, though I wonder if that is partly because 1) "earn to give" has never been practiced to any large extent (at least to my knowledge), and 2) people (including myself) tend to narrate great advances/discoveries in terms of discoverers and persons in close proximity to the event - but not so much the people in the background that nonetheless made it possible. I'd like to hear your thoughts!
comment by John_Maxwell (John_Maxwell_IV) · 2013-05-29T06:31:11.429Z · LW(p) · GW(p)
This suggests that at the margin, very high value humanitarian efforts require highly skilled and highly motivated laborers.
I don't see how this follows from what came before, although I agree it's a possibility.
comment by MaxwellFritz · 2013-05-28T21:10:57.634Z · LW(p) · GW(p)
Thanks for the article, it's a position I've been wanting to see taken.
I think you've made a good defensive case that people should stay squishy in their conviction about earning-to-give being the way to go. I'm convinced about some of the overextensions of the earning-to-give argument, and the value of making further adjustments based on considerations like counterexamples, illusory superiority bias, etc.
I'm not at all convinced, though, of the conclusion that earning to give is reasonably likely to have less impact. There seems to be a jump from "here are reasons to be cautious/skeptical of earning-to-give as optimal" to the bolded thesis. There are lots of good reasons to be cautious/skeptical of going the direct route, as well. I'd be curious to read your analysis of what you think the most compelling reasons are to be skeptical of going the direct route, and why they ultimately aren't strong enough in your mind (perhaps for another post).
Part of the reason I might feel this way is I've been earning-to-give at a trading firm, partly because I'm unsure of my value-add through more direct means. I'm fascinated by the idea of a more direct career, but I'm skeptical of the magnitude of my potential value-add, especially with my donations from trading as a baseline for comparison. I'd be interested to learn more about the types of things you think I should be considering (can provide more information in this vein as of course it will be different depending on my skillset).
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-28T21:16:45.266Z · LW(p) · GW(p)
I'm not at all convinced, though, of the conclusion that earning to give is reasonably likely to have less impact.
I agree that I didn't make a tight argument: what my post offers is a bunch of counterarguments against the "earning to give" position, together with an expression of my intuitions, supported with limited empirical data. I do have a fair amount (~80%) of confidence in my intuition, but it's difficult to explicate why.
Part of the reason I might feel this way is I've been earning-to-give at a trading firm, partly because I'm unsure of my value-add through more direct means. I'm fascinated by the idea of a more direct career, but I'm skeptical of the magnitude of my potential value-add, especially with my donations from trading as a baseline for comparison. I'd be interested to learn more about the types of things you think I should be considering (can provide more information in this vein as of course it will be different depending on my skillset).
Given that you've proven yourself at a trading firm and not in other contexts, your (expected) comparative advantage may be in earning to give.
I'd be happy to discuss these things more: you can email me at jsinick (at) gmail (dot) com.
Replies from: MaxwellFritz↑ comment by MaxwellFritz · 2013-05-28T21:27:32.651Z · LW(p) · GW(p)
I think you probably could write a similar collection of arguments against your own position with the same or greater strength. I'm surprised at the 80% number - that's pretty confident. Then again, it's a little strange to boil it down to a number like that - as you allude to, it's going to vary considerably based on who the person is, and different people likely mean different things when they talk about the population of people who should be considering earning-to-give over direct careers. I think it's clear there are people who will have more impact in each category, and we're debating where the line is and what sort of people belong in what group.
I'll shoot you an email - thanks for that.
comment by katydee · 2013-05-28T08:56:45.064Z · LW(p) · GW(p)
I like this article and would like to see you elaborate on these points in greater depth.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-05-28T16:08:36.455Z · LW(p) · GW(p)
Which points in particular?
Replies from: katydee↑ comment by katydee · 2013-05-28T18:17:40.201Z · LW(p) · GW(p)
I'd primarily be interested in seeing a more detailed examination of altruistic careers outside the nonprofit world and comparative advantage. I also would be interested in seeing someone write up an examination of the risks of "value/lifestyle drift," which strike me as closely related.
Replies from: JonahSinick, MaxwellFritz↑ comment by JonahS (JonahSinick) · 2013-05-28T21:51:19.669Z · LW(p) · GW(p)
Thanks, this will be forthcoming.
↑ comment by MaxwellFritz · 2013-05-28T22:06:45.408Z · LW(p) · GW(p)
This might be the exact same question or just the other side of the coin - but I'm primarily interested in a detailed examination of the highest-opportunity, least-saturated altruistic careers inside the nonprofit world. One article I read (I think it was MacAskill's) about the subject labeled the nonprofit world "people rich, money poor" - where is this most not the case?
I realize that truly brilliant, Steve Jobs-level people can have outsized impacts in a lot of roles, but where might the highest potential be for slightly more ordinary but still very talented folks? How about even more general areas where it really is the case that the nonprofit world is just people-poor, if they exist?
comment by EALE · 2014-04-01T14:40:18.731Z · LW(p) · GW(p)
I have a fairly strong intuition that “if you don’t fund it, somebody else will” is more true than “if you don’t do it, somebody else will” so that this counter-consideration is outweighed. It’s important to note that many projects of high social value are the first of their kind, and that finding somebody else to execute such a project is highly nontrivial. I think that it’s also relevant that 114 billionaires have signed the Giving Pledge, committing to giving 50+% of their wealth away in their lifetimes.
On the other hand, the vast majority of people who want to do good in the world try to "do it" rather than "fund it" (which is why "earning to give" is considered a novel, controversial idea), which makes me think that “if you don’t do it, somebody else will” is more true than “if you don’t fund it, somebody else will”. Convincing other people to donate as much as you would have done in an EtG career is also highly nontrivial. And I think that those Giving Pledge stats are about as relevant as the fact that the nonprofit sector employs about 10 million people in the US.
(Still very glad to see this post on LW though, lest any of us should forget that Will's article, and probably many of the articles discussing EtG, were written for audiences quite different from LW! I especially liked the sections on Discrepancy in Earnings and Replaceability.)
comment by RandomThinker · 2013-06-06T04:53:28.002Z · LW(p) · GW(p)
One thing those articles don't consider is whether your career causes large negative externalities in the world, which banking arguably does (depending on what exactly you do, and on your political views).
If so, then you'd need to give away even more than you earned just to undo the harm from your career.
Replies from: JonahSinick↑ comment by JonahS (JonahSinick) · 2013-06-06T05:09:18.483Z · LW(p) · GW(p)
See my response to Nancy Lebovitz on this point.
comment by ThisSpaceAvailable · 2013-06-05T00:13:33.132Z · LW(p) · GW(p)
I don't know whether this is too obvious to be worth pointing out, but money is just a token for goods of inherent value. If you give money to a charity, you aren't directly increasing net global wealth; you are just moving resources from one area to another. This might, however, end up increasing global net wealth if the resources are more productive in this new area. So, anyone who gives to charity is implicitly asserting that they are able to identify areas in which resources are more productive than where the economy would otherwise employ them. If you can't identify such areas, the world would be just as well off if you were to simply burn your money rather than give it to charity. So, suppose you think that you can increase the productivity of resources by directing their allocation rather than letting other people direct it. I'll label the quantification of this effect the WAP (for Wise Allocation Premium):
- WAP = (value of your allocation - value of default allocation) / value of default allocation
It also needs to be noted that jobs have social value apart from what one does with one's salary. So, if you work as a heart surgeon and you save a bunch of lives, then you've contributed to society even if you don't give your money to charity. There are different ways of accounting for this. One is to estimate the difference between the social value of one's job and one's salary, and then add in the social value of what one does with one's salary, minus the resources consumed through one's expenditures. So:
- Total social value = (social value of job - salary) + (social value of expenditures - opportunity cost of expenditures)
However, this is a needlessly complicated way of expressing it that obscures the true nature of the calculation. The equation can be simplified quite a bit to just this:
- Total social value = social value of career path + salary × WAP
If your WAP is low, you should focus on doing something that benefits society, and not give much weight to how much you're making. If your WAP is zero, then your salary is completely irrelevant altruistically, except insofar as it reflects social value. As your WAP increases, you should put more and more emphasis on making money, rather than doing work that matters. If your WAP is really high, then the social value of your career path is largely irrelevant. For instance, if you think that your WAP is 20, then (at least from a utilitarian perspective) it would make sense to break into people's houses, steal their stuff, sell it, and then give the money to charity.
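A minimal sketch of this accounting in Python (the function name, variable names, and all figures below are illustrative assumptions, not from the comment):

```python
def total_social_value(career_path_value, salary, wap):
    """The commenter's simplified formula: total = social value of career path + salary * WAP.

    career_path_value stands in for (social value of job - salary); salary * WAP is
    (social value of expenditures - their opportunity cost), since donating a salary S
    at a Wise Allocation Premium of WAP is worth roughly S * (1 + WAP) against a cost of S.
    """
    return career_path_value + salary * wap

# Illustrative comparison (all figures invented):
# a job producing $200k/yr of value beyond one's salary, with donations at WAP = 0,
# versus a job producing no surplus value but a $300k salary donated at WAP = 2.
print(total_social_value(career_path_value=200_000, salary=300_000, wap=0.0))  # 200000.0
print(total_social_value(career_path_value=0, salary=300_000, wap=2.0))        # 600000.0
```

On this framing, the higher one's WAP, the more the donation term dominates the career term, which is the comment's point about salary mattering more as WAP grows.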
It should also be noted that while one can allocate resources altruistically by making a large salary and then donating that money, one can also allocate resources through one's conduct in one's job. For instance, if you own a monopoly, you can set your price according to what maximizes your profit and then give that money to charity, or you can just sell your good at cost. If the deadweight loss, as a percentage of your profit, is greater than your WAP, then the latter course makes more sense, as sketched below. If you own a construction company, you can choose projects that have the greatest social value. Etc.
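A rough sketch of that monopoly comparison, under the assumption that the "deadweight percentage" means deadweight loss divided by monopoly profit (the numbers are made up):

```python
def monopoly_vs_at_cost(profit, deadweight_loss, wap):
    """Net change in total social value from pricing as a monopolist and donating the
    profit, relative to simply selling at cost.

    Donating profit P at premium WAP is worth P * (1 + WAP); monopoly pricing also
    costs consumers P (a transfer you then donate) plus the deadweight loss D.
    Net = P * (1 + WAP) - (P + D) = P * WAP - D, so donating the profit wins exactly
    when WAP exceeds the deadweight percentage D / P.
    """
    return profit * wap - deadweight_loss

# Illustrative figures only: $1m monopoly profit, $300k deadweight loss (30%).
print(monopoly_vs_at_cost(1_000_000, 300_000, wap=0.2))  # -100000.0 -> sell at cost
print(monopoly_vs_at_cost(1_000_000, 300_000, wap=0.5))  #  200000.0 -> donate the profit
```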
Replies from: Vaniver↑ comment by Vaniver · 2013-06-05T00:42:46.272Z · LW(p) · GW(p)
I don't know whether this is too obvious to be worth pointing out, but money is just token for goods of inherent value. If you give money to a charity, you aren't directly increasing net global wealth, you are just moving resources from one area to another.
Inherent value? Most models see value as subjective, which allows for gains from trade. Charity, in particular, can increase wealth if the donor and recipient are both more satisfied afterwards, as swapping an apple and an orange can increase wealth if the traders are both more satisfied afterwards.
It also needs to be noted that jobs have social value apart from what one does with one's salary. So, if you work as a heart surgeon and you save a bunch of lives, then you've contributed to society even if you don't give your money to charity.
Part of the argument for earning to give is that high-compensation fields are generally restricted-entry; only so many people work on Wall Street, or as doctors, and so on, so the more altruists seek to enter those fields, the more of the resources those fields control will be directed toward altruistic purposes. This seems very dependent on the field: the number of working doctors will not be altered by a marginal student deciding to apply to medical school, but the number of FAI philosophers may be altered by a marginal student deciding whether or not to work as one.
Replies from: ThisSpaceAvailable↑ comment by ThisSpaceAvailable · 2013-06-07T00:56:29.823Z · LW(p) · GW(p)
Most models see value as subjective, which allows for gains from trade.
I'm not clear on what meaning you're giving to "subjective" such that it allows for gains from trade. Whatever label we give it, there is clearly a difference between money and, say, gasoline. If I burn $20 worth of gasoline, there is now $20 less wealth in the world. If I burn a $20 bill, the net global change in wealth is negligible.
Replies from: Vaniver↑ comment by Vaniver · 2013-06-07T02:09:52.288Z · LW(p) · GW(p)
I'm not clear on what meaning you're giving to "subjective" such that it allows for gains from trade.
Suppose Bob has five gallons of gasoline. The "value" of those gallons is a two-place word; I might want those gallons so I can fuel my car and visit friends, and Bob might want them for their resale value. If I want the gallons more than Bob does, it makes sense for me to give him money and for him to give me the gasoline. The "price" of those gallons is the same for each of us, but the "value" is higher than the price for me, and lower than the price for him (or else the trade would not occur voluntarily).
Thus, if the transfer of tokens can lead to increased global wealth in the context of voluntary exchange, it is also possible for the transfer of tokens to lead to increased global wealth in the context of charitable donations. The same safeguards are not in place, and so one might argue that it is less likely; but to argue that moving resources from one area to another cannot increase global wealth, as I understood you to be doing in the great-grandparent, is to argue that trade doesn't increase wealth, which contradicts a basic result in economics.