Posts

A Critique of Functional Decision Theory 2019-09-13T19:23:22.532Z
Meta Decision Theory and Newcomb's Problem 2013-03-05T01:29:40.786Z
Responses to questions on donating to 80k, GWWC, EAA and LYCS 2012-11-20T22:41:57.686Z
Giving What We Can, 80,000 Hours, and Meta-Charity 2012-11-15T20:34:54.680Z

Comments

Comment by wdmacaskill on Donating to MIRI vs. FHI vs. CEA vs. CFAR · 2013-12-28T12:07:57.342Z · LW · GW

Argh! Original post didn't go through (probably my fault), so this will be shorter than it should be:

First point:

I know very little about CEA, and a brief check of their website leaves me a little unclear on why Luke recommends them, aside from the fact that they apparently work closely with FHI.

CEA = Giving What We Can, 80,000 Hours, and a bit of other stuff

Reason -> donations to CEA predictably increase the size and strength of the EA community, a good proportion of whom take long-run considerations very seriously and will donate to / work for FHI/MIRI, or otherwise pursue careers with the aim of extinction risk mitigation. It's plausible that $1 to CEA generates significantly more than $1's worth of x-risk-value [note: I'm a trustee and founder of CEA].

Second point:

Don't forget CSER. My view is that they are even higher-impact than MIRI or FHI (though I'd defer to Sean_o_h if he disagreed). Reason: marginal donations will be used to fund program management + grantwriting, which would turn ~$70k into a significant chance of ~$1-$10mn, and launch what I think might become one of the most important research institutions in the world. They have all the background (high profile people on the board; an already written previous grant proposal that very narrowly missed out on being successful). High leverage!

Comment by wdmacaskill on Donating to MIRI vs. FHI vs. CEA vs. CFAR · 2013-12-27T20:43:36.705Z · LW · GW

CEA and CFAR don't do anything, to my knowledge, that would increase these odds, except in exceedingly indirect ways.

People from CEA, in collaboration with FHI, have been meeting with people in the UK government, and are producing policy briefs on unprecedented risks from new technologies, including AI (the first brief will go on the FHI website in the near future). These meetings arose as a result of GWWC media attention. CEA's most recent hire, Owen Cotton-Barratt, will be helping with this work.

Comment by wdmacaskill on 'Effective Altruism' as utilitarian equivocation. · 2013-11-26T15:40:08.427Z · LW · GW

your account of effective altruism seems rather different from Will's: "Maybe you want to do other things effectively, but then it's not effective altruism". This sort of mixed messaging is exactly what I was objecting to.

I think you've revised the post since you initially wrote it? If so, you might want to highlight that in the italics at the start, as otherwise it makes some of the comments look weirdly off-base. In particular, I took the initial post to aim at the conclusion:

  1. EA is utilitarianism in disguise, which I think is demonstrably false.

But now the post reads more like the main conclusion is:

  1. EA is vague on a crucial issue: whether the effective pursuit of non-welfarist goods counts as effective altruism.

This is a much more reasonable thing to say.

Comment by wdmacaskill on 'Effective Altruism' as utilitarian equivocation. · 2013-11-26T15:32:05.395Z · LW · GW

I think the simple answer is that "effective altruism" is a vague term. I gave you what I thought was the best way of making it precise. Weeatquince and Luke Muehlhauser wanted to make it precise in a different way. We could have a debate about which is the more useful precisification, but I don't think that here is the right place for that.

On either way of making the term precise, though, EA is clearly not trying to be the whole of morality, or to give any one very specific conception of morality. It doesn't make a claim about side-constraints; it doesn't make a claim about whether doing good is supererogatory or obligatory; it doesn't make a claim about the nature of welfare. EA is a broad tent, and deliberately so: very many different ethical perspectives will agree, for example, that it's important to find out which charities do the most to improve the welfare of those living in extreme poverty (as measured by QALYs etc), and then to encourage people to give to those charities. If so, then we've got an important activity that people of very many different ethical backgrounds can get behind - which is great!

Comment by wdmacaskill on 'Effective Altruism' as utilitarian equivocation. · 2013-11-25T22:18:36.675Z · LW · GW

Hi,

Thanks for this post. The relationship between EA and well-known moral theories is something I've wanted to blog about in the past.

So here are a few points:

1. EA does not equal utilitarianism.

Utilitarianism makes many claims that EA does not make:

EA does not take a stand on whether it's obligatory or merely supererogatory to spend one's resources helping others; utilitarianism claims that it is obligatory.

EA does not make a claim about whether there are side-constraints - certain things that it is impermissible to do, even if it were for the greater good. Utilitarianism claims that it's always obligatory to act for the greater good.

EA does not claim that there are no other things besides welfare that are of value; utilitarianism does claim this.

EA does not make a precise claim about what promoting welfare consists in (for example, whether it's more important to give one unit of welfare to someone who is worse-off than someone who is better-off; or whether hedonistic, preference-satisfactionist or objective list theories of wellbeing are correct); any specific form of utilitarianism does make a precise claim about this.

Also, note that some eminent EAs are not even consequentialist leaning, let alone utilitarian: e.g. Thomas Pogge (political philosopher) and Andreas Mogensen (Assistant Director of Giving What We Can) explicitly endorse a rights-based theory of morality; Alex Foster (epic London EtG-er) and Catriona MacKay (head of the GWWC London chapter) are both Christian (and presumably not consequentialist, though I haven't asked).

2. Rather, EA is something that almost every plausible moral theory is in favour of.

Almost every plausible moral theory thinks that promoting the welfare of others in an effective way is a good thing to do. Some moral theories hold that promoting the welfare of others is merely supererogatory, and others think that there are other values at stake. But EA is explicitly pro promoting welfare; it's not anti other things, and it doesn't claim that we're obligated to be altruistic, merely that it's a good thing to do.

3. Is EA explicitly welfarist?

The term 'altruism' suggests that it is. And I think that's fine. Helping others is what EAs do. Maybe you want to do other things effectively, but then it's not effective altruism - it's "effective justice", "effective environmental preservation", or something. Note, though, that you may well think that there are non-welfarist values - indeed, I would think that you would be mistaken not to act as if there were, on moral uncertainty grounds alone - but still be part of the effective altruism movement because you think that, in practice, welfare improvement is the most important thing to focus on.

So, to answer your dilemma:

EA is not trying to be the whole of morality.

It might be the whole of morality, if being EA is the only thing that is required of one. But it's not part of the EA package that EA is the whole of morality. Rather, it represents one aspect of morality - an aspect that is very important for those living in affluent countries, and who have tremendous power to help others. The idea that we in rich countries should be trying to work out how to help others as effectively as possible, and then actually going ahead and doing it, is an important part of almost every plausible moral theory.

Comment by wdmacaskill on Robustness of Cost-Effectiveness Estimates and Philanthropy · 2013-05-25T11:01:11.713Z · LW · GW

I explicitly address this in the second paragraph of the "The history of GiveWell’s estimates for lives saved per dollar" section of my post as well as the "Donating to AMF has benefits beyond saving lives" section of my post.

Not really. You do mention the flow-on benefits. But you don't analyse whether your estimate of "good done per dollar" has increased or decreased. And that's the relevant thing to analyse. If you argued "cost per life saved has had greater regression to your prior than you'd expected; and for that reason I expect my estimates of good done per dollar to regress really substantially" (an argument I think you would endorse), I'd accept that argument, though I'd worry about how much it generalises to cause-areas other than global poverty. (e.g. I expect there to be much less of an 'efficient market' for activities where there are fewer agents with the same goals/values, like benefiting non-human animals, or making sure the far future turns out well). Optimism bias still holds, of course.

You say that "cost-effectiveness estimates skew so negatively." I was just pointing out that for me that hasn't been the case (for good done per $), because long-run benefits strike me as swamping short-term benefits, a factor that I didn't initially incorporate into my model of doing good. And, though I agree with the conclusion that you want as many different angles as possible (etc), focusing on cost per life saved rather than good done per dollar might lead you to miss important lessons (e.g. "make sure that you've identified all crucial normative and empirical considerations"). I doubt that you personally have missed those lessons. But they aren't in your post. And that's fine, of course, you can't cover everything in one blog post. But it's important for the reader not to overgeneralise.

I agree with this. I don't think that my post suggests otherwise.

I wasn't suggesting it does.

Comment by wdmacaskill on Robustness of Cost-Effectiveness Estimates and Philanthropy · 2013-05-24T17:11:57.977Z · LW · GW

Good post, Jonah. You say that: "effective altruists should spend much more time on qualitative analysis than on quantitative analysis in determining how they can maximize their positive social impact". What do you mean by "qualitative analysis"? As I understand it, your points are: i) The amount by which you should regress to your prior is much greater than you had previously thought, so ii) you should favour robustness of evidence more than you had previously. But that doesn't favour qualitative vs non-qualitative evidence. It favours more robust evidence of lower but good cost-effectiveness over less robust evidence of higher cost-effectiveness. The nature of the evidence could be either qualitative or quantitative, and the things you mention in "implications" are generally quantitative.

In terms of "good done per dollar" - for me that figure is still far greater than I began with (and I take it that that's the question that EAs are concerned with, rather than "lives saved per dollar"). This is because, in my initial analysis - and in what I'd presume are most people's initial analyses - benefits to the long-term future weren't taken into account, or weren't thought to be morally relevant. But those (expected) benefits strike me, and strike most people I've spoken with who agree with the moral relevance of them, to be far greater than the short-term benefits to the person whose life is saved. So, in terms of my expectations about how much good I can do in the world, I'm able to exceed those by a far greater amount than I'd previously thought likely. And that holds true whether it costs $2000 or $20000 to save a life. I'm not mentioning that either to criticise or support your post, but just to highlight that the lesson to take from past updates on evidence can look quite different depending on whether you're talking about "good done per dollar" or "lives saved per dollar", and the former is what we ultimately care about.

Final point: Something you don't mention is that, when you find out that your evidence is crappier than you'd thought, two general lessons are to pursue things with high option value and to pay to gain new evidence (though I acknowledge that this depends crucially on how much new evidence you think you'll be able to get). Building a movement of people who are aiming to do the most good with their marginal resources, and who are trying to work out how best to do that, strikes me as a good way to achieve both of these things.

Comment by wdmacaskill on Meta Decision Theory and Newcomb's Problem · 2013-03-05T15:37:51.647Z · LW · GW

Thanks for mentioning this - I discuss Nozick's view in my paper, so I'm going to edit my comment to mention this. A few differences:

As crazy88 says, Nozick doesn't think that the issue is a normative uncertainty issue - his proposal is another first-order decision theory, like CDT and EDT. I argue against that account in my paper. Second, and more importantly, Nozick just says "hey, our intuitions in Newcomb-cases are stakes-sensitive" and moves on. He doesn't argue, as I do, that we can explain the problematic cases in the literature by appeal to decision-theoretic uncertainty. Nor does he use decision-theoretic uncertainty to respond to arguments in favour of EDT. Nor does he respond to regress worries, and so on.

Comment by wdmacaskill on Meta Decision Theory and Newcomb's Problem · 2013-03-05T15:33:44.433Z · LW · GW

Don't worry, that's not an uncomfortable question. UDT and MDT are quite different. UDT is a first-order decision theory. MDT is a way of extending decision theories - so that you take into account uncertainty about which decision theory to use. (So, one can have meta causal decision theory, meta evidential decision theory, and (probably, though I haven't worked through it) meta updateless decision theory.)

UDT, as I understand it (and note I'm not at all fluent in UDT or TDT) always one-boxes; whereas if you take decision-theoretic uncertainty into account you should sometimes one-box and sometimes two-box, depending on the relative value of the contents of the two boxes. Also, UDT gets what most decision-theorists consider the wrong answer in the smoking lesion case, whereas the account I defend, meta causal decision theory, doesn't (or, at least, doesn't, depending on one's credences in first-order decision theories).

To illustrate, consider the case:

High-Stakes Predictor II (HSP-II)

Box C is opaque; Box D, transparent. If the Predictor predicts that you choose Box C only, then he puts one wish into Box C, and also a stick of gum. With that wish, you save the lives of 1 million terminally ill children. If he predicts that you choose both Box C and Box D, then he puts nothing into Box C. Box D — transparent to you — contains an identical wish, also with the power to save the lives of 1 million children, so if one had both wishes one would save 2 million children in total. However, Box D contains no gum. One has two options only: choose Box C only, or both Box C and Box D.

In this case, intuitively, should you one-box, or two-box? My view is clear: if someone one-boxes in the above case, they have made the wrong decision. And it seems to me that this is best explained by appeal to decision-theoretic uncertainty.
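
To make that concrete, here is a minimal sketch of the expected-value calculation under decision-theoretic uncertainty in HSP-II; the credences, the predictor's reliability and the value assigned to the gum are purely illustrative assumptions of mine, not figures from the paper:

```python
# Purely illustrative sketch of taking an expectation over decision theories in HSP-II.
# Outcomes are measured in lives saved; the credences and the predictor's prior are assumptions.
EPS = 1e-9      # the stick of gum, worth almost nothing next to a life
p_full = 0.5    # CDT's prior that the Predictor has filled Box C (any value gives the same verdict)

# Evidential decision theory: your choice is evidence about the prediction.
edt_one_box = 1_000_000 + EPS   # predicted to one-box, so Box C is full (plus gum)
edt_two_box = 1_000_000         # predicted to two-box, so only Box D pays out

# Causal decision theory: Box C's contents are already fixed, so two-boxing dominates.
cdt_one_box = p_full * (1_000_000 + EPS)
cdt_two_box = p_full * (2_000_000 + EPS) + (1 - p_full) * 1_000_000

for credence_cdt in (0.5, 0.1, 0.01):
    credence_edt = 1 - credence_cdt
    advantage_of_two_boxing = (credence_edt * (edt_two_box - edt_one_box)
                               + credence_cdt * (cdt_two_box - cdt_one_box))
    print(credence_cdt, advantage_of_two_boxing)
# Positive in every case: EDT's case for one-boxing is worth only a stick of gum, while
# CDT's case for two-boxing is worth a million lives, so even a small credence in CDT
# makes two-boxing the better bet under decision-theoretic uncertainty.
```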

Other questions: Bostrom's parliamentary model is different. Between EDT and CDT, the intertheoretic comparisons of value are easy, so there's no need to use the parliamentary analogy - one can just straightforwardly take an expectation over decision theories.

Pascal's Mugging (aka the "Fanaticism" worry). This is a general issue for attempts to take normative uncertainty into account in one's decision-making, and not something I discuss in my paper. But if you're concerned about Pascal's mugging and, say, think that a bounded Decision Theory is the best way to respond to the problem - then at the meta level you should also have a bounded decision theory (and at the meta meta level, and so on).

Comment by wdmacaskill on CEA does not seem to be credibly high impact · 2013-03-04T01:38:23.073Z · LW · GW

(part 3; final part)

Second: The GWWC Pledge. You say:

“The GWWC site, for example, claims that from 291 members there will be £72.68M pledged. This equates to £250K / person over the course of their life. Claiming that this level of pledging will occur requires either unreasonable rates of donation or multi-decade payment schedules. If, in line with GWWC's projections, around 50% of people will maintain their donations, then assuming a linear drop off the expected pledge from a full time member is around £375K. Over a lifetime, this is essentially £10K / year. It seems implausible that expected mean annual earnings for GWWC members is of order £100K.”

Again, there are quite a few mistakes:

First, in comments you twice say that “£112.8M” has been pledged rather than “$112.8M”. I know that’s just a typo but it’s an important one.

Second, you say that the GWWC site claims that, "there will be £72.68M pledged" (future tense). It doesn't, it says, "$112.8mn pledged" (past tense). It's a pretty important difference – the pledging is something that has happened, not something that will happen. This might partly explain the confusion discussed in point 4, below.

Third, and more substantively, you don't consider the idea, raised in other comments, that some donors might be donating considerably more than 10%, or that some donors might be donating considerably more than the mean. Both are true of GWWC pledgers.

Fourth, you seem to wilfully misunderstand the verb ‘to pledge’. I regularly make the following statement: “I have pledged to give everything I earn above £20 000 p.a. [PPP and inflation-adjusted to Oxford 2009]”. Am I lying when I say that? Using synonyms, I could have said “I promise to give…”, “I commit to give…” or “I sincerely intend to give…”. None of these entail “I am certain that I will donate everything above £20 000 p.a.”. Using my belief that I will earn on average over £42 000 p.a. [PPP and inflation-adjusted to Oxford 2009] over the course of my life, and that I will work until I’m 68, I can infer that I’ve pledged to give over £1 000 000 over the course of my life, which is also something I say. Am I lying when I say that? (Also note that if only 73 people made the same pledge as me, then we would have jointly pledged the current GWWC amount).
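
As a quick check of that arithmetic (the number of working years here is my own assumption; only the retirement age of 68 is given above):

```python
# Rough check of the pledge arithmetic above. Only "until I'm 68" is given,
# so the number of working years (roughly age 23 to 68) is my assumption.
average_earnings = 42_000   # pounds p.a., PPP- and inflation-adjusted (stated as "over 42,000")
pledge_baseline = 20_000    # pounds p.a.; everything above this is pledged
working_years = 45

lifetime_pledge = (average_earnings - pledge_baseline) * working_years
print(lifetime_pledge)  # 990000 -- and since earnings are "over" 42,000, the total pledged is over 1,000,000
```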

Fifth, I don’t know why you took us to use the $100mn pledged figure as an estimate of our impact. In fact you had evidence to the contrary. In a blog post that you cite I said: “As of last March, we’d invested $170 000’s worth of volunteer time into Giving What We Can, and had moved $1.7 million to GiveWell or GWWC top-recommended development charities, and raised a further $68 million in pledged donations. Taking into account the facts that some proportion of this would have been given anyway, there will be some member attrition, and not all donations will go to the very best charities (and using data for all these factors when possible), we estimate that we had raised $8 in realised donations and $130 in future donations for every $1’s worth of volunteer time invested in Giving What We Can.” (emphasis added).

Finally, I think that the GWWC pledge is misleading only if it's taken to be a measure of our impact. But we don't advertise it as that. We could try to make it some other number. We could adjust the number downwards, in order to take into account: how much would have been given anyway; member attrition; a discount rate. Or we could adjust the number upwards, in order to take into account: overgiving; real growth of salaries, and inflation. It could also be adjusted downward to take into account that not all donations are to GW or GWWC recommended charities, or (perhaps) upwards to take into account the idea that we will have better evidence about the best giving opportunities in a few years' time, and thereby be able to donate to charities better than AMF, SCI or DtW. But any number we gave based on these adjustments would be more misleading and arbitrary than the literal amount pledged. It would also be more confusing for the large majority of our website viewers who haven't thought about things like counterfactual giving or whether the discount rate should be positive or negative over the next few years; they're used to the social norm, which is to advertise pledges as stated. Until you, no one who does understand issues such as counterfactual giving and discount rates has read the amount-pledged figure as an impact assessment.

In comments there was some uncertainty about how we come up with the total pledged figure. What we do is as follows. Each member, when they return their pledge form, states a) what percentage they commit to (or, if taking the Further Pledge, the baseline income above which they give everything); b) their birthdate; c) their expected average earnings per annum. Assuming a (conservative) standard retirement age, that allows us to calculate their expected donations. In some cases, members understandably don’t want to reveal their expected earnings. What we used to do, in such cases, is to use the mean earnings of all the other members who have given their incomes. However, when, recently, one member joined with very large expected earnings (pursuing earning to give), we raised the question whether this method suffers from sample bias, because people who expect to earn a lot will be more likely to report. I’m not sure that’s true: I could imagine that people who earn more often don’t want to flaunt that fact. However, wanting to be conservative, we decided instead to use the mean earnings of the country in which the member works.
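
A minimal sketch of that per-member calculation (the function shape, field names and the retirement age of 65 are my illustrative assumptions, not the actual spreadsheet):

```python
# Sketch of the per-member pledge calculation described above; names and the
# retirement age are illustrative assumptions.
RETIREMENT_AGE = 65  # "a (conservative) standard retirement age"

def expected_pledge(age, expected_earnings, percentage=None, further_pledge_baseline=None):
    """Expected lifetime donations for one member, from their pledge form."""
    years_remaining = max(RETIREMENT_AGE - age, 0)
    if further_pledge_baseline is not None:   # Further Pledge: everything above a baseline
        annual = max(expected_earnings - further_pledge_baseline, 0)
    else:                                     # standard pledge: a stated percentage of earnings
        annual = expected_earnings * percentage
    return annual * years_remaining

# Total pledged is the sum over members; members who don't report earnings are now
# assigned the mean earnings of the country in which they work.
total_pledged = (expected_pledge(age=25, expected_earnings=30_000, percentage=0.10)
                 + expected_pledge(age=24, expected_earnings=42_000, further_pledge_baseline=20_000))
print(total_pledged)
```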

Bottom Line for Readers

If you're interested in the question of whether 80,000 Hours and Giving What We Can have acted optimally or will act optimally in the future, the answer is simple: certainly not. We inevitably do some things worse than we could have done, and we value your input on concrete suggestions about how our organisations can improve.

If you’re interested in the question of whether $1 invested in 80,000 Hours or Giving What We Can produces more than $1’s worth of value for the best causes, read here, here, here and here and, most of all, contact me for the calculations and, if you’d like, our latest business plan, at will dot crouch at 80000hours.org. So far, I haven’t seen any convincing arguments to the conclusion that we fail to have a ROI greater than 1; however, it’s something I’d love additional input on, as the outside view makes me wary about believing that I work for the best charity I know of.

Comment by wdmacaskill on CEA does not seem to be credibly high impact · 2013-03-04T01:38:11.365Z · LW · GW

(part 2)

The most important mistakes in the post

Bizarre Failures to Acquire Relevant Evidence

As lukeprog noted, you did not run this post by anyone within CEA who had sufficient knowledge to correct you on some of the matters given above. Lukeprog describes this as 'common courtesy'. But, more than that, it's a violation of a good epistemic principle that one should gain easily accessible relevant information before making a point publicly.

The most egregious violation of this principle is that, though you say you focus on the idea that donating to CEA has a ROI greater than 1, and though you repeatedly ask for a ‘calculation’ of impact and claim that CEA is not credible for not being able to provide such a calculation, you haven’t contacted me for the calculation of GWWC’s impact per dollar invested. This isn’t something I’ve been shy about — in a blog post that you link to (as well as elsewhere) I prominently describe this calculated impact-assessment, and invite people to contact me if they want the spreadsheet with the calculation. Insofar as this was the cornerstone of your concern, it’s odd that you didn’t contact me for the spreadsheet. Comments on that impact-assessment would have been helpful, but as far as I’m aware you haven’t read it.

Another example is where you suggest that little thought went into the change of the 80,000 Hours' declaration of intent. Again, this is information that would have been easily accessible via a quick email to me or Ben Todd. As it happens, the declaration has gone through several iterations; there has been discussion on the core 80,000 Hours' lists; Ben, myself and others have independently written proposals; and we commissioned one of our best interns to research the topic as part of our general marketing strategy. We concluded that having a lower initial barrier to entry was wise, because it would increase the total number of members, allow us to be more mainstream, and increase the total (though not the proportion) of members who make significant changes to their careers and thereby make the world a significantly better place. (We are also currently discussing whether to introduce a further pledge along the lines of "I intend to dedicate my life to whatever does the most good.") It wouldn't be an exaggeration to say that several person-weeks of thought and research have gone into the pledges.

A further example is where you guess the number of researchers we have. Again, you could have e-mailed for this information, rather than trying to guess on the basis of the names listed on the website. For this reason, you substantially overestimated how many person-hours we command. Across CEA, over the last six months we have had the equivalent of 3.7 full-time staff. The first 2.6 of these started in July last year, another joined in late September and another in January. GWWC currently has the equivalent of two full-time staff; 80,000 Hours has the equivalent of two and a half full-time staff. For this reason (and perhaps also the planning fallacy), I think you severely overestimate the amount of research we could reasonably expect to deliver in that time.

Another example is where you quote the number of people we have on our mailing lists. This is a good example, because it’s one where I spoke incorrectly in Cambridge. I said that one third of Oxford students were on our mailing list; what I should have said was that about 20% of students coming through fresher’s fair were on our mailing list. It’s precisely errors like these — easy to make in the context of an impromptu group discussion — that show the value of making sure that one’s evidence is reliable.

A further example is where you say “it has been stated that GWWC has an internal price of around £1700 for new pledges” and then, in your response to my query about where this number came from, said that it came from Jacob Trefethen — a volunteer at a chapter, and not currently involved with core GWWC and 80k activities. Again, this is not the sort of evidence on which it’s rational to base a critique — when the option of simply asking me or someone else who works on strategy within CEA was merely an email away.

Another example was: “a large fraction of the people involved with 80,000 hours or GWWC behave like dilettantes”… “Nor do they seem to act as if they wish to seriously optimise the world.” But, as far as I know, you know only one person who works at CEA, Adam Casey, who is an unpaid intern, and you have about one hour’s worth of contact with me. I doubt that, if you knew us personally, and not through material written for an audience encountering the ideas of effective altruism for the first time, you would doubt our intention and commitment to "seriously optimise the world" as you put it. Seeing as this is LessWrong, I'll quote Eliezer Yudkowsky (stated in an independent internet conversation on Ycombinator). In response to the question, “What application of $4B would, right now, generate the most utility for humanity?” he replied: “If you know the word "utility", the people who actually seriously try to figure out the answer to that question live at:

Embarrassingly Poor Arguments

First: You ask: "For example, the world bank throws ~$43B/year around. Which is easier: To upscale GWWC by a factor of ~17000, or double the mean effectiveness of the World Bank? This should not be a hypothetical question; it should be answered."

There are a few mistakes here:

First, your comment suggests that you know that we haven’t thought about this. But that’s misleading, because you haven’t ever asked us if we’ve thought about it.

Second, I have no idea where your numbers come from. After searching (inc. here) I still don't know where the $43bn number comes from. And, after trying to figure it out, I also don't know where your "17 000" figure comes from. GWWC has so far moved $2.5 million and raised $100mn in pledges. Even discounting the literal pledges by 99% and valuing them at $1mn (which would be far too steep in my view), the appropriate figure would be 12 300. So, whatever the basis, 17 000 seems too high.
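
For transparency, here is the arithmetic behind the 12 300 figure (a rough sketch of how the comparison reads to me):

```python
# Rough reconstruction of the ~12,300 figure.
world_bank_annual = 43_000_000_000   # the post's ~$43bn/year figure
money_moved = 2_500_000              # donations GWWC has actually moved so far
pledges = 100_000_000                # amount pledged so far
discounted_pledges = 0.01 * pledges  # discounting the pledges by 99% (far too steep, in my view)

scale_factor = world_bank_annual / (money_moved + discounted_pledges)
print(round(scale_factor))           # ~12,300 -- so 17,000 looks too high on any reading
```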

Third, even neglecting the above points, your figure would only be correct if the cost-effectiveness of the World Bank's spending were the same as the cost-effectiveness of GWWC top-recommended charities. But we think, and presumably you agree, that the cost-effectiveness of GWWC's top-recommended charities is significantly better than the World Bank's mean cost-effectiveness. Aside from anything else, there's a major difference between donations and loans.

Fourth, if you want to maximize impact, yours is not the correct question to ask. If it will get progressively harder to grow GWWC, and if one thinks that the likelihood of achieving either outcome is very low (both reasonable assumptions), then it could be true that (i) it is easier to double the mean effectiveness of the World Bank than to increase GWWC's size by a factor of 17000 and (ii) that one ought to use one's marginal time and resources to grow GWWC. The reason these could both be true is that the marginal benefits from growing GWWC are greater than the marginal benefits of trying to double the effectiveness of the World Bank. Given this, it's unclear why this question "should be answered".

Fifth, the question implicitly neglects the fact that growing GWWC has substantial knock-on benefits, including increasing the ability of some GWWC members to influence major international organisations like the World Bank (see the background on Toby's activities, above).

In general: i) Starting with something smaller and easier to achieve has instrumental cumulative benefits and option value in a way that staking everything on one big goal does not. ii) Directly doubling the effectiveness of the World Bank – and other similar projects – is not the comparative advantage of existing EAs in Oxford. Given our success generating and mobilising talented altruists, I think the team here will have greater success taking an indirect route than by attempting to do it directly ourselves. We can use e.g. 80,000 Hours to identify precisely those who have or could develop the requisite skills, credentials and values, and provide them the encouragement, information and practical assistance required to get into positions of major influence over aid effectiveness. Finding and convincing someone to pursue this career is much easier than dedicating your entire life to it yourself, which is what led us to set up 80,000 Hours in the first place.

That’s not to say we aren’t open to the idea. It’s one of my main concerns about my current activities. But it’s misleading to suggest that you have good evidence to believe that we haven’t considered it.

Comment by wdmacaskill on CEA does not seem to be credibly high impact · 2013-03-04T01:36:51.311Z · LW · GW

(part 1)

Summary

Thanks once again, Jonathan, for taking the time to write publicly about CEA, and to make some suggestions about ways in which CEA might be falling short. In what follows I'll write a candid response to your post, which I hope you'll take as a sign of respect — this is LW and I know that honesty in this community is valued far more than sugarcoating. Ultimately, we're all aiming here to proportion our beliefs to our evidence, and beating around the bush doesn't help with that aim.

In your post you raise some important issues — often issues that those within CEA have also been thinking about. In general, however, the methodology by which you researched and wrote your post was poor. For this reason, there are crucial factual errors in your post that could easily have been avoided, and errors of argumentation that border on embarrassing. This is unfortunate. Powerful criticism of CEA’s activities is extremely important to us: in fact, in the absence of more direct forms of feedback (like profit and loss), it’s vital. But writing poorly researched and poorly thought-through criticism adds more noise than signal; this makes it harder for us in the future to distinguish the incisive and well-evidenced criticism from the rest, which just harms everyone.

I’ll mention some of the issues that you’ve raised that I think are important to think about, before going on to detail some of the mistakes you make in your post. I’ll note just now that, because of other commitments, this post will be the last I make on this thread.

Some Important Points

Individuals vs Large Organizations

You ask why we focus on individuals, rather than large foundations, or governments, or intergovernmental institutions like the World Bank. This is a good question, and something we wrestle with. Indeed, it's also something we've pursued. The media attention generated by Giving What We Can has provided a platform for Dr Toby Ord, the principal founder of GWWC, to travel to and speak to the UK Secretary of State for Development, the UK's Department for International Development, the Centre for Global Development, 10 Downing Street, the Disease Control Priorities Network, the WHO and, as it happens, the World Bank, about aid cost effectiveness and how to increase it. He has already had some success in this regard, which wouldn't have been possible without GWWC, and he expects to spend a significant proportion of his career on this issue.

The question of whether to spend marginal resources influencing individuals versus governmental and international organisations is non-trivial to answer: international organisations have larger budgets, but are more difficult to access and more difficult to influence. If you think it obvious that we should be influencing the latter, I’d be interested to know your reasons. Later in this response, I’ll discuss your suggestion in more depth.

Transparency

You raised concerns about the transparency of GWWC and 80,000 Hours. I agree that this is something that both organisations could work on. We have taken steps so far in the direction of transparency, especially in making the organisations transparent to donors and potential donors. Both 80k and GWWC have in-depth 6-monthly reviews, where their progress is assessed internally by the trustees (myself, Nick Beckstead and Toby Ord), and externally, by people, often donors, within the effective altruism community who are not closely involved with the running of the organisation. GWWC has posted on this here, and noted that if you want to read the reports from the review you can request them. 80,000 Hours will make a similar post soon.

In addition, at the request of Giles, I opened CEA up for questioning on LessWrong, and wrote a detailed response to the questions posted there. I try to provide in-depth responses to any questions I receive via e-mail. And I provide the spreadsheet and explanation of an in-depth calculation of GWWC’s impact per dollar to anyone who asks (accurate as of ~March 2012 – we plan to do this annually).

One issue in keeping a start-up organisation transparent is that the nature of our activities changes rapidly. The very idea of 80,000 Hours as primarily a service organization, providing free careers advice, was only thought up in early July 2012. People switch positions regularly while we get a better understanding of whose comparative advantage lies where. It's difficult to be transparent and non-misleading when you know that the facts might change radically within the space of a few months. There are also many things to be done, and investing in increased transparency has to be weighed against raising more money, or pledges, or making more career changes. So far, we've focused on being transparent to our donors and potential donors, which I still think is the right call — but it's important to think about and reassess this on a regular basis. I'd welcome further thoughts on whether you think that we've made the wrong trade-off here.

Publishing

You briefly suggest the idea that we should use publications as a metric of research output. This is also something that's worth thinking about. Publishing increases one's academic reputability, and the scrutiny of peer review improves the reliability of one's research. However, it is far more time-consuming than one might expect, because one has to tailor one's research to the norms of the journal, and it is especially slow if one is publishing within philosophy journals. (A paper of mine was under review at one journal for 10 months.) It also biases research towards ideas that are publishable, even if less important. So it's a difficult issue.

For reasons of time, GiveWell don’t publish at all (but the resulting lack of peer review is something I’ve raised as a concern about their research); whereas, in order to boost reputation, MIRI are aiming to publish. At the moment, publishing isn’t a high priority for us, but we do some. I’ve published the central argument in favour of earning to give (it’s forthcoming in Ethical Theory and Moral Practice, available here), and I’m planning to write a book on effective altruism over the next year, from which I might publish a few articles. But beyond that, we’d rather focus on getting the ideas right. However, that’s something we could easily be mistaken about, and is worthy of discussion.

With these points noted, I’ll move on to the mistakes made within the post.

Some misleading aspects of the post

Factual Errors

I mention these in my other comment on this post.

One other thing to note is that the 80k pledge was never focused on global poverty. The previous declaration was:

"I declare that I aim to pursue a career as an effective altruist. This means that I intend to: (i) Devote a significant proportion of my time or resources to helping others. (ii) Use the time or resources I give as effectively as possible in helping others. (iii) Choose my career based at least in part on how it enables me to further my altruistic aims."

And prior to that the declaration was:

"I pledge that, over my lifetime, I will dedicate 10% of my time or money (or any combination of the two) to those causes that I believe will do the most good with the resources I give them. I understand that it is difficult to know the best way of doing good in the world, and so I will choose those cause(s) on the basis of the best evidence that is available to me at the time. Further, I will deliberately pursue a career that will considerably improve my ability to further those causes I believe to be best."

The new declaration is: "I intend, at least in part, to use my career in an effective way to make the world a better place." More discussion on these changes later.

Misleading statements

“In recent conversation with Will Crouch”… “In conversation with Will Crouch”… “these discussions”

I mention this in my other post but it’s worth repeating. Though your post suggests that we had at least two one-on-one conversations, this never happened. We spoke only during a question-and-answer session after a short talk I gave.

“There wasn't any particular defence of the choice of wording [of the 80k declaration of intent] or any indication that there had been deep thought about precisely what that pledge should constitute.”

This is technically true. However, it’s misleading insofar as I wasn’t asked why the declaration of intent was changed, nor was I asked how much time had gone into thinking about revising the declaration of intent.

“The key argument in favour of donating money to CEA which was presented by Will was that by donating $1 to CEA you produce more than $1 in donations to the most effective charities. We present some apparent difficulties with this remaining true on the margin.”

This suggests that your post was primarily about difficulties with inferring marginal cost-effectiveness from past average cost-effectiveness. I think that that’s a very important topic (hey, maybe 99.9% of the value of CEA comes from me! In which case marginal cost-effectiveness would be much lower than past average cost-effectiveness), but as far as I can tell in your post you don’t address that issue anywhere.

Comment by wdmacaskill on CEA does not seem to be credibly high impact · 2013-02-21T23:54:54.222Z · LW · GW

Hi Jonathan,

First off, thanks for putting so much time into writing this extensive list of questions and doubts you have about CEA. Unlike for-profit activities, we don't have immediate feedback effects telling us when we're doing well and when we're doing badly, so criticism is an important countermeasure to make sure we do things as well as possible. We therefore really welcome people taking a critical eye to our activities.

As the person who wrote the original CEA material here on LessWrong, and the person who you mention above, I feel I should be the one to collate a response to your questions. However, because of other commitments (managing; fundraising; writing my first piece for a magazine column), it will be a few days before I can get this to you in a form I'd feel happy with. I hope that's ok.

Before then I'll just mention a few things in order to make things a bit clearer to the audience.

  • In what you wrote, a couple of comments made it sound as if you'd had an in-depth conversation with me on these issues; whereas really the context of the only exchange we've had is my giving a short talk to a group of about 15 people, of very varied backgrounds. You asked a few questions and there was discussion afterwards, but this must have taken up only about 10-15 minutes. Though I would very much like to, I haven't ever spoken with you or Alexey one-on-one.

  • Similarly, in your response to Luke you say that Adam works full-time at CEA. I think there's some disagreement between the two of you on the extent to which he had signed off on the content. But, at any rate, it's worth noting that Adam is an intern at CEA. This means he does contribute a full working week for CEA, but he is not an employee. He's therefore not the person to go to when it comes to high-level evaluation of CEA.

  • You mention an internal estimate of £1700 for the value of a new pledge. None of us are familiar with this figure, and we're confused about where it could have come from.

  • You suggest that CEA has ~4000 people on its mailing lists. The correct figure is less than half that (unless you include TLYCS, which you might have been thinking of, which does have in excess of 4000 on its mailing list).

  • You estimate GWWC's research capacity at 6 staff for last year. This is actually more than an order of magnitude higher than the true figure. In fact, the average number of paid employees (full-time equivalent) we have had working on all aspects of 80,000 Hours and Giving What We Can over the last six months is only 3.7.

As a more general point, I think we should also be careful to distinguish whether CEA has acted optimally in terms of utility-maximization (to which the answer is certainly not), and whether it gets a return on investment which is better than 1:1.

In my follow-up comment, I'll talk about some of the many concerns you've raised that we share, and the issues over which we might be making big mistakes. I'll also be able to give a bit more background about our activities, and I'll be able to answer your questions. Thanks again for taking the time to comment.

Best Wishes,

Will

Comment by wdmacaskill on Responses to questions on donating to 80k, GWWC, EAA and LYCS · 2012-11-23T18:27:48.336Z · LW · GW

At the moment the best thing to do would be to link to each of the organisations' websites individually.

Comment by wdmacaskill on Responses to questions on donating to 80k, GWWC, EAA and LYCS · 2012-11-23T18:24:33.326Z · LW · GW

It's a good point. So far it hasn't been an issue. But if there was someone who we thought was worth the money, and for some good reason simply wouldn't work for less than a certain amount, then we'd pay a higher amount - we don't have a policy that we aren't able to pay any more than £18k.

Comment by wdmacaskill on Responses to questions on donating to 80k, GWWC, EAA and LYCS · 2012-11-23T18:20:34.667Z · LW · GW

Thank you!

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-20T22:42:42.902Z · LW · GW

My response was too long to be a comment so I've posted it here. Thanks all!

Comment by wdmacaskill on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-12T17:08:52.790Z · LW · GW

Can I clarify: I think you meant "CEA" rather than "EAA" in your first question?

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T16:30:56.530Z · LW · GW

Hi - answer to this will be posted along with the responses to other questions on Giles' discussion page. If you e-mail me (will [dot] crouch [at] givingwhatwecan.org) then I can send you the calculations.

Comment by wdmacaskill on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-12T01:18:30.137Z · LW · GW

It's a good question! I was going to respond, but I think that, rather than answering questions on this thread, I'll just let people keep asking questions, and then I'll respond to them all at once - hopefully that'll make things more readable for other users.

Comment by wdmacaskill on Questions from potential donors to Giving What We Can, 80,000 Hours and EAA · 2012-11-12T01:14:55.821Z · LW · GW

Here is the CEA website - but it's just a stub linking to the others.

And no. To my knowledge, we haven't contacted her. From the website, it seems like our approaches are quite different, though the terms we use are similar.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-12T01:10:36.202Z · LW · GW

These are all good questions! Interestingly, they are all relevant to the empirical aspect of a research grant proposal I'm writing. Anyway, our research team is shared between 80,000 Hours and GWWC. They would certainly be interested in addressing all these questions (I think it would officially come under GWWC). I know that those at GiveWell are very interested in at least some of the above questions as well; hopefully they'll write on them soon.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:21:39.747Z · LW · GW

Feel free to post the questions just now, Giles, in case there are others that people want to add.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:20:20.112Z · LW · GW

Thanks for this, this is a common response to earning to give. However, we already have a number of success stories: people who have started their EtG jobs and are loving them.

It's rare that someone has their heart set on a particular career, such as charity work, and then completely changes their plans and begins EtG. Rather, much more common is that someone is thinking "I really want to do [lucrative career X], but I should do something more ethical" or that they think "I'm undecided between lucrative career X, and other careers Y and Z; all look like good options." It's much easier to convince these people.

We certainly want to track behaviour. We will have an annual survey of members, to find out what they are doing, and how much they are giving, and so on. If someone really isn't complying with the spirit of 80k, or with their stated goals, then we'll ask them to leave.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:14:03.146Z · LW · GW

Thanks for this. Asking people "how much would you have pledged?" is of course only a semi-reliable method of ascertaining how much someone actually would have pledged. Some people - like yourself - might neglect the fact that they would have been convinced by the same arguments from other sources; others might be overoptimistic about how their future self would live up to their youthful ideals. We try to be as conservative as reasonable with our assumptions in this area: we take the data and then err on the side of caution. We assumed that 54% of the pledged donations would have happened anyway, that 25% of donations would have gone to comparably good charities, and that we have a dropout rate amortized over time equivalent to 50% of people dropping out immediately. It's possible that these assumptions still aren't conservative enough.
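
To illustrate how conservative those assumptions are in combination (the exact way they interact isn't spelled out above, so treat this as a rough sketch rather than the actual spreadsheet):

```python
# One way the conservative adjustments above might combine; illustrative only.
pledged = 1.0                      # normalise total pledges to 1
would_have_given_anyway = 0.54     # fraction of pledged donations assumed counterfactual
comparably_good_anyway = 0.25      # fraction assumed to have gone to comparably good charities anyway
effective_dropout = 0.50           # dropout amortised over time, as if 50% dropped out immediately

attributable = pledged * (1 - would_have_given_anyway) * (1 - comparably_good_anyway) * (1 - effective_dropout)
print(attributable)   # ~0.17 -- only about a sixth of the headline pledge total gets counted
```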

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:07:45.532Z · LW · GW

That's right. If there's a lot of concern, we can write up what we already know, and look into it further - we're very happy to respond to demand. This would naturally go under EAA research.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:03:40.010Z · LW · GW

Thanks benthamite, I think everything you said above was accurate.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T17:02:26.259Z · LW · GW

It would be good to have more analysis of this.

Is saving someone from malaria really the most cost-effective way to speed technological progress per dollar?

The answer is that I don't know. Perhaps it's better to fund technology directly. But the benefit:cost ratio tends to be incredibly high for the best developing world interventions. So the best developing world health interventions would at least be contenders. In the discussion above, though, preventing malaria doesn't need to be the most cost-effective way of speeding up technological progress. The point was only that that benefit outweighs the harm done by increasing the amount of farming.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-11T16:58:48.807Z · LW · GW

On (a). The argument for this is based on the first half of Bostrom's Astronomical Waste. In saving someone's life (or some other good economic investment), you move technological progress forward by a tiny amount. The benefit you produce is the difference you make at the end of civilisation, when there's much more at stake than there is now.

It's almost certainly more like -10,000N

I'd be cautious about making claims like this. We're dealing with tricky issues, so I wouldn't claim to be almost certain about anything in this area. The numbers I used in the above post were intended to be purely illustrative, and I apologise if they came across as being more definite than that.

Why might I worry about the -10,000N figure? Well, first, the number you reference is the number of animals eaten in a lifetime by an American - the greatest per capita meat consumers in the world. I presume that the number is considerably smaller for those in developing countries, and there is considerably less reliance on factory farming.

Even assuming we were talking about American lives, is the suffering that an American causes 10,000 times as great as the happiness of their lives? Let's try a back of the envelope calculation. Let's accept that 21000 figure. I can't access the original source, but some other digging suggests that this breaks down into: 17,000 shellfish, 1700 other fish, 2147 chickens, with the rest constituting a much smaller number. I'm really not sure how to factor in shellfish and other fish: I don't know if they have lives worth living or not, and I presume that most of these are farmed, so wouldn't have existed were it not for farming practices. At any rate, from what I know I suspect that factory farmed chickens are likely to dominate the calculation (but I'm not certain). So let's focus on the chickens. The average factory farmed chicken lives for 6 weeks, so that's 252 factory farmed chicken-years per American lifetime. Assuming the average American lives for 70 years, one American life-year produces 3.6 factory farmed chicken years. What should our tradeoff be between producing factory farmed chicken-years and American human-years? Perhaps the life of the chicken is 10x as bad as the American life is good (that seems a high estimate to me, but I really don't know): in which case we should be willing to shorten an American's life by 10 years in order to prevent one factory-farmed chicken-year. That would mean that, if we call one American life a good of unit 1, the American's meat consumption produces -36 units of value.

In order to get this estimate up to -10 000 units of value, we'd need to multiply that trade-off by 277: we should be indifferent between producing 2770 years of American life and preventing the existence of 1 factory farmed chicken-year (that is, we should be happy letting roughly 40 vegan American children die in order to prevent 1 factory farmed chicken-year). That number seems too high to me; if you agree, perhaps you think that fish or shellfish suffering is the dominant consideration. Or you might bring in non-consequentialist considerations; as I said above, I think that the meat eater problem is likely more troubling for non-consequentialists.
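
For ease of checking, here is that back-of-the-envelope calculation laid out step by step (same purely illustrative numbers as above; nothing here is a serious estimate):

```python
# The back-of-the-envelope calculation above, spelled out with the same illustrative numbers.
chicken_years = 252     # factory-farmed chicken-years per American lifetime (2147 chickens x ~6 weeks)
lifetime = 70           # years in an American life; one such life = 1 unit of value

tradeoff = 10           # one chicken-year taken to be 10x as bad as one American year is good
disvalue = chicken_years * tradeoff / lifetime
print(disvalue)         # 36.0 -> the "-36 units" figure

# To reach -10,000 units, the trade-off has to be scaled up by a factor of ~277:
scaled_tradeoff = tradeoff * 10_000 / disvalue
print(round(scaled_tradeoff))             # ~2,778 American years per chicken-year (the ~2,770 above)
print(round(scaled_tradeoff / lifetime))  # ~40 American lifetimes per chicken-year
```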

At any rate, this is somewhat of a digression. If one thought that meat eater worries were strong enough that donating to GWWC or 80k was a net harm, I would think that a reasonable view (and one could give further arguments in favour of it, that we haven't discussed), though not my own one for the reasons I've outlined. We knew that something animal welfare focused had been missing from CEA for too long and for that reason set up Effective Animal Activism - currently a sub-project of 80k, but able to accept restricted donations and, as it grows, likely to become an organisation in its own right. So if one thinks that animal welfare charities are likely to be the most cost-effective charities, and one finds the meta-charity argument plausible, then one might consider giving to EAA.

Comment by wdmacaskill on Welcome to Less Wrong! (July 2012) · 2012-11-11T00:01:15.677Z · LW · GW

Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.

"Separability" of value just means being able to evaluate something without having to look at anything else. I think that, whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it's good or bad to bring that person into existence.

But, let's return to the intuitive case above, and make it a little stronger.

Now suppose:

Population A: 1 person suffering a lot (utility -10)

Population B: That same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering -9.9.

Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. I.e. Average utilitarianism is willing to add horrendous suffering to someone's already horrific life, in order to bring into existence many other people with horrific lives.
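
To spell out the entailment (just a worked check of the claim, writing u-bar for average utility):

```latex
\[
\bar{u}_A = -10, \qquad
\bar{u}_B = \frac{-n - 9.9\,m}{m + 1} \longrightarrow -9.9 \quad \text{as } m \to \infty .
\]
```

Since -9.9 > -10, for any fixed n a sufficiently large m makes Population B's average higher, which is exactly the commitment described above.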

Do you still get the intuition in favour of average here?

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T23:56:39.722Z · LW · GW

By the way, thanks for the comments! Seeing as the post is getting positive feedback, I'm going to promote it to the main blog.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T23:49:58.771Z · LW · GW

In order to get exceptional value for money you need to (correctly) believe that you are smarter than the big donors - otherwise they'd already have funded whatever you're planning on funding to the point where the returns diminish to the same level as everything else.

That's if you think that the big funders are rational and have similar goals to you. I think assuming they are rational is pretty close to the truth (though I'm not sure: charity doesn't have the same feedback mechanisms as business, because if you do badly you don't get punished in the same way). beoShaffer suggests that they just have different goals - they are aiming to make themselves look good, rather than do good. I think that could explain a lot of cases, but not all - e.g. it just doesn't seem plausible to me for the Gates Foundation.

So I ask myself: why doesn't Gates spend much more money on increasing revenue to good causes, through advertising etc? One answer is that he does spend such money: the Giving Pledge must be the most successful meta-charity ever. Another is that charities are restricted in how they can act by cultural norms. E.g. if they spent loads of money on advertising, then their reputation would take a big enough hit to outweigh the benefits through increased revenue.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T23:41:06.528Z · LW · GW

I wouldn't want to commit to an answer right now, but the Hansonian Hypothesis does make the right prediction in this case. If I'm directly helping, it's very clear that I have altruistic motives. But if I'm doing something much more indirect, then my motives become less clear. (E.g. if I go into finance in order to donate, I no longer look so different from people who go into finance in order to make money for themselves). So you could take the absence of meta-charity as evidence in favour of the Hansonian Hypothesis.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T23:33:34.221Z · LW · GW

That's the hope! See below.

Comment by wdmacaskill on Giving What We Can, 80,000 Hours, and Meta-Charity · 2012-11-10T23:33:13.181Z · LW · GW

Hey,

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important one. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.

We did a little bit of impact assessment for 80k (again, with a sample of 26 members). When we did, the estimates were even more optimistic than for GWWC. But we'd like to get a firmer data set before going public with any numbers.

Though I was deeply troubled by the poor meat eater problem for some time, I've come to the conclusion that it isn't that bad (for utilitarians - I think it's much worse for non-consequentialists, though I'm not sure).

The basic idea is as follows. If I save the life of someone in the developing world, almost all the benefit I produce is through compounding effects: I speed up technological progress by a tiny margin, giving us a little bit more time at the end of civilisation, when there are far more people. This benefit dwarfs the benefit to the individual whose life I've saved (as Bostrom argues in the first half of Astronomical Waste). Now, I also increase the amount of animal suffering, because the person whose life I've saved consumes meat, and I speed up development of the country, which means that the country starts factory farming sooner. However, we should expect (or, at least, I expect) factory farming to disappear within the next few centuries, as cheaper and tastier meat substitutes are developed. So the increase in animal suffering doesn't compound in the same way: whereas the benefits of saving a life continue until the human race (or its descendants) dies out, the harm of increasing meat consumption ends after only a few centuries (when we move beyond farming).

So let's say the benefit to the person from having their life saved is N. The magnitude of the harm from increasing factory farming might be a bit more than N: maybe 10N. But the benefit from speeding up technological progress is vastly greater than that: 1000N, or something. So it's still a good thing to save someone's life in the developing world. (Though of course, if you take the arguments about x-risk seriously, then alleviating global poverty is dwarfed by existential risk mitigation).
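
Putting those illustrative numbers together: the net value of saving a life comes out at roughly 1000N (compounding benefits) + N (direct benefit) - 10N (extra factory farming) ≈ 991N, which is still strongly positive. On these guesses the conclusion would only flip if the farming harm were about a hundred times larger than assumed, i.e. bigger than the roughly 1000N of benefits.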

Comment by wdmacaskill on Welcome to Less Wrong! (July 2012) · 2012-11-09T19:48:26.164Z · LW · GW

Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!

Comment by wdmacaskill on Welcome to Less Wrong! (July 2012) · 2012-11-09T19:42:13.120Z · LW · GW

Haha! I don't think I'm worthy of squeeing, but thank you all the same.

In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:

Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.

Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9.

Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
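
(Spelling out the averages: A's is -100 and B's is -99.9, and -99.9 > -100, so average utilitarianism ranks B higher despite the 100 billion extra lives of horrific suffering.)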

Comment by wdmacaskill on Welcome to Less Wrong! (July 2012) · 2012-11-09T19:38:52.394Z · LW · GW

Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.

Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
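
To give a sense of the shape of that proposal, here is a rough sketch in code. The theories, options, credences and numbers are made up purely for illustration, and centring each theory's utilities at their mean (as well as scaling to unit variance) is just one way of filling in the details:

```python
# Rough sketch of "normalise at the variance, then maximise expected value".
# Everything below (theories, options, credences, utilities) is a toy example.
import statistics

def normalise(utilities):
    """Rescale one theory's utilities over the options to mean 0, variance 1."""
    values = list(utilities.values())
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return {option: (u - mean) / sd for option, u in utilities.items()}

def best_option(credences, theory_utilities):
    """Pick the option with the highest credence-weighted normalised utility."""
    normalised = {t: normalise(u) for t, u in theory_utilities.items()}
    options = list(next(iter(theory_utilities.values())))
    def expected_choiceworthiness(option):
        return sum(credences[t] * normalised[t][option] for t in credences)
    return max(options, key=expected_choiceworthiness)

# Two population-ethics views, equal credence, disagreeing over three options.
credences = {"total": 0.5, "average": 0.5}
theory_utilities = {
    "total":   {"A": 0.0, "B": 5.0, "C": 2.0},
    "average": {"A": 1.0, "B": -3.0, "C": 0.5},
}
print(best_option(credences, theory_utilities))  # the compromise option C wins here
```

The point of the variance step is that a theory can't gain influence merely by expressing its verdicts on a grander numerical scale: after normalisation each theory's spread of choiceworthiness across the options counts the same, and only one's credences do the weighting.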

Sorry if that was a bit of a complex response to a simple question!

Comment by wdmacaskill on Welcome to Less Wrong! (July 2012) · 2012-11-09T17:57:42.979Z · LW · GW

Hi All,

I'm Will Crouch. Other than one earlier comment, this is my first on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).

Comment by wdmacaskill on [LINK] The most important unsolved problems in ethics · 2012-10-22T20:32:43.787Z · LW · GW

Hi all,

It's Will here. Thanks for the comments. I've responded to a couple of themes in the discussion below over at the 80,000 Hours blog, which you can check out if you'd like. I'm interested to see the results of this poll!