Transparency and Accountability

post by multifoliaterose · 2010-08-21T13:01:24.750Z · LW · GW · Legacy · 145 comments


[Added 02/24/14: After writing this post, I discovered that I had miscommunicated owing to not spelling out my thinking in sufficient detail, and also realized that it carried unnecessary negative connotations (despite conscious effort on my part to avoid them). See Reflections on a Personal Public Relations Failure: A Lesson in Communication. SIAI (now MIRI) has evolved substantially since 2010 when I wrote this post, and the criticisms made in the post don't apply to MIRI as presently constituted.]

Follow-up to: Existential Risk and Public Relations, Other Existential Risks, The Importance of Self-Doubt

Over the last few days I've made a string of posts levying strong criticisms against SIAI. This activity is not one that comes naturally to me. In The Trouble With Physics Lee Smolin writes

...it took me a long time to decide to write this book. I personally dislike conflict and confrontation [...] I kept hoping someone in the center of string-theory research would write an objective and detailed critique of exactly what has and has not been achieved by the theory. That hasn't happened.

My feelings about and criticisms of SIAI are very much analogous to Smolin's feelings about and criticisms of string theory. Criticism hurts feelings, and I feel squeamish about hurting them. I've found the process of presenting my criticisms of SIAI emotionally taxing and exhausting, and I fear that if I persist for too long I'll move into the region of negative returns. For this reason I've decided to cut my planned sequence of posts short and explain what my goal has been in posting as I have.

Edit: Removed irrelevant references to VillageReach and StopTB, modifying post accordingly.  

As Robin Hanson never ceases to emphasize, there's a disconnect between what humans say they're trying to do and what their revealed goals are. Yvain has written about this topic recently in his post Conflicts Between Mental Subagents: Expanding Wei Dai's Master-Slave Model. This problem becomes especially acute in the domain of philanthropy. Three quotes on this point:

(1) In Public Choice and the Altruist's Burden Roko says:

The reason that we live in good times is that markets give people a selfish incentive to seek to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate to choosing actions that maximized their own individual utility, finding local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, which we all benefit enormously from.  

The problem with charity, and especially efficient charity, is that the incentives for people to contribute to it are all messed up, because we don't have something analogous to the financial system for charities to channel incentives for efficient production of utility back to the producer.

(2) In My Donation for 2009 (guest post from Dario Amodei) Dario says:

I take Murphy’s Law very seriously, and think it’s best to view complex undertakings as going wrong by default, while requiring extremely careful management to go right. This problem is especially severe in charity, where recipients have no direct way of telling donors whether an intervention is working.

(3) In private correspondence about career choice, Holden Karnofsky said:

For me, though, the biggest reason to avoid a job with low accountability is that you shouldn't trust yourself too much.  I think people respond to incentives and feedback systems, and that includes myself.  At GiveWell I've taken some steps to increase the pressure on me and the costs if I behave poorly or fail to add value.  In some jobs (particularly in nonprofit/govt) I feel that there is no system in place to help you figure out when you're adding value and incent you to do so.  That matters a lot no matter how altruistic you think you are.

I believe that the points that Robin, Yvain, Roko, Dario and Holden have made provide a compelling case for the idea that charities should strive toward transparency and accountability. As Richard Feynman has said:

The first principle is that you must not fool yourself – and you are the easiest person to fool.

Because it's harder to fool others than it is to fool oneself, I think that the case for making charities transparent and accountable is very strong.

SIAI does not presently exhibit high levels of transparency and accountability. I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst. For this reason, together with the concerns which I express in Existential Risk and Public Relations, I believe that saving money in a donor-advised fund with a view toward donating to a transparent and accountable future existential risk organization has higher expected value than donating to SIAI now.

Because I take astronomical waste seriously and believe in shutting up and multiplying, I believe that reducing existential risk is ultimately more important than developing world aid. I would very much like it if there were a highly credible existential risk charity. At present, I do not feel that SIAI is a credible existential risk charity. One LW poster sent me a private message saying:

I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists

I do not believe that Eliezer is consciously attempting to engage in a scam to live off of donations, but I believe that (like all humans) he is subject to subconscious influences which may lead him to act as though he were consciously running a scam to live off of the donations of nonconformists. In light of Hanson's points, it would not be surprising if this were the case. The very fact that I received such a message is a sign that SIAI has public relations problems.

I encourage LW posters who find this post compelling to visit and read the materials available at GiveWell, which is, as far as I know, the only charity evaluator that places high emphasis on impact, transparency and accountability. I encourage LW posters who are interested in existential risk to contact GiveWell expressing interest in GiveWell evaluating existential risk charities. I would also note that LW posters who are interested in finding transparent and accountable organizations may find it useful to donate to GiveWell's recommended charities as a way of signaling seriousness to the GiveWell staff.

I encourage SIAI to strive toward greater transparency and accountability. For starters, I would encourage SIAI to follow the example set by GiveWell and put a page on its website called "Mistakes" publicly acknowledging its past errors. I'll also note that GiveWell incentivizes charities to disclose failures by granting them a 1-star rating. As Elie Hassenfeld explains:

As usual, we’re not looking for marketing materials, and we won’t accept “weaknesses that are really strengths” (or reports that blame failure entirely on insufficient funding/support from others). But if you share open, honest, unadulterated evidence of failure, you’ll join a select group of organizations that have a GiveWell star.

I believe that the fate of humanity depends on the existence of transparent and accountable organizations. This is both because I believe that transparent and accountable organizations are more effective and because I believe that people are more willing to give to them. As Holden says:

I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.

[...]

Perhaps ironically, if you want a good response to Prof. Hanson’s view, I can’t think of a better place to turn than GiveWell’s top-rated charities. We have done the legwork to identify charities that can convincingly demonstrate positive impact. No matter what one thinks of the sector as a whole, they can’t argue that there are no good charitable options - charities that really will use your money to help people - except by engaging with the specifics of these charities’ strong evidence.

Valid observations that the sector is broken - or not designed around helping people - are no longer an excuse not to give.

Because our Bayesian prior is so skeptical, we end up with charities that you can be confident in, almost no matter where you’re coming from.

I believe that at present the most effective way to reduce existential risk is to work toward the existence of a transparent and accountable existential risk organization.

Added 08/23:

145 comments

Comments sorted by top scores.

comment by Airedale · 2010-08-21T16:32:09.486Z · LW(p) · GW(p)

Your posts on SIAI have had a veneer of evenhandedness and fairness, and that continues here. But given what you don’t say in your posts, I cannot avoid the impression that you started out with the belief that SIAI was not a credible charity and rather than investigating the evidence both for and against that belief, you have marshaled the strongest arguments against donating to SIAI and ignored any evidence in favor of donating to SIAI. I almost hesitate to link to EY lest you dismiss me as one of his acolytes, but see, for example, A Rational Argument.

In your top-level posts you have eschewed references to any of the publicly visible work that SIAI does such as the Summit and the presentation and publication of academic papers. Some of this work is described at this link to SIAI’s description of its 2009 achievements. The 2010 Summit is described here. As for Eliezer’s current project, at the 2009 achievements link, SIAI has publicized the fact that he is working on a book on rationality:

Yudkowsky is now converting his blog sequences into the planned rationality book, which he hopes will significantly assist in attracting and inspiring talented individuals to effectively work towards the aims of a beneficial Singularity and reduced existential risk.

You could have chosen to make part of your evaluation of SIAI an analysis of whether or not EY’s book will ultimately be successful in this goal or whether it’s the most valuable work that EY should be doing to reduce existential risk, but I’m not sure how his work on transforming the fully public LW sequences into a book is insufficiently transparent or not something for which he and SIAI can be held accountable when it is published.

Moreover, despite your professed interest in existential risk reduction and references in others’ comments to your posts about the Future of Humanity Institute at Oxford, you suggest donating to Givewell-endorsed charities as an alternative to SIAI donations without even a mention of FHI as a possible alternative in the field of existential risk reduction. Perhaps you find FHI equally non-credible/non-accountable as a charity, but whatever FHI’s failings, it’s hard to see how they are exactly the same ones which you have ascribed to SIAI. Perhaps you believe that if a charity has not been evaluated and endorsed by Givewell, it can’t possibly be worthwhile. I can’t avoid the thought that if you were really interested in existential risk reduction, you would spend at least some tiny percentage of the time you’ve spent writing up these posts against SIAI on investigating FHI as an alternative.

I would be happy to engage with you or others on the site in a fair and unbiased examination of the case for and against SIAI (and/or FHI, the Foresight Institute, the Lifeboat Foundation, etc.). Although I may come across as strongly biased in favor of SIAI in this comment, I have my own concerns about SIAI’s accountability and public relations, and have had numerous conversations with those within the organization about those concerns. But with limited time on my hands and faced with such a one-sided and at times even polemical presentation from you, I find myself almost forced into the role of SIAI defender, so that I can at least provide some of the positive information about SIAI that you leave out.

Replies from: XiXiDu, multifoliaterose, multifoliaterose
comment by XiXiDu · 2010-08-21T17:47:46.461Z · LW(p) · GW(p)

I cannot avoid the impression that you started out with the belief that SIAI was not a credible charity and rather than investigating the evidence both for and against that belief, you have marshaled the strongest arguments against donating to SIAI and ignored any evidence in favor of donating to SIAI.

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse." -- Black Belt Bayesian

If multifoliaterose took the position of an advocatus diaboli, what would be wrong with that?

Replies from: Airedale, wedrifid
comment by Airedale · 2010-08-21T19:06:28.830Z · LW(p) · GW(p)

Although I always love a good quote from Black Belt Bayesian (a/k/a steven0461 a/k/a my husband), I think he’s on board with my interpretation of multifoliaterose’s posts. (At least, he’d better be!)

Going on to the substance, it doesn’t seem that multifoliaterose is just playing devil’s advocate here rather than arguing his actual beliefs – indeed everything he’s written suggests that he’s doing the latter. Beyond that, there may be a place for devil’s advocacy (so long as it doesn’t cross the line into mere trolling, which multifoliaterose’s posts certainly do not) at LW. But I think that most aspiring rationalists (myself included) should still try to evaluate evidence for and against some position, and only tread into devil’s advocacy with extreme caution, since it is a form of argument where it is all too easy to lose sight of the ultimate goal of weighing the available evidence accurately.

Replies from: XiXiDu
comment by XiXiDu · 2010-08-21T19:47:34.067Z · LW(p) · GW(p)

Although I always love a good quote from Black Belt Bayesian (a/k/a steven0461 a/k/a my husband)

Wow, I managed to walk into the lion's den there!

Going on to the substance, it doesn’t seem that multifoliaterose is just playing devil’s advocate here...

Yeah, I wasn't actually thinking that to be the case either. But since nobody else seems to be following your husband's advice...at least someone tries to argue against the SIAI. Good criticism can be a good thing.

...and only tread into devil’s advocacy with extreme caution...

I see, I'll take your word for it. I haven't thought about it too much. Until now I had thought your husband's quote was universally applicable.

comment by wedrifid · 2010-08-21T22:08:49.397Z · LW(p) · GW(p)

If multifoliaterose took the position of a advocatus diaboli, what would be wrong with that?

Multi has already refuted the opponent's arguments; well, at least Eliezer more or less refuted them for him. Now it is time to do just what Black Belt Bayesian suggested and try to fix the SIAI's arguments for them. Because advocacy - including devil's advocacy - is mostly bullshit.

Remind SIAI of what they are clearly doing right and also just what a good presentation of their strengths would look like - who knows, maybe it'll spur them on and achieve in some measure just the kind of changes you desire!

Replies from: XiXiDu
comment by XiXiDu · 2010-08-22T12:05:11.749Z · LW(p) · GW(p)

Interesting! Levels of epistemic accuracy:

So while telling the truth is maximally accurate relative to your epistemic state, concealment is deception by misguidance, which is worse than the purest form of deception, namely lying (falsehood). Bullshit, however, is not even wrong.

I don't see how devil's advocacy fits into this, as I perceive it to be a temporary adjustment of someone's mental angle, to look back at one's own position from a different point of view.

comment by multifoliaterose · 2010-08-23T12:37:16.420Z · LW(p) · GW(p)

Perhaps you find FHI equally non-credible/non-accountable as a charity, but whatever FHI’s failings, it’s hard to see how they are exactly the same ones which you have ascribed to SIAI. Perhaps you believe that if a charity has not been evaluated and endorsed by Givewell, it can’t possibly be worthwhile. I can’t avoid the thought that if you were really interested in existential risk reduction, you would spend at least some tiny percentage of the time you’ve spent writing up these posts against SIAI on investigating FHI as an alternative.

See my response to Jordan's comment.

comment by multifoliaterose · 2010-08-21T17:48:57.199Z · LW(p) · GW(p)

Hi Airedale,

Thanks for your thoughtful comments. I'm missing a good keyboard right now so can't respond in detail, but I'll make a few remarks.

I'm well aware that SIAI has done some good things. The reason why I've been focusing on the apparent shortcomings of SIAI is to encourage SIAI to improve its practices. I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability, and I believe that this justifies giving to VillageReach over SIAI.

But I'm definitely open to donating to and advocating that others donate to SIAI and FHI in the future provided that such organizations clear certain standards for transparency and accountability and provide a clear and compelling case for room for more funding.

Again, I would encourage you (and others) who are interested in existential risk to write to the GiveWell staff requesting that GiveWell evaluate existential risk organizations including SIAI and FHI. I would like to see GiveWell do such work soon.

Replies from: WrongBot, Jordan, wedrifid
comment by WrongBot · 2010-08-21T21:32:29.662Z · LW(p) · GW(p)

I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability and I believe that this justifies giving to VillageReach over SIAI.

What about everything else that isn't the margin? What is your expected value of SIAI's public accomplishments, to date, in human lives saved? What is that figure for VillageReach? Use pessimistic figures for SIAI and optimistic ones for VillageReach if you must, but come up with numbers and then multiply them. Your arguments are not consistent with expected utility maximization.
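To make the "come up with numbers and then multiply them" step concrete, here is a minimal sketch of that comparison in Python. Every figure in it is an invented placeholder, not an estimate anyone in this thread has endorsed; the point is only the structure of the calculation.

```python
# Illustrative only: all figures are invented placeholders, not actual
# estimates for SIAI or VillageReach.

# Pessimistic guess for SIAI: probability that its work averts an existential
# catastrophe, times the number of lives at stake, divided by its budget.
p_siai_averts_catastrophe = 1e-7   # deliberately pessimistic placeholder
lives_at_stake = 7e9               # counting only currently living people
siai_budget = 5e5                  # placeholder budget in dollars

siai_lives_per_dollar = (p_siai_averts_catastrophe * lives_at_stake) / siai_budget

# Optimistic guess for VillageReach, in the units GiveWell-style evaluations use.
villagereach_cost_per_life = 500.0  # optimistic placeholder, dollars per life saved
villagereach_lives_per_dollar = 1.0 / villagereach_cost_per_life

print(f"SIAI (pessimistic):        {siai_lives_per_dollar:.2e} lives/dollar")
print(f"VillageReach (optimistic): {villagereach_lives_per_dollar:.2e} lives/dollar")
```

With these particular placeholders the two figures land within a factor of two of each other, which mainly illustrates how sensitive the conclusion is to the probability one assigns to SIAI's work mattering.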

You would be much better off if you were directly offering SIAI financial incentives to improve the expected value of its work. Donating to VillageReach is not the optimal use of money for maximizing what you value.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T21:37:45.988Z · LW(p) · GW(p)

You would be much better off if you were directly offering SIAI financial incentives to improve the expected value of its work. Donating to VillageReach is not the optimal use of money for maximizing what you value.

You may well be right about this; I'll have to think some more about it :-). Thanks for raising this issue.

comment by Jordan · 2010-08-21T20:26:46.663Z · LW(p) · GW(p)

You've provided reasons for why you are skeptical of the ability of SIAI to reduce existential risk. It's clear you've dedicated a good amount of effort to your investigation.

Why are you content to leave the burden of investigating FHI's abilities to GiveWell, rather than investigate yourself, as you have with SIAI?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-23T12:36:19.355Z · LW(p) · GW(p)

Why are you content to leave the burden of investigating FHI's abilities to GiveWell, rather than investigate yourself, as you have with SIAI?

The reason that I have not investigated FHI is simply because I have not gotten around to doing so. I do plan to change this soon. I investigated SIAI first because I came into contact with SIAI before I came into contact with FHI.

My initial reaction to FHI is that it looks highly credible to me, but that I doubt that it has room for more funding. However, I look forward to looking more closely into this matter in the hopes of finding a good opportunity for donors to lower existential risk.

Replies from: Airedale
comment by Airedale · 2010-08-23T14:53:47.228Z · LW(p) · GW(p)

You should definitely do research to confirm this on your own, but the last I heard (somewhat informally through the grapevine) was that FHI does indeed have room for more funding, for example, in the form of funding for an additional researcher or post-doc to join their team. You can then evaluate whether an additional academic trying to research and publish in these areas would be useful, but given how small the field currently is, my impression would be that an additional such academic would probably be helpful.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-23T14:55:35.233Z · LW(p) · GW(p)

Thanks for the info.

comment by wedrifid · 2010-08-21T22:04:40.310Z · LW(p) · GW(p)

I'm well aware that SIAI has done some good things. The reason why I've been focusing on the apparent shortcomings of SIAI is to encourage SIAI to improve its practices. I do believe that at the margin the issue worthy of greatest consideration is transparency and accountability

I would very much like to see what positive observations you have made during your research into the SIAI. I know that you believe there is plenty of potential - there would be no reason to campaign for improvements if you didn't see the chance that it would make a difference. That'd be pointless or counter-productive for your interests, given that it certainly doesn't win you any high-status friends!

How about you write a post on a different issue regarding SIAI or FAI in general using the same standard of eloquence that you have displayed?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-22T20:48:12.000Z · LW(p) · GW(p)

wedrifid - Thanks for your kind remarks. As I said in my top level post, I'll be taking a long break from LW. As a brief answer to your question:

(a) I think that Eliezer has inspired people (including myself) to think more about existential risk and that this will lower existential risk. I thank Eliezer for this.

(b) I think that Less Wrong has provided a useful venue for smart people (of a certain kind) to network and find friends and that this too will lower existential risk.

(c) Most of what I know about the good things that SIAI has done on an institutional level are from Carl Shulman. You might like to ask him for more information.

comment by Scott Alexander (Yvain) · 2010-08-21T23:45:15.607Z · LW(p) · GW(p)

The bulk of this is about a vague impression that SIAI isn't transparent and accountable. You gave one concrete example of something they could improve: having a list of their mistakes on their website. This isn't a bad idea, but AFAIK GiveWell is about the only charity that currently does this, so it doesn't seem like a specific failure on SIAI's part not to include this. So why the feeling that they're not transparent and accountable?

SIAI's always done a good job of letting people know exactly how it's raising awareness - you can watch the Summit videos yourself if you want. They could probably do a bit more to publish appropriate financial records, but I don't think that's your real objection. Besides that, what? Anti-TB charities can measure how much less TB there is per dollar invested; SIAI can't measure what percentage safer the world is, since the world-saving is still in basic research phase. You can't measure the value of the Manhattan Project in "cities destroyed per year" while it's still going on.

By the Outside View, charities that can easily measure their progress with statistics like "cases of TB prevented" are better than those that can't. By the Outside View, charities that employ people who don't sound like megalomaniacal mad scientists are better than those that employ people who do. By the Outside View, charities that don't devote years of work to growing and raising awareness before really starting working full-time on their mission are better than ones that do. By the Outside View, SIAI is a sucky charity, and they know it.

There are some well-documented situations where the Outside View is superior to the Inside View, but there are also a lot of cases where it isn't - a naive Outside Viewist would have predicted Obama would've lost the election, even when he was way ahead in the polls, because by the Outside View black people don't become President. To the degree you have additional evidence, and to the degree that you trust yourself to only say you have additional evidence when you actually have additional evidence and not when you're trying to make excuses for yourself, the Inside View is superior to the Outside View. The Less Wrong Sequences are several hundred really comprehensive blog posts worth of additional evidence trying to convey Inside information on why SIAI and its strategy aren't as crazy as they sound; years of interacting with SIAI people is Inside information on whether they're honest and committed. I think these suffice to shift my probability estimates: not all the way, but preventing the apocalypse is the sort of thing one only needs a small probability to start thinking about.

The other Outside View argument would be that, whether or not you trust SIAI, it's more important to signal that you only donate to transparent and accountable organizations, in order to TDT your way into making other people only donate to transparent and accountable organizations and convince all charities to become transparent and accountable. This is a noble idea, but the world being destroyed by unfriendly AI would throw a wrench into the "improve charity" plan, so this would be an excellent time to break your otherwise reasonable rule.

Replies from: multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-08-23T11:04:34.518Z · LW(p) · GW(p)

In addition to the points that I made in my other response to your comment, I would add that the SIAI staff have not created an environment which welcomes criticism from outsiders.

The points in my other response were well considered, and yet as I write, the response has been downvoted three times, so that it is now hidden from view.

I see Eliezer's initial response to XiXiDu's post Should I believe what the SIAI claims? as evidence that the SIAI staff have gotten in the habit of dismissing criticisms out of hand whenever they question the credibility of the critic. This habit is justified up to a point, but my interactions with the SIAI staff have convinced me that (as a group) they go way too far, creating a selection effect which subjects them to confirmation bias and groupthink.

In SIAI's defense, I'll make the following three points:

•Michael Vassar corresponded with me extensively in response to the criticisms of SIAI which I raised in the comments to my post (One reason) why capitalism is much maligned and Roko's post Public Choice and the Altruist's Burden.

•As prase remarks, the fact that my top level posts have not been censored is an indication that "LW is still far from Objectivism."

•SIAI staff member Jasen sent me a private message thanking me for making my two posts Existential Risk and Public Relations and Other Existential Risks and explaining that SIAI plans to address these points in the future.

I see these things as weak indications that SIAI may take my criticisms seriously. Nevertheless, I perceive SIAI's openness to criticism up until this point to be far lower than GiveWell's openness to criticism up until this point.

For examples of GiveWell's openness to criticism, see Holden Karnofsky's back and forth with Laura Deaton here and the threads on the GiveWell research mailing list on Cost-Effectiveness Estimates, Research Priorities and Plans and Environmental Concerns and International Aid as well as Holden's posting Population growth & health.

As you'll notice, the GiveWell staff are not arbitrarily responsive to criticism - if they were then they would open themselves up to the possibility of never getting anything done. But their standard for responsiveness is much higher than anything that I've seen from SIAI. For example, compare Holden's response to Laura Deaton's (strong) criticism with Eliezer's response to my top level post.

In order for me to feel comfortable donating to SIAI I would need to see SIAI staff exhibiting a level of responsiveness and engagement comparable to the level that the GiveWell staff have exhibited in the links above.

comment by multifoliaterose · 2010-08-22T21:16:15.270Z · LW(p) · GW(p)

Yvain,

Thanks for your feedback.

So why the feeling that they're not transparent and accountable?

•As I discuss in Other Existential Risks I feel that SIAI has not (yet) provided a compelling argument for the idea that focus on AI is the most cost-effective way of reducing existential risk. Obviously I don't expect an airtight argument (as it would be impossible to offer one), but I do feel SIAI needs to say a lot more on this point. I'm encouraged that SIAI staff have informed me that more information on this point will be forthcoming in future blog posts.

•I agree with SarahC's remarks here and here on the subject of there being a "problem with connecting to the world of professional science."

•I agree with you that SIAI and its strategy aren't as crazy as they sound at first blush. I also agree that the Less Wrong sequences suffice to establish that SIAI has some people of very high intellectual caliber and that this distinguishes SIAI from most charities. At present, despite these facts I'm very skeptical of the idea that SIAI's approach to reducing existential risk is the optimal one for the reasons given in the two bullet points above.

•Regarding:

This is a noble idea, but the world being destroyed by unfriendly AI would throw a wrench into the "improve charity" plan, so this would be an excellent time to break your otherwise reasonable rule.

Here the question is just whether there's sufficient evidence that donating to SIAI has sufficiently high (carefully calibrated) expected value to warrant ignoring incentive effects associated with high focus on transparency and accountability.

I personally am very skeptical that this question has an affirmative answer. I may be wrong.

comment by CarlShulman · 2010-08-21T18:16:45.560Z · LW(p) · GW(p)

I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.

The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me. Consider these three possibilities:

1) SIAI at the margin has a negative expected impact on our chances of avoiding existential risks, so shouldn't be donated to. VillageReach is irrelevant and adds nothing to the argument, you could have said "aggregative utilitarians would do better to burn their cash." Why even distract would-be efficient philanthropists with this rather than some actual existential-risk-focused endeavour, e.g. FHI, or a donor-advised fund for existential risk, or funding a GiveWell existential risk program, or conditioning donations based on some transparency milestones?

2) SIAI at the margin has significant positive expected impact on our chances of avoiding existential risks. VillageReach may very slightly and indirectly reduce existential risk by saving the lives of some of the global poor, increasing the population of poor countries, or making effective charity more prestigious, but this would be quite small in comparison and the recommendation wrong.

3) SIAI at the margin has a positive impact on the existential risk situation in the tiny region between zero and the impact of donations to VillageReach. This is a very unlikely scenario.

Now, if you were arguing against taking into account future generations, or for other values on which existential risk reduction is less important than current poverty and disease relief, VillageReach could be relevant, but in this context the quoted text is very peculiar.
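To put the trichotomy in one place as a calculation: the figures below are placeholders with no claim to accuracy; the only point is that case 3 requires SIAI's per-dollar effect on existential risk to land in a sliver just above zero, far narrower than the ranges that cases 1 and 2 cover.

```python
# Placeholder figures only, chosen to show the structure of the argument.
# Units: expected reduction in the probability of existential catastrophe
# per dollar donated.

dx_villagereach = 1e-15  # small, indirect effect (extra lives saved, prestige of effective giving)

def better_for_xrisk(dx_siai: float) -> str:
    """Which donation does more for existential risk under these placeholders?"""
    if dx_siai <= 0:
        return "neither (case 1: VillageReach is irrelevant to the argument)"
    if dx_siai > dx_villagereach:
        return "SIAI (case 2)"
    return "VillageReach (case 3: SIAI's effect falls in the sliver between 0 and 1e-15)"

for dx in (-1e-9, 1e-9, 5e-16):
    print(f"dx_siai = {dx:+.1e}: {better_for_xrisk(dx)}")
```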

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T21:42:57.000Z · LW(p) · GW(p)

Your points are fair, I have edited the top level post accordingly to eliminate reference to VillageReach and StopTB.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-21T17:16:24.316Z · LW(p) · GW(p)

SIAI does not presently exhibit high levels of transparency and accountability... For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI

I have difficulty taking this seriously. Someone else can respond to it.

I agree with what I interpret to be Dario's point above: that in evaluating charities which are not transparent and accountable, we should assume the worst.

Assuming that much of the worst isn't rational. It would be a convenient soldier for your argument, but it's not the odds to bet at. Also, you don't make clear what constitutes a sufficient level of transparency and accountability, though of course you will now carefully look over all of SIAI's activities directed at transparency and accountability, and decide that the needed level is somewhere above that.

You say you assume the worst, and that other people should act accordingly. Would you care to state "the worst", your betting odds on it, how much you're willing to bet, and what neutral third party you would accept as providing the verdict if they looked over SIAI's finances and told you it wasn't true? If you offer us enough free money, I'll vote for taking it.

I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like "And this is why if you're trying to minimize existential risk, you should support a charity that tries to stop tuberculosis" or "And this is where we're going to assume the worst possible case instead of the expected case and actually act that way", he'll just blithely keep going.

With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent, and I have every confidence that whatever it is we do, it will never be enough for someone who is, at that particular time, motivated to argue against SIAI.

Replies from: Jordan, multifoliaterose, Tyrrell_McAllister, XiXiDu
comment by Jordan · 2010-08-21T20:20:20.786Z · LW(p) · GW(p)

Normally I refrain from commenting about the tone of a comment or post, but the discussion here revolves partly around the public appearance of SIAI, so I'll say:

this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I've read or seen.

This isn't a comment about the content of your response, which I think has valid points (and which multifoliaterose has at least partly responded to).

Replies from: wedrifid, ciphergoth, Eliezer_Yudkowsky
comment by wedrifid · 2010-08-21T21:56:07.384Z · LW(p) · GW(p)

this comment has done more to persuade me to stop being a monthly donor to SIAI than anything else I've read or seen.

It is certainly Eliezer's responses and not multi's challenges which are the powerful influence here. Multi has effectively given Eliezer a platform from which to advertise the merits of SIAI as well as demonstrate that contrary to suspicions Eliezer is, in fact, able to handle situations in accordance to his own high standards of rationality despite the influences of his ego. This is not what I've seen recently. He has focussed on retaliation against multi at whatever weak points he can find and largely neglected to do what will win. Winning in this case would be demonstrating exactly why people ought to trust him to be able to achieve what he hopes to achieve (by which I mean 'influence' not 'guarantee' FAI protection of humanity.)

I want to see more of this:

With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent

and less of this:

I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like "And this is why if you're trying to minimize existential risk, you should support a charity that tries to stop tuberculosis" or "And this is where we're going to assume the worst possible case instead of the expected case and actually act that way", he'll just blithely keep going.

I'll leave aside ad hominem and note that tu quoque isn't always fallacious. Unfortunately in this case it is, in fact, important that Eliezer doesn't fall into the trap that he accuses multi of - deploying arguments as mere soldiers.

Replies from: Eliezer_Yudkowsky, Jordan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-21T23:26:10.909Z · LW(p) · GW(p)

This sort of conversation just makes me feel tired. I've had debates before about my personal psychology and feel like I've talked myself out about all of them. They never produced anything positive, and I feel that they were a bad sign for the whole mailing list they appeared on - I would be horrified to see LW go the way of SL4. The war is lost as soon as it starts - there is no winning move. I feel like I'm being held to an absurdly high standard, being judged as though I were trying to be the sort of person that people accuse me of thinking I am, that I'm somehow supposed to produce exactly the right mix of charming modesty while still arguing my way into enough funding for SIAI... it just makes me feel tired, and like I'm being held to a ridiculously high standard, and that it's impossible to satisfy people because the standard will keep going up, and like I'm being asked to solve PR problems that I never signed up for. I'll solve your math problems if I can, I'll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one, or better yet, why don't you try being perfect and see whether it's as easy as it sounds while you're handing out advice?

I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed. So let me state it plainly for you. I'll build a Friendly AI for you if I can. Anything else I can do is a bonus. If I say I can't do it, asking me again isn't likely to produce a different answer.

It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it. In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will. I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.

Replies from: wedrifid, Jordan, multifoliaterose, LucasSloan, Furcas
comment by wedrifid · 2010-08-22T00:27:02.191Z · LW(p) · GW(p)

I really appreciate this response. In fact, to mirror Jordan's pattern I'll say that this comment has done more to raise my confidence in SIAI than anything else in the recent context.

I'll solve your math problems if I can, I'll build Friendly AI for you if I can, if you think SIAI needs some kind of amazing PR person, give us enough money to hire one

I'm working on it, to within the limits of my own entrepreneurial ability and the costs of serving my own personal mission. Not that I would allocate such funds to a PR person. I would prefer to allocate it to research of the 'publish in traditional journals' kind. If I was in the business of giving advice I would give the same advice you have no doubt heard 1,000 times: the best thing that you personally could do for PR isn't to talk about SIAI but to get peer reviewed papers published. Even though academia is far from perfect, riddled with biases and perhaps inclined to have a certain resistance to your impingement it is still important.

or better yet, why don't you try being perfect and see whether it's as easy as it sounds while you're handing out advice?

Now, now, I think the 'give us the cash' helps you out rather a lot more than me being perfect. Mind you, me following tsuyoku naritai does rather overlap with the 'giving you cash'.

I have looked, and I have seen under the Sun, that to those who try to defend themselves, more and more attacks will be given. Like, if you try to defend yourself, people sense that as a vulnerability, and they know they can demand even more concessions from you. I tried to avoid that failure mode in my responses, and apparently failed.

You are right on all counts. I'll note that it perhaps didn't help that you felt it was a time to defend rather than a time to assert and convey. It certainly isn't necessary to respond to criticism directly. Sometimes it is better to just take the feedback into consideration and use anything of merit when working out your own strategy. (As well as things that aren't of merit but are still important because you have to win over even stupid people).

I'll build a Friendly AI for you if I can.

Thank you. I don't necessarily expect you to succeed because the task is damn hard, takes longer to do right than for someone else to fail, and we probably only have one shot to get it right. But you're shutting up to do the impossible. Even if the odds are against us, targeting focus at the one alternative that doesn't suck is the sane thing to do.

It was very clearly a mistake to have participated in this thread in the first place. It always is. Every single time. Other SIAI supporters who are better at that sort of thing can respond. I have to remember, now, that there are other people who can respond, and that there is no necessity for me to do it.

Yes.

In fact, someone really should have reminded me to shut up, and if it happens again, I hope someone will.

I will do so, since you have expressed willingness to hear it. That is an option I would much prefer to criticising any responses you make that I don't find satisfactory. You're trying to contribute to saving the goddam world and have years of preparation behind you in some areas that nobody has. You can free yourself up to get on with that while someone else explains how you can be useful.

I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.

The sentiment is good but perhaps you could have left off the reminder of the Roko incident. The chips may have fallen somewhat differently in these threads if the ghost of Nearly-Headless Roko wasn't looming in the background.

Once again, this was an encouraging reply. Thank you.

comment by Jordan · 2010-08-22T02:04:44.963Z · LW(p) · GW(p)

Yes, the standards will keep going up.

And, if you draw closer to your goal, the standards you're held to will dwarf what you see here. You're trying to build god, for christ's sake. As the end game approaches more and more people are going to be taking that prospect more and more seriously, and there will be a shit storm. You'd better believe that every last word you've written is going to be analysed and misconstrued, often by people with much less sympathy than us here.

It's a lot of pressure. I don't envy you. All I can offer you is: =(

comment by multifoliaterose · 2010-08-23T23:35:02.720Z · LW(p) · GW(p)

Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.

I think that the trouble arises from the fact that people (including myself) have taken you to be an official representative of SIAI. As long as SIAI makes it clear that your remarks do not reflect SIAI's positions, there should be no problem.

Replies from: wedrifid
comment by wedrifid · 2010-08-24T00:49:12.309Z · LW(p) · GW(p)

Much of what you say here sounds quite reasonable. Since many great scientists have lacked social grace, it seems to me that your PR difficulties have no bearing on your ability to do valuable research.

There is an interesting question. Does it make a difference if one of the core subjects of research (rationality) strongly suggests different actions be taken, and the core research goal (creating or influencing the creation of an FAI) requires particular standards in ethics and rationality? For FAI research, behaviours that reflect ethically relevant decision making and rational thinking under pressure matter.

If you do research into 'applied godhood' then you can be expected to be held to particularly high standards.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-24T04:03:52.194Z · LW(p) · GW(p)

Yes, these are good points.

What I was saying above is that if Eliezer wants to defer to other SIAI staff then we should seek justification from them rather than from him. Maybe they have good reasons for thinking that it's a good idea for him to do FAI research despite the issue that you mention.

Replies from: wedrifid
comment by wedrifid · 2010-08-24T04:17:07.094Z · LW(p) · GW(p)

I understand and I did vote your comment up. The point is relevant even if not absolutely so in this instance.

comment by LucasSloan · 2010-08-21T23:30:41.323Z · LW(p) · GW(p)

I wish I could pull a Roko and just delete all my comments in all these threads, but that would be impolite.

Is this now official LW slang?

Replies from: wedrifid
comment by wedrifid · 2010-08-22T01:27:50.282Z · LW(p) · GW(p)

Is this now official LW slang?

Only if we need official LW slang for "Have rigid boundaries and take decisive action in response to an irredeemable defection in a game of community involvement". I just mean to say it may be better to use a different slang term for "exit without leaving a trace" since for some "pull a Roko" would always prompt a feeling of regret. I wouldn't bring up Roko at all (particularly reference to the exit) because I want to leave the past in the past. I'm only commenting now because I don't want it to, as you say, become official LW slang.

comment by Furcas · 2010-08-22T00:02:07.605Z · LW(p) · GW(p)

I'm probably alone on this one, but "charming modesty" usually puts me off. It gives me the impression of someone slimy and manipulative, or of someone without the rigor of thought necessary to accomplish anything interesting.

comment by Jordan · 2010-08-21T23:02:36.805Z · LW(p) · GW(p)

Well said.

In the past I've seen Eliezer respond to criticism very well. His responses seemed to be in good faith, even when abrasive. I use this signal as a heuristic for evaluating experts in fields I know little about. I'm versed in the area of existential risk reduction well enough not to need this heuristic, but I'm not versed in the area of the effectiveness of SIAI.

Eliezer's recent responses have reduced my faith in SIAI, which, after all, is rooted almost solely in my impression of its members. This is a double stroke: my faith in Eliezer himself is reduced, and the reason for this is a public appearance which will likely prevent others from supporting SIAI, which is more evidence to me that SIAI won't succeed.

SIAI (and Eliezer) still has a lot of credibility in my eyes, but I will be moving away from heuristics and looking for more concrete evidence as I debate whether to continue to be a supporter.

comment by Paul Crowley (ciphergoth) · 2010-08-21T20:36:47.779Z · LW(p) · GW(p)

Tone matters, but would you really destroy the Earth because Eliezer was mean?

Replies from: wedrifid
comment by wedrifid · 2010-08-21T21:24:24.831Z · LW(p) · GW(p)

Tone matters, but would you really destroy the Earth because Eliezer was mean?

The issue seems to be not whether Eliezer personally behaves in an unpleasant manner but rather how Eliezer's responses influence predictions of just how much difference Eliezer will be able to make on the problem at hand. The implied reasoning Jordan makes is different to the one you suggest.

Replies from: Eliezer_Yudkowsky, ciphergoth, Jordan
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-21T23:33:28.896Z · LW(p) · GW(p)

BTW, when people start saying, not, "You offended me, personally" but "I'm worried about how other people will react", I usually take that as a cue to give up.

Replies from: wedrifid, XiXiDu
comment by wedrifid · 2010-08-22T00:36:53.781Z · LW(p) · GW(p)

This too is not the key point in question. Your reaction here can be expected to correlate with other reactions you would make in situations not necessarily related to PR. In particular it says something about potential responses to ego damaging information. I am sure you can see how that would be relevant to estimates of possible value (positive or negative) that you will contribute. I again disclaim that 'influences' does not mean 'drastically influences'.

Note that the reply in the parent does relate to other contexts somewhat more than this one.

comment by XiXiDu · 2010-08-22T08:59:23.893Z · LW(p) · GW(p)

Maybe because most people who talk to you know that they are not typical and also know that others wouldn't be as polite given your style of argumentation?

Replies from: Eneasz
comment by Eneasz · 2010-08-25T18:45:55.272Z · LW(p) · GW(p)

People who know they aren't typical should probably realize that their simulation of typical minds will often be wrong. A corollary to Vinge's Law, perhaps?

comment by Paul Crowley (ciphergoth) · 2010-08-21T23:08:38.305Z · LW(p) · GW(p)

That's still hard to see - that you could go from thinking that SIAI represents the most efficient form of altruistic giving to thinking that it no longer does, because Eliezer was rude to someone on LW. He's manifestly managed to persuade quite a few people of the importance of AI risk despite these disadvantages - I think you'd have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.

Replies from: wedrifid
comment by wedrifid · 2010-08-21T23:18:58.030Z · LW(p) · GW(p)

because Eliezer was rude to someone on LW

Rudeness isn't the issue* so much as what the rudeness is used instead of. A terrible response in a place where a good response should be possible is information that should have influence on evaluations.

* Except, I suppose, PR implications of behaviour in public having some effect, but that isn't something that Jordan's statement needs to rely on.

I think you'd have to go a long way to build a case that SIAI is significantly more likely to founder and fail on the basis of that one comment.

Obviously. But not every update needs to be an overwhelming one. Again, the argument you refute here is not one that Jordan made.

EDIT: I just saw Jordan's reply and saw that he did mean both these points but that he also considered the part that I had included only as a footnote.

comment by Jordan · 2010-08-21T22:43:46.690Z · LW(p) · GW(p)

Yes, thank you for clarifying that.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-21T22:56:29.593Z · LW(p) · GW(p)

Can you please be more precise about what you saw as the problem?

Replies from: Jordan
comment by Jordan · 2010-08-21T23:47:01.343Z · LW(p) · GW(p)

Yes. Your first line

I have difficulty taking this seriously. Someone else can respond to it.

was devoid of explicit information. It was purely negative.

Implicitly, I assume you meant that existential risk reduction is so important that no other 'normal' charity can compare in cost effectiveness (utility bought per dollar). While I agree that existential risk reduction is insanely important, it doesn't follow that SIAI is a good charity to donate to. SIAI may actually be hurting the cause (in one way, by hurting public opinion), and this is one of multi's points. Your implicit statement seems to me to be a rebuke of this point sans evidence, amounting to simply saying nuh-uh.

You say

you don't make clear what constitutes a sufficient level of transparency and accountability

which is a good point. But then go on to say

though of course you will now carefully look over all of SIAI's activities directed at transparency and accountability, and decide that the needed level is somewhere above that.

essentially accusing multi of a crime in rationality before he commits it. On a site devoted to rationality that is a serious accusation. It's understood on this site that there are a million ways to fail in rationality, and that we all will fail, at one point or another, hence we rely on each other to point out failures and even potential future failures. Your accusation goes beyond giving a friendly warning to prevent a bias. It's an attack.

Your comment about taking bets and putting money on the line is a common theme around here and OB and other similar forums/blogs. It's neutral in tone to me (although I suspect negative in tone to some outside the community), but I find it distracting in the midst of a serious reply to serious objections. I want a debate, not a parlor trick to see if someone is really committed to an idea they are proposing. This is minor, it mostly extends the tone of the rest of the reply. In a different, friendlier context, I wouldn't take note of it.

Finally,

I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like "And this is why if you're trying to minimize existential risk, you should support a charity that tries to stop tuberculosis" or "And this is where we're going to assume the worst possible case instead of the expected case and actually act that way", he'll just blithely keep going.

wedrifid has already commented on this point in this thread. What really jumps out at me here is the first part

my overall impression here is of someone who manages to talk mostly LW language most of the time

This just seems so utterly cliquey. "Hey, you almost fit in around here, you almost talk like we talk, but not quite. I can see the difference, and it's exactly that little difference that sets you apart from us and makes you wrong." The word "manages" seems especially negative in this context.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-22T00:20:25.028Z · LW(p) · GW(p)

Thank you for your specifications, and I'll try to keep them in mind!

comment by multifoliaterose · 2010-08-21T17:53:19.338Z · LW(p) · GW(p)

I'll reply in more detail later, but for now I'll just say:

With Michael Vassar in charge, SIAI has become more transparent, and will keep on doing things meant to make it more transparent

This sounds great

I have every confidence that whatever it is we do, it will never be enough for someone who is, at that particular time, motivated to argue against SIAI.

I'm not one of those people; I would be happy to donate to SIAI and encourage others to do so if I perceive a significant change from the status quo and no new superior alternative emerges. I think that if SIAI were well constituted, donating to it would be much more cost effective than donating to VillageReach.

I would be thrilled to see the changes that I would like to see take place. More on precisely what I'm looking for to follow.

Replies from: Torben
comment by Torben · 2010-08-21T18:54:32.552Z · LW(p) · GW(p)

I think that if SIAI were well constituted, donating to it would be much more cost effective than VillageReach.

For most realistic interest rates, this statement would have made it more rational to put your previous traditional aid donation into a bank account for a year to see if your bet had come out -- and then donate to SIAI.
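To make the arithmetic behind this concrete, here is a minimal sketch; the interest rate, probability, and effectiveness ratio are all placeholder assumptions, not anyone's actual estimates.

```python
# Illustrative placeholders only.
donation = 1000.0      # dollars available to give this year
interest_rate = 0.03   # annual return if the money sits in a bank account

# Option A: donate to the traditional aid charity now.
# Option B: wait a year; if the bet on SIAI comes out, donate the grown sum
# to it, otherwise donate to the traditional charity after all.
value_ratio = 10.0      # assumed per-dollar value of SIAI relative to the
                        # traditional charity, conditional on the bet coming out
p_bet_comes_out = 0.5   # assumed probability that it does

value_now = donation * 1.0  # measured in traditional-charity units
value_wait = donation * (1 + interest_rate) * (
    p_bet_comes_out * value_ratio + (1 - p_bet_comes_out) * 1.0
)

print(f"Donate now:  {value_now:.0f}")
print(f"Wait a year: {value_wait:.0f}")
```

Under any positive interest rate the waiting option at least matches donating now in these units, which is the sense in which it looks more rational here; the reply below points to what the sketch leaves out, namely the signal that donating now sends via GiveWell.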

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T19:19:30.697Z · LW(p) · GW(p)

I donate now, via GiveWell's recommendations, to signal that I care about transparency and accountability, and thereby to incentivize charities to improve.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-21T19:49:50.158Z · LW(p) · GW(p)

You would have done that by putting the money in a DAF and announcing your policy, rather than providing incentives in what you say is the wrong field. You're signaling that you will use money wastefully (in its direct effects) by your own standards rather than withhold it until a good enough recipient emerges by your standards.

Replies from: wedrifid, gwern, utilitymonster, Tyrrell_McAllister
comment by wedrifid · 2010-08-21T21:37:07.215Z · LW(p) · GW(p)

You're signaling that you will use money wastefully (in its direct effects) by your own standards

This is an effective signal. It sounds like multi considers changing the current standard of existential risk charities from ineffective to effective far more important than boosting the income of said class of charities. This being the case, donating elsewhere shows that there is a potential supply of free money for any existential risk focussed charity that meets the standards multi considers important.

As well as influencing the world in the short term and satisfying his desire to signal and feel generous, donating to GiveWell effectively shows that multi is willing to put his money where his mouth is. He knows that if he wasn't currently donating, Eliezer would use that as another excuse to avoid the issue, since Eliezer has previously declared that to be his policy.

comment by gwern · 2010-08-21T23:08:56.291Z · LW(p) · GW(p)

DAF?

Replies from: CarlShulman
comment by CarlShulman · 2010-08-22T09:17:21.726Z · LW(p) · GW(p)

Donor-Advised Fund, a vehicle which allows one to place money into an investment account where it can only be used to make charitable donations. It allows you to commit yourself to giving (you can't take the money back to spend on beer) even if you don't know what's best now, or can allow you to accumulate donations over time so that you can get leverage with the simultaneous disbursal of a big chunk. Here's the Fidelity DAF site.

Replies from: wedrifid, katydee
comment by wedrifid · 2010-08-22T11:05:00.811Z · LW(p) · GW(p)

I like the site!

The limits may be a little frustrating for some potential users... you need to start with US$5,000. (And pay a minimum of $60 per year in fees).

comment by katydee · 2010-08-22T11:38:14.292Z · LW(p) · GW(p)

What exactly do you mean by leverage?

Replies from: CarlShulman
comment by CarlShulman · 2010-08-22T12:28:19.692Z · LW(p) · GW(p)

When you're giving a large donation all at once, the transaction costs of meeting your demands are smaller relative to the gains, and the transaction costs of doing nonstandard donations (e.g. teaming up with others to create a whole new program or organization) are more manageable.

Replies from: katydee
comment by katydee · 2010-08-22T21:30:27.357Z · LW(p) · GW(p)

I don't get it, why would you want to make demands? Isn't the point of donating that you think others are better positioned to accomplish goal X than you are, so they're able to make more efficient use of the money?

Replies from: CarlShulman, juliawise
comment by CarlShulman · 2010-08-23T01:55:14.263Z · LW(p) · GW(p)

E.g. demands for work to be done providing information to you, or to favor a specific project (although the latter is more murky with fungibility issues).

comment by juliawise · 2011-07-25T19:56:20.324Z · LW(p) · GW(p)

Charities sometimes favor the work they believe to be popular with donors over the work they believe would be more useful. Specifically, I'm thinking of monitoring and evaluation. By designating money for unpopular but useful tasks, you encourage them to fund those tasks better. Before doing this, I would talk to the organizations you're considering funding and find out what unsexy projects they would like to fund more. Then decide if you think they're worth funding.

comment by utilitymonster · 2010-08-23T15:45:57.441Z · LW(p) · GW(p)

This doesn't sound like a bad idea. Could someone give reasons to think that donations to SIAI now would be better than this?

Replies from: CarlShulman, CarlShulman
comment by CarlShulman · 2010-08-23T19:34:38.616Z · LW(p) · GW(p)

In your specific case, given what you have said about your epistemic state, I would think that you subjectively-ought to do something like this (a commitment mechanism, but not necessarily with a commitment to reducing existential risk given your normative uncertainty). I'll have more to say about the general analysis in 48 hours or more, following a long flight from Australia.

comment by CarlShulman · 2010-08-23T18:58:56.231Z · LW(p) · GW(p)

Does "this" mean DAF, or signalling through waste?

comment by Tyrrell_McAllister · 2010-08-21T23:38:37.658Z · LW(p) · GW(p)

You would have done that by putting the money in a DAF and announcing your policy, rather than providing incentives in what you say is the wrong field. You're signaling that you will use money wastefully (in its direct effects) by your own standards rather than withhold it until a good enough recipient emerges by your standards.

One problem with this is that skeptics like Eliezer would assume that multifoliaterose will just move the goal posts when it comes time to pay.

Replies from: wedrifid
comment by wedrifid · 2010-08-22T00:43:11.294Z · LW(p) · GW(p)

Fortunately, a specific public commitment does at least help keep things grounded in more than rhetoric. Even assuming no money eventuates, there is value in clearly meeting the challenge to the satisfaction of observers.

comment by Tyrrell_McAllister · 2010-08-21T23:35:06.956Z · LW(p) · GW(p)

I have to say that my overall impression here is of someone who manages to talk mostly LW language most of the time, but when his argument requires a step that just completely fails to make sense, like "And this is why if you're trying to minimize existential risk, you should support a charity that tries to stop tuberculosis" or "And this is where we're going to assume the worst possible case instead of the expected case and actually act that way", he'll just blithely keep going.

Are you reading multifoliaterose carefully? He has made neither of these claims.

He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.

And he hasn't advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case in the context of the amount of money that he would donate. That belief may not be realistic, but it's not the belief that you impute to him.

Replies from: Airedale, Furcas
comment by Airedale · 2010-08-21T23:59:20.547Z · LW(p) · GW(p)

He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.

I hesitate to point to language from an earlier version of the post, since multifoliaterose has taken out this language, but given that EY was responding to the earlier version, it seems fair. The original post included the following language:

I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI, even for donors like utilitymonster who take astronomical waste seriously and believe in the ideas expressed in the cluster of blog posts linked under Shut Up and multiply.

(emphasis added)

I believe there originally may have been some links there, but I don't have them anymore. Nonetheless, if I correctly understand the references to utilitymonster, astronomical waste, and Shut Up and multiply, I do think that multifoliaterose was arguing that even the sorts of donors most interested in minimizing existential risk should still give to those other charities. Does that reading seem wrong?

Replies from: Tyrrell_McAllister, CarlShulman
comment by Tyrrell_McAllister · 2010-08-22T00:19:07.936Z · LW(p) · GW(p)

Does that reading seem wrong?

Here is my reading: Even in the case of utilitymonster,

  • his/her concern about tuberculosis (say) in the near term is high enough, and

  • SIAI's chances of lowering existential risk by a sufficient amount are low enough,

to imply that utilitymonster would get more expected utility from donating to StopTB than from donating to SIAI.

Also, multi isn't denying that utilitymonster's money would be better spent in some third way that directly pertains to existential risk. (However, such a denial may be implicit in multi's own decision to give to GiveWell's charities, depending on why he does it.)

Replies from: Airedale
comment by Airedale · 2010-08-22T01:03:37.512Z · LW(p) · GW(p)

I don't know that we disagree very much, but I don’t want to lose sight of the original issue as to whether EY’s characterization accurately reflected what multifoliaterose was saying. I think we may agree that it takes an extra step in interpreting multifoliaterose’s post to get to EY’s characterization, and that there may be sufficient ambiguity in the original post such that not everyone would take that step:

Also, multi isn't denying that utilitymonster's money would be better spent in some third way that directly pertains to existential risk. (However, such a denial may be implicit in multi's own decision to give to GiveWell's charities, depending on why he does it.)

I did implicitly read such a denial into the original post. As Carl noted:

The invocation of VillageReach in addressing those aggregative utilitarians concerned about astronomical waste here seems baffling to me.

For me, the references to the Givewell-approved charities and the lack of references to alternate existential risk reducing charities like FHI seemed to suggest that multifoliaterose was implicitly denying the existence of a third alternative. Perhaps EY read the post similarly.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-08-22T02:59:23.290Z · LW(p) · GW(p)

For me, the references to the Givewell-approved charities and the lack of references to alternate existential risk reducing charities like FHI seemed to suggest that multifoliaterose was implicitly denying the existence of a third alternative. Perhaps EY read the post similarly.

I agree that this is the most probable meaning. The only other relevant consideration I know of is multi's statement upstream that he uses GiveWell in part to encourage transparency in other charities. Maybe he sees this as a way to encourage existential-risk charities to do better, making them more likely to succeed.

comment by CarlShulman · 2010-08-22T06:37:39.309Z · LW(p) · GW(p)

Well, since multifoliaterose himself has been giving all of his charitable contributions to VillageReach, it's a sensible reading.

comment by Furcas · 2010-08-21T23:51:05.917Z · LW(p) · GW(p)

And he hasn't advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case

That's not what Multi said. He said we should assume the worst. You only need to assume something when you know that belief would be useful even though you don't believe it. So he clearly doesn't believe the worst (or if he does, he hasn't said so).

He said that supporting a tuberculosis charity is better than donating to SIAI, not that supporting a tuberculosis charity is the best way to fight existential risk.

He also said that he "believe[s] that reducing existential risk is ultimately more important than developing world aid." How do you go from there to supporting StopTB over SIAI, unless you believe the worst?

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-08-22T00:05:40.049Z · LW(p) · GW(p)

And he hasn't advocated using something other than the expected case when evaluating a non-transparent charity. What you may infer is that he believes that the worst case does not significantly differ from the expected case

That's not what Multi said. He said we should assume the worst. You only need to assume something when you know that belief would be useful even though you don't believe it. So he clearly doesn't believe the worst (or if he does, he hasn't said so).

I don't use the word "assume" in the way that you describe, and I would be surprised if multi were using it that way.

He also said that he "believe[s] that reducing existential risk is ultimately more important than developing world aid." How do you go from there to supporting StopTB over SIAI, unless you believe the worst?

Here I think we more-or-less agree. On my reading, multi is saying that, right now, the probability that SIAI is a money pit is high enough to outweigh the good that SIAI would do if it weren't a money pit, relative to a tuberculosis charity. But multi is also saying that this probability assignment is unstable, so that some reasonable amount of evidence would lead him to radically reassign his probabilities.

comment by XiXiDu · 2010-08-21T17:37:02.759Z · LW(p) · GW(p)

I have difficulty taking this seriously. Someone else can respond to it.

Transparency needs to be reevaluated if we are talking about a small group of people responsible for giving the thing that will control the fate of the universe, and all entities in it, its plan of action. That is, if you were able to establish that this is (1) necessary, (2) possible, and (3) that you are the right people for the job.

This lack of transparency could make people think that (1) this is bullshit, (2) it's impossible to do anyway, (3) you might apply a CEV of the SIAI rather than humanity, and (4) given (3), the utility payoff is higher from donating to GiveWell's top ranked charities.

Replies from: Furcas
comment by Furcas · 2010-08-21T17:41:21.811Z · LW(p) · GW(p)

3) you might apply a CEV of the SIAI rather than humanity

I think that would be best, actually.

comment by PhilGoetz · 2010-08-23T18:16:19.326Z · LW(p) · GW(p)

Your objections focus on EY. SIAI > EY. Therefore, SIAI's transparency <> EY's transparency. SIAI's openness to criticism > EY's openness to criticism.

Go to SIAI's website, and you can find a list of their recent accomplishments. You can also get a detailed breakdown of funding by grant, and make an earmarked donation to a specific grant. That's pretty transparent.

OTOH, EY is non-transparent. Deliberately so; and he appears to intend to continue to be so. And you can't find a breakdown on SIAI's website of what fraction of their donations go into the "Eliezer Yudkowsky black ops" part of their budget. It would be nice to know that.

If you're worried about what EY is doing and don't want to give him money, you can earmark money for other purposes. Of course, money is fungible, so that capability has limited value.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-24T09:09:36.262Z · LW(p) · GW(p)

Phil, I like your comments. Some points in response:

•My concern is not really that the fraction of SIAI donations which go to EY is too large, but rather the (significant) possibility that SIAI is attracting donations and volunteers on the premise that EY's unjustified claims about the state of the world are true. I think that it's very important that SIAI make it clear precisely how much it claims to be true.

•A large part of the issue is that there's ambiguity as to whether EY is properly regarded as a spokesperson for SIAI. As Airedale says

...I wanted to address one particular public relations problem, or at least, public relations issue, that is evident from your criticism so far – that is, there is an (understandable) perception that many observers have that SIAI and Eliezer are essentially synonymous. In the past, this perception may have been largely accurate. I don’t think that it currently holds true, but it definitely continues to persist in many people’s minds.

As I said in response to EY here, as long as SIAI makes it clear that EY is not an official representative of SIAI and points to official representatives of SIAI who accurately reflect SIAI's position, there shouldn't be a problem.

comment by timtyler · 2010-08-21T13:21:18.819Z · LW(p) · GW(p)

This recentish page was a step towards transparency:

http://singinst.org/grants/challenge

Transparency is generally desirable - but it has some costs.

Replies from: magfrump, multifoliaterose
comment by magfrump · 2010-08-21T20:51:04.523Z · LW(p) · GW(p)

I will donate $100 to SIAI after the publication of this paper.

Replies from: wedrifid
comment by wedrifid · 2010-08-22T01:18:15.457Z · LW(p) · GW(p)

To make things easier (and more enjoyable) for you to follow through with this and also to make the words 'real' for observers and the SIAI would you consider using some form of escrow service?

Pre-commitments are fun.

Replies from: magfrump, jsalvatier
comment by magfrump · 2010-08-22T23:38:31.367Z · LW(p) · GW(p)

Yes, assuming it isn't difficult, and that I can delay until the school year starts (not until September 23rd for me) and I have a reliable income.

comment by jsalvatier · 2010-08-22T07:59:49.050Z · LW(p) · GW(p)

I would do this. I think GiveWell had a similar kind of setup at one point, having people commit to donating to their recommendation.

comment by multifoliaterose · 2010-08-21T16:25:13.312Z · LW(p) · GW(p)

Yes, I approve of the page that you link :-).

Transparency is generally desirable - but it has some costs.

The point of my post is to argue in favor of the idea that the costs are worth it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-21T17:19:26.277Z · LW(p) · GW(p)

How much will you donate to cover the costs? It's always easy to spend other people's money.

Replies from: multifoliaterose, multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-08-21T18:32:33.559Z · LW(p) · GW(p)

Okay, thinking it over for the last hour, I now have a concrete statement to make about my willingness to donate to SIAI. I promise to donate $2000 to SIAI in a year's time if by that time SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.

I will urge GiveWell to evaluate existential risk charities with a view toward making this condition a fair one. If after a year's time GiveWell has not yet evaluated SIAI, my offer will still stand.

[Edit: Slightly rephrased, removed condition involving quotes which had been taken out of context.]

Replies from: Jasen, jimrandomh, timtyler
comment by Jasen · 2010-08-21T21:00:48.320Z · LW(p) · GW(p)

Jonah,

Thanks for expressing an interest in donating to SIAI.

(a) SIAI has secured a 2-star rating from GiveWell for donors who are interested in existential risk.

I assure you that we are very interested in getting the GiveWell stamp of approval. Michael Vassar and Anna Salamon have corresponded with Holden Karnofsky on the matter and we're trying to figure out the best way to proceed.

If it were just a matter of SIAI becoming more transparent and producing a larger number of clear outputs I would say that it is only a matter of time. As it stands, GiveWell does not know how to objectively evaluate activities focused on existential risk reduction. For that matter, neither do we, at least not directly. We don't know of any way to tell what percentage of worlds that branch off from this one go on to flourish and how many go on to die. If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?

We think that UFAI is the largest known existential risk and that the most complete solution - FAI - addresses all other known risks (as well as the goals of every other charitable cause) as a special case. I don't mean to imply that AI is the only risk worth addressing at the moment, but it certainly seems to us to be the best value on the margin. We are working to make the construction of UFAI less likely through outreach (conferences like the Summit, academic publications, blog posts like The Sequences, popular books and personal communication) and make the construction of FAI more likely through direct work on FAI theory and the identification and recruitment of more people capable of working on FAI. We've met and worked with several promising candidates in the past few months. We'll be informing interested folk about our specific accomplishments in our new monthly newsletter, the June/July issue of which was sent out a few weeks ago. You can sign up here.

(b) You publically apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at very least need to be qualified in that way.

It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate. Eliezer makes it very clear, over and over, that he is speaking about the value of contributions at the margin. As others have already pointed out, it should not be surprising that we think the best way to "help save the human race" is to contribute to FAI being built before UFAI. If we thought there was another higher-value project then we would be working on that. Really, we would. Everyone at SIAI is an aspiring rationalist first and singularitarian second.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T21:29:31.731Z · LW(p) · GW(p)

If GiveWell decides not to endorse charities focused on existential risk reduction as a general policy, there is little we can do about it. Would you consider an alternative set of criteria if this turns out to be the case?

Yes, I would consider an alternative set of criteria if this turns out to be the case.

I have long felt that GiveWell places too much emphasis on demonstrated impact and believe that in doing so GiveWell may be missing some of the highest expected value opportunities for donors.

It would have been a good idea for you to watch the videos yourself before assuming that XiXiDu's summaries (not actual quotes, despite the quotation marks that surrounded them) were accurate.

I was not sure that XiXiDu's summaries were accurate which is why I added a disclaimer to my original remark. I have edited my original comment accordingly.

I apologize to Eliezer for inadvertently publicizing a misinterpretation of his views.

comment by jimrandomh · 2010-08-21T18:43:32.809Z · LW(p) · GW(p)

(b) You publically apologize for and qualify your statements quoted by XiXiDu here. I believe that these statements are very bad for public relations. Even if true, they are only true at the margin and so at very least need to be qualified in that way.

(Note: I have not had the chance to verify that XiXiDu is quoting you correctly because I have not had access to online video for the past few weeks - condition (b) is given on the assumption that the quotes are accurate)

It is always wrong to demand the retraction of a quote which you have not seen in context.

Replies from: multifoliaterose, multifoliaterose
comment by multifoliaterose · 2010-08-21T21:30:04.052Z · LW(p) · GW(p)

Thanks, I have edited my comment accordingly.

comment by multifoliaterose · 2010-08-21T19:23:47.777Z · LW(p) · GW(p)

As I said, condition (b) is given based on an assumption which may be wrong. In any case, for public relations purposes, I want SIAI to very clearly indicate that it does not request arbitrarily large amounts of funding from donors.

Replies from: JGWeissman
comment by JGWeissman · 2010-08-21T19:51:19.115Z · LW(p) · GW(p)

I want SIAI to very clearly indicate that it does not request arbitrarily large amounts of funding from donors.

In one of the videos XiXiDu cites as reference, Eliezer predicts that funding at the level of a billion dollars would be counterproductive, because it would attract the wrong kind of attention.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T21:30:29.263Z · LW(p) · GW(p)

Thanks for pointing this out. I edited my comment accordingly.

comment by timtyler · 2010-08-21T19:13:19.184Z · LW(p) · GW(p)

Statements misquoted by XiXiDu - it looks like!

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-21T21:29:42.841Z · LW(p) · GW(p)

Thanks for pointing this out.

comment by multifoliaterose · 2010-08-21T21:34:04.853Z · LW(p) · GW(p)

Sorry for attaching a misrepresentation of your view to you based on descriptions of videos which I had not seen! I have edited my other comment accordingly.

comment by multifoliaterose · 2010-08-21T17:35:24.362Z · LW(p) · GW(p)

I donated 10% of my annual (graduate student) income to VillageReach a few weeks ago. As I say in my post, this is not because I have special attachment to international aid.

I would be willing to donate to SIAI if SIAI could convince me that it would use the money well. At present I don't even know how much money SIAI receives in donations a year, much less how it's used and whether the organization has room for more funding.

I believe that there are others like me and that in the long run exhibiting transparency would allow SIAI to attract more than enough extra donations to cover the costs of transparency. Note that GiveWell leveraged 1 million dollars last year and that this amount may be increasing exponentially (as GiveWell is still quite young).

Replies from: Liron
comment by Liron · 2010-08-21T18:17:57.123Z · LW(p) · GW(p)

At present I don't even know how much money SIAI receives in donations a year, much less how it's used and whether the organization has room for more funding.

I would also like to see SIAI post a description of its finances and leadership structure.

Replies from: Airedale
comment by Airedale · 2010-08-21T18:32:53.474Z · LW(p) · GW(p)

I agree it would be good if more info on finances was readily available. There are tax returns (although I think the most recent is 2008) available on Guidestar (with free registration). But as for leadership structure, is this link the sort of thing you had in mind or were you looking for an actual org chart or something?

Replies from: Morendil, Liron
comment by Morendil · 2010-09-01T10:48:20.667Z · LW(p) · GW(p)

Having run a small non-profit operation for a few years now, the standard of transparency I now like is simply publishing our General Ledger to the Web every year.

What's nice about it: it's a) feasible (our accounts are in Xero and once you've figured out the export it's a breeze), b) the ultimate in transparency. We still do summary reports to give people an idea of what's happened with the money, but for anyone who complains or for some other reason wants the raw data, I can just point at the GL.

comment by Liron · 2010-08-21T23:15:15.999Z · LW(p) · GW(p)

OK, the leadership structure info is satisfactory.

comment by XiXiDu · 2010-08-21T16:59:45.528Z · LW(p) · GW(p)

I've suspected for a long time that the movement around EY might be a sophisticated scam to live off donations of nonconformists...

Before someone brings it up: I further said, "but that's just another crazy conspiracy theory". I wouldn't say something like this and actually mean it without being reasonably confident. And I'm not; I'm just being a troll sometimes. Although other people will inevitably come to this conclusion. You just have to read this post, see the massive amount of writings about rationality, and the little cherry on the cake that is AI going FOOM. Other people have called a positive Singularity the rapture of the nerds, for obvious reasons.

So yes, I said this, and you people failed to give an adequate response. Of course you don't think you did -- I haven't read your precious sequences, after all -- but don't expect others to read them if you cannot provide a summary without telling people to read hundreds of blog posts, most of which are likely not even related to the actual criticism. Lucky I'm on your side, someone who's just being a troll. But if you try to crush people like PZ Myers the same way, i.e. "we are the LW community, far above your rationality standards and those of your friends", you won't win. Yeah, maybe you'll succeed without those people, without the academics and the mainstream. But I wouldn't bet on it; I'd try to improve my attitude towards the trolls and lesser rationalists out there. Learn a lesson from Kurzweil about being polite, intelligible and concise in the face of arrogant bosh like the outpourings of PZ Myers and the like.

And if you really think you'll win alone, well then, don't expect the criticism and mockery to decrease. There will be people who'll look at you and say: ah yes, a whole bunch of crummy anecdotes, academic grandstanding and incredibly boring logic arguments to wade through. And that's supposed to be the single bright shining hope for the redemption of humanity's collective intellect? They first need to figure out how to distill their subject matter and make it more palatable for the average person who really doesn't care about being part of the Bayesian in-crowd.

Replies from: pjeby, wedrifid
comment by pjeby · 2010-08-21T17:45:49.121Z · LW(p) · GW(p)

They first need to figure out how to distill their subject matter and make it more palatable for the average person who really doesn't care about being part of the Bayesian in-crowd.

There's always HP:MOR. ;-)

comment by wedrifid · 2010-08-21T22:00:03.480Z · LW(p) · GW(p)

So yes, I said this, and you people failed to give an adequate response. Of course you don't think you did

Please don't generalise. I certainly can't be accused of thinking we succeeded in supplying an adequate response. It's not you against lesswrong. It is you against "Oh $#!%, oh $#!%, we're all going to die! What can we do to survive and thrive?"

comment by timtyler · 2010-08-21T13:32:52.416Z · LW(p) · GW(p)

I must say that, in fact, much of the nonprofit sector fits incredibly better into Prof. Hanson’s view of charity as “wasteful signaling” than into the traditional view of charity as helping.

Charity is largely about signalling. Often, signalling has to be expensive to be credible. The "traditional view of charity" seems like naive stupidity. That is not necessarily something to lament. It is great that people want to signal their goodness, wealth, and other virtues via making the world a better place! Surely it is best if everyone involved understands what is going on in this area - rather than living in denial of human nature.

Replies from: zero_call
comment by zero_call · 2010-08-21T16:20:02.369Z · LW(p) · GW(p)

I would argue that charity is just plain good, and you don't need to take something simple and kind and turn it into an inconclusive exercise in societal interpretation.

Replies from: Torben
comment by Torben · 2010-08-21T18:47:10.821Z · LW(p) · GW(p)

Are you familiar with the Hansonian view of signaling?

comment by Larks · 2010-08-21T18:03:55.068Z · LW(p) · GW(p)

SIAI does not presently exhibit high levels of transparency and accountability... For this reason together with the concerns which I express about Existential Risk and Public Relations, I believe that at present GiveWell's top ranked charities VillageReach and StopTB are better choices than SIAI

  • Suppose SIAI were a thousand times less accountable than VillageReach.
  • Suppose this made SIAI a million times less effective than it could be.
  • Suppose that even the most efficient Existential Risk charity could only reduce P(uFAI|AGI) by 10^-9.
  • Suppose the odds of an AI singularity or 'foom' were only 10^-9.
  • Suppose a negative singularity only set mankind back by 100 years, rather than paperclipping the entire light cone and destroying all human value altogether.*

Even then the expected number of lives saved by SIAI is ~10^28.

  • It's patently obvious that SIAI has an annual income of less than $10^6.

  • Suppose the marginal dollar is worth 10^3 times less than the average dollar.

Even then a yearly donation of $1 saves an expected 10^18 lives (see the sketch below).

A yearly donation of £1 to VillageReach saves 100,000,000,000,000,000,000 fewer people.

*Sorry Clippy, but multifoliaterose is damaging you here; SIAI is a lot more amenable to negotiation than anyone else.
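
To make the structure of this Fermi calculation easier to follow, here is a minimal sketch in Python. The number of future lives at stake is not stated in the comment, so the base figure below is an assumption (a value in the range used in astronomical-waste arguments); with it, the per-dollar result lands within an order of magnitude of the 10^18 above, so treat the outputs as illustrating the shape of the argument rather than reproducing Larks's exact totals.

```python
# Sketch of the multiplicative structure of the comment above. All of the
# 10^-k factors are the ones stated in the comment; FUTURE_LIVES_AT_STAKE is
# an assumed figure, since the comment does not say which base it uses.

FUTURE_LIVES_AT_STAKE = 1e52   # assumed astronomical-waste-style estimate
P_FOOM                = 1e-9   # "the odds of an AI singularity or 'foom'"
RISK_REDUCTION        = 1e-9   # reduction in P(uFAI|AGI) by the best charity
INEFFICIENCY          = 1e-6   # SIAI a million times less effective than it could be

expected_lives_saved = (FUTURE_LIVES_AT_STAKE * P_FOOM
                        * RISK_REDUCTION * INEFFICIENCY)

ANNUAL_BUDGET   = 1e6          # "annual income of less than $10^6"
MARGINAL_FACTOR = 1e3          # marginal dollar 10^3 times less valuable than average

lives_per_marginal_dollar = expected_lives_saved / (ANNUAL_BUDGET * MARGINAL_FACTOR)

print(f"expected lives saved: {expected_lives_saved:.0e}")             # ~1e+28
print(f"lives per marginal dollar: {lives_per_marginal_dollar:.0e}")   # ~1e+19
```

Written out this way, it is easy to see that the conclusion is driven almost entirely by the assumed size of the future, which is the feature the Pascal's Mugging replies below take issue with.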

Replies from: rwallace, cousin_it, timtyler, CarlShulman, ata, EStokes
comment by rwallace · 2010-08-22T03:41:24.223Z · LW(p) · GW(p)

A problem with Pascal's Mugging arguments is that once you commit yourself to taking seriously very unlikely events (because they are multiplied by huge potential utilities), if you want to be consistent, you must take into account all potentially relevant unlikely events, not just the ones that point in your desired direction.

To be sure, you can come up with a story in which SIAI with probability epsilon makes a key positive difference, for bignum expected lives saved. But by the same token you can come up with stories in which SIAI with probability epsilon makes a key negative difference (e.g. by convincing people to abandon fruitful lines of research for fruitless ones), for bignum expected lives lost. Similarly, you can come up with stories in which even a small amount of resources spent elsewhere, with probability epsilon makes a key positive difference (e.g. a child saved from death by potentially curable disease, may grow up to make a critical scientific breakthrough or play a role in preserving world peace), for bignum expected lives saved.

Intuition would have us reject Pascal's Mugging, but when you think it through in full detail, the logical conclusion is that we should... reject Pascal's Mugging. It does actually reduce to normality.

comment by cousin_it · 2010-08-21T18:14:24.665Z · LW(p) · GW(p)

Wow. Do you really think this sort of argument can turn people to SIAI, rather than against any cause that uses tiny probabilities of vast utilities to justify itself?

Replies from: Nick_Tarleton, Larks
comment by Nick_Tarleton · 2010-08-21T20:18:52.680Z · LW(p) · GW(p)

Seconded, tentatively. I'm afraid that all arguments of this form (for x-risk reduction, Pascal's Wager, etc.) can't avoid being rejected out of hand by most people, due to their insatiability: buying even a little bit into such an argument seems to compel a person to spend arbitrary amounts of resources on the improbable thing, and open them to harsh criticism for holding back even a little. That said, a few sincere consequentialists actually will react positively to such arguments, so maybe making them on LW is worthwhile on balance.

comment by Larks · 2010-08-21T18:17:49.936Z · LW(p) · GW(p)

Multifoliaterose said his result held even for donors who took Astronomical Waste seriously. This seems unlikely to be the case.

Edit: I didn't vote you down, but what? SIAI is an Existential Risk charity: the point is to save the entire human race, at low probability. Of course the expected value is going to be calculated by multiplying a tiny probability by an enormous value!

Replies from: cousin_it
comment by cousin_it · 2010-08-21T18:57:15.373Z · LW(p) · GW(p)

This isn't true for all existential risks. For example, fears about nuclear war or global warming don't rely on such tiny probabilities. But discussions of many other risks remind me of this xkcd.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-21T19:02:07.306Z · LW(p) · GW(p)

The usual consequentialist case for charity to reduce risks of nuclear war or catastrophic global warming feedbacks does rely on tiny probabilities of donations making a difference. Likewise for voting and for many kinds of scientific and medical research charity.

Edit: not as tiny as in Larks's comment, although the numbers there are incredibly low.

Replies from: steven0461, cousin_it
comment by steven0461 · 2010-08-21T19:34:52.165Z · LW(p) · GW(p)

If the Pascal's mugging issue is with exerting a small force on a big thing and therefore having a small probability of succeeding, I don't think that's even a coherent objection; in a chaotic universe, anything you do may save or destroy individual people and human civilization by redrawing from the distribution, and shifting the distribution of the number of saved lives up by one through e.g. aid or buying someone cryo doesn't seem fundamentally different from e.g. shifting the distribution of election outcomes. This is more clearly true if you believe in a multiverse such as MWI, but I don't think it requires that.

ETA: if your distribution on people killed is uniform for say 6 through 10, then definitely saving one person and having a 1 in 5 chance of turning a 10-death situation into a 5-death situation are the same thing except for maybe the identity of the victims, right?
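
A quick check of the numbers in that example, with the uniform distribution and the 1-in-5 chance taken directly from the comment:

```python
# Baseline: number of deaths uniform over {6, 7, 8, 9, 10}.
baseline = [6, 7, 8, 9, 10]
print(sum(baseline) / len(baseline))      # 8.0 expected deaths

# Intervention A: definitely save one person in every outcome.
save_one = [d - 1 for d in baseline]
print(sum(save_one) / len(save_one))      # 7.0 expected deaths

# Intervention B: with probability 1/5 (i.e. in the 10-death outcome),
# turn 10 deaths into 5; the other outcomes are unchanged.
chance_fix = [6, 7, 8, 9, 5]
print(sum(chance_fix) / len(chance_fix))  # 7.0 expected deaths
```

Both interventions lower expected deaths from 8 to 7; they differ only in which outcomes, and whose deaths, are affected, which is steven0461's point.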

comment by cousin_it · 2010-08-21T19:05:46.069Z · LW(p) · GW(p)

That's a very different value of "tiny" than the one in Larks's comment.

comment by timtyler · 2010-08-21T19:22:56.752Z · LW(p) · GW(p)

This is just Pascal's mugging again.

If you get to make up the numbers then you can make things turn out however you like.

Replies from: Larks
comment by Larks · 2010-08-21T20:38:50.164Z · LW(p) · GW(p)

Not if you deliberately pick numbers orders of magnitude more unfavourable to your point of view than you think is actually the case.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-21T20:59:27.175Z · LW(p) · GW(p)

But you didn't do that. A calculation involves both quantitative and qualitative/structural assumptions, and you slanted all of your qualitative choices toward making your point.

You used a discount rate of 0. (That is, a hypothetical life a hundred generations from now deserves exactly as much of my consideration as people alive today). That totally discredits your calculation.

You used a definition of 'saving a life' which suggests that we ought to pay women to get pregnant, have children, and then kill the infant so they are free to get pregnant again. Count that as one life 'saved'.

You didn't provide estimates of the opportunity cost in 'human lives' of diverting resources toward colonizing the universe. Seems to me those costs could be enormous - in lives at the time of diversion, not in distant-future lives.

Replies from: Larks, JGWeissman
comment by Larks · 2010-08-21T21:24:50.330Z · LW(p) · GW(p)

Apart from the issue JGWeissman brings up, even if you supposed that the lives saved all occurred 300 years in the future, a reasonable discount rate would still only give you a couple of orders of magnitude.

For example, 1.05^300 = 2.3 * 10^6

Which is nowhere near enough.

Edit: there are discount rates that would give you the result you want, but it still seems pretty plausible that, assuming Astronomical Waste, SIAI isn't a bad bet.
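
For concreteness, here is a small sketch of how the accumulated discount factor over 300 years varies with the annual rate; the 1% and 10% rates are illustrative additions, and the 10^18 figure is the per-dollar estimate from Larks's earlier comment:

```python
import math

# Accumulated discount factor over 300 years at a few annual rates.
for rate in (0.01, 0.05, 0.10):
    print(f"{rate:.0%}: divide by {(1 + rate) ** 300:.2e}")
# 1%:  ~2.0e+01
# 5%:  ~2.3e+06   (the 2.3 * 10^6 figure above)
# 10%: ~2.6e+12

# Annual rate needed to discount 10^18 expected lives down to order 1:
required = math.exp(math.log(1e18) / 300) - 1
print(f"rate needed to cancel 10^18 over 300 years: {required:.1%}")  # ~14.8%
```

On these numbers, single-digit discount rates trim only a handful of orders of magnitude, while cancelling the full 10^18 takes something like a 15% annual rate; which rate is reasonable is what the rest of this exchange argues about.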

Replies from: Perplexed
comment by Perplexed · 2010-08-22T01:19:23.041Z · LW(p) · GW(p)

even if you supposed that the lives saved all occurred 300 years in the future, a reasonable discount rate would still only give you a couple of orders of magnitude.

Sounds about right to me.

For example, 1.05^300 = 2.3 * 10^6

Huh? You think 6 is "a couple"? I wish I had your sex life!

But 5% per annum is far too high. It discounts the next generation to only a quarter of the current generation. Way too steep.

Which is nowhere near enough.

Double huh? And huh? some more. You wrote:

a yearly donation of $1 saves an expected 10^18 lives

You imagine (conservatively) that there are a potential 10^18 lives to save 300 years into the future? Boy, I really wish I had your sex life.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-22T06:10:32.048Z · LW(p) · GW(p)

If people 300 years from now are whole brain emulations or AIs, then they could reproduce like software with high population densities.

Replies from: timtyler
comment by timtyler · 2010-08-22T06:33:39.105Z · LW(p) · GW(p)

Alternatively, if the human-size brains were all sucked into the matrix long ago, there may well be about 1 person per planet.

comment by JGWeissman · 2010-08-21T21:12:43.395Z · LW(p) · GW(p)

You used a discount rate of 0. (That is, a hypothetical life a hundred generations from now deserves exactly as much of my consideration as people alive today). That totally discredits your calculation.

What makes a life at one time worth more than a life at a different time?

Replies from: Perplexed
comment by Perplexed · 2010-08-21T21:26:12.239Z · LW(p) · GW(p)
  1. Distance.
  2. Tradition in the field of economics.
  3. Mathematical well-behavedness may demand this if the universal expansion is not slowing down.
  4. Reciprocity. Future folks aren't concerned about my wishes, so why should I be concerned about theirs?
  5. What makes a life at one time worth the same as a life at a different time?

In a sense, these are flip answers, because I am not really a utilitarian to begin with. And my rejection of utilitarianism starts by asking how it is possible to sum up utilities for different people. It is adding apples and oranges. There is no natural exchange rate. Utilities are like subjective probabilities of different people - it might make sense to compute a weighted average, but how do you justify your weighting scheme?

I suspect that discussing this topic carefully would take too much of my time from other responsibilities, but I hope this sketch has at least given you some things to think about.

Replies from: Tyrrell_McAllister, timtyler
comment by Tyrrell_McAllister · 2010-08-22T03:33:16.446Z · LW(p) · GW(p)

And my rejection of utilitarianism starts by asking how it is possible to sum up utilities for different people. It is adding apples and oranges. There is no natural exchange rate.

Utilities for different people don't come into it. The question is, how much to you now is a contemporaneous person worth, versus someone in a hundred generations? (Or did you mean utilities of different people?)

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-22T03:47:37.403Z · LW(p) · GW(p)

You have me confused now. I like apples. Joe likes oranges. Mary wishes to maximize total utility. Should she plant an orange tree or an apple tree?

I agree with Kant (perhaps for the only time). Asking how much Joe is worth to Mary is verboten.

Replies from: Tyrrell_McAllister, JGWeissman
comment by Tyrrell_McAllister · 2010-08-22T04:03:41.310Z · LW(p) · GW(p)

You have me confused now. I like apples. Joe likes oranges. Mary wishes to maximize total utility. Should she plant an orange tree or an apple tree?

She should consider the world as it would be if she planted the apple tree, and the world as it would be if she planted the orange tree, and see which one is better as she measures value. (Another option is to plant neither, of course.)

The idea isn't to add up and maximize everyone's utility. I agree with you that that makes no sense. The point is, when an agent makes a decision, that agent has to evaluate alternatives, and those alternatives are going to be weighed according to how they score under the agent's utility function. But utility isn't just selfish profit. I can value that there be happiness even if I don't ever get to know about it.

I agree with Kant (perhaps for the only time). Asking how much Joe is worth to Mary is verboten.

  1. We don't always have the luxury of not choosing. What should Mary do in a trolley problem where she has to direct a train at either you or Joe (or else you both die)? That said . . .

  2. "Worth" needs to be understood in an expansive sense. Kant is probably right that Mary shouldn't think, "Now, from whom can I extract more profit for my selfish ends? That's the one I'll save." The things she ought to consider are probably the ones we're accustomed to thinking of as "selfless". But she can't evade making a decision.

Replies from: Perplexed
comment by Perplexed · 2010-08-22T04:33:23.835Z · LW(p) · GW(p)

The idea isn't to add up and maximize everyone's utility. I agree with you that that makes no sense. The point is, when an agent makes a decision, that agent has to evaluate alternatives, and those alternatives are going to be weighed according to how they score under the agent's utility function. But utility isn't just selfish profit. I can value that there be happiness even if I don't ever get to know about it.

Ok, I think we can agree to agree. Revealed preference doesn't prevent me from incorporating utilitarianish snippets of other people's utility judgments into my own preferences. I am allowed to be benevolent. But simple math and logic prevent me from doing it all-out, the way that Bentham suggested.

Now, which one of us has to tell Eliezer?

Replies from: timtyler
comment by timtyler · 2010-08-22T06:44:32.885Z · LW(p) · GW(p)

But simple math and logic prevent me from doing it all-out, the way that Bentham suggested.

What simple math and logic? Utilitarianism seems pretty silly to me too - but adding different people's utilities together is hardly a show-stopping problem.

The problem I see with utilitarianism is that it is a distant ideal. Ideals of moral behaviour normally work best when they act like a carrot which is slightly out of reach. Utilitarianism conflicts with people's basic drives. It turns everyone into a sinner.

If you preach utilitarianism, people just think you are trying to manipulate them into giving away all their stuff. Usually that is true - promoters of utilitarianism are usually poorer folk who are after the rich people's stuff - and have found a moral philosophy that helps them get at it.

Politicians often say they will tax the rich and give the money to the poor. This is because they want the poor people's votes. Utilitarianism is the ethical equivalent of that. Leaders sometimes publicly promote such policies if they want the support of the masses in order to gain power.

comment by JGWeissman · 2010-08-22T03:58:41.540Z · LW(p) · GW(p)

No, what is forbidden is to ask how much Joe is worth in an absolute sense, independent of an agent like Mary.

Utility is not a fundamental property of the world, it is perceived by agents with preferences.

Replies from: Perplexed
comment by Perplexed · 2010-08-22T04:12:40.854Z · LW(p) · GW(p)

This is rapidly becoming surreal.

Forbidden is not a fundamental property of the world, it is imposed by theorists with agendas.

Samuelson, Wald, von Neumann, Savage, and the other founders of "revealed preference" forbid us to ask how much Joe (or anything else) is worth, independent of an agent with preferences, such as Mary.

Immanuel Kant, and anyone else who takes "The categorical imperative" at all seriously, forbids us to ask what Joe is worth to Mary, though we may ask what Joe's cat Maru is worth to Mary.

I knew I shouldn't have gotten involved in this thread.

Replies from: timtyler
comment by timtyler · 2010-08-22T07:00:05.101Z · LW(p) · GW(p)

Immanuel Kant says we that can't ask what Joe is worth to Mary?

So what? Why should anyone heed that advice? It is silly.

comment by timtyler · 2010-08-22T06:53:22.061Z · LW(p) · GW(p)

Utilities of different people, yes. He's complaining that they don't add up.

comment by timtyler · 2010-08-21T21:48:29.486Z · LW(p) · GW(p)

For 2, perhaps consider:

http://lesswrong.com/lw/n2/against_discount_rates/

Replies from: Perplexed
comment by Perplexed · 2010-08-22T01:32:13.254Z · LW(p) · GW(p)

Considered. Not convinced. If that was intended as an argument, then EY was having a very bad day.

He is welcome to his opinion but he is not welcome to substitute his for mine.

The ending was particularly bizarre. It sounded like he was saying that treasury bills don't pay enough interest to make up for the risk that the US may not be here 300 years from now. But we should, for example, consider the projected enjoyment of people we imagine visiting our nature preserves 500 years from now, as if their enjoyment were as important as our own, not discounting at all for the risk that they may not even exist.

Replies from: Nick_Tarleton, timtyler
comment by Nick_Tarleton · 2010-08-22T03:00:36.350Z · LW(p) · GW(p)

But we should, for example, consider the projected enjoyment of people we imagine visiting our nature preserves 500 years from now, as if their enjoyment were as important as our own, not discounting at all for the risk that they may not even exist.

Eliezer doesn't disagree: as he says more than once, he's talking about pure preferences, intrinsic values. Other risks do need to be incorporated, but it seems better to do so directly, rather than through a discounting heuristic. Larks seems to implicitly be doing this with his P(AGI) = 10^-9.

comment by timtyler · 2010-08-22T06:18:24.142Z · LW(p) · GW(p)

Time travel, the past "still existing" - and utilitarianism? I don't buy any of that either - but in the context of artificial intelligence, I do agree that building discounting functions into the agent's ultimate values looks like bad news.

Discounting functions arise because agents don't know about the future - and can't predict or control it very well. However, the extent to which they can't predict or control it is a function of the circumstances and their own capabilities. If you wire temporal discounting into the ultimate preferences of super-Deep Blue - then it can't ever self-improve to push its prediction horizon further out as it gets more computing power! You are unnecessarily building limitations into it. Better to have no temporal discounting wired in - and let the machine itself figure out to what extent it can predict and control the future - and so figure out the relative value of the present.

comment by timtyler · 2010-08-21T20:53:23.014Z · LW(p) · GW(p)

Well, in that case, your math is all wrong!

The SIAI is actually having a negative effect, by distracting conscientious researchers with all kinds of ridiculous and irrelevant moral issues - which results in the unscrupulous Korean FKU corporation developing machine intelligence first - when otherwise they would have been beaten to it. Their bot promises humans a cure for cancer - but then it gets powerful, eats the internet, and goes on to do all manner of dubious things.

If only you had had the sense to give to VillageReach!

comment by CarlShulman · 2010-08-21T19:12:45.584Z · LW(p) · GW(p)

With such low numbers, the incidental effects of VillageReach on existential risk (via poverty, population, favoring the efficient philanthropy culture, etc.) would dominate. If reasonably favorable, they would mean VillageReach would be a better choice. However, the numbers in the comment are incredibly low.

comment by ata · 2010-08-21T18:22:43.740Z · LW(p) · GW(p)

Even then a yearly donation of $1 can expect to save 10^18 lives.

We need to be careful with the terminology here. "Saving expected lives" and "expecting to save lives" are pretty different activities.

Replies from: Larks
comment by Larks · 2010-08-21T18:27:20.091Z · LW(p) · GW(p)

Thanks & corrected.

comment by allenwang · 2010-08-24T16:36:36.183Z · LW(p) · GW(p)

I really want to put up a post that is highly relevant to this topic. I've been working with a couple of friends on an idea to alter personal incentives to solve the kinds of public good provision problems that charities and other organizations face, and I want to get some feedback from this community. Is someone with enough points to post willing to read over it and post for me? Or can I get some upvotes? (I know that this might be a bit rude, but I really want to get this out there ASAP).

Thanks a bunch, Allen Wang

Replies from: katydee, whpearson
comment by katydee · 2010-08-24T20:33:28.108Z · LW(p) · GW(p)

I can also take a look if you want, but yeah, no promises.

comment by whpearson · 2010-08-24T20:11:13.844Z · LW(p) · GW(p)

I'll have a look, no promises.