Notes from Online Optimal Philanthropy Meetup: 12-10-09

post by Giles · 2012-10-13T05:36:29.297Z

Here are my notes from the Optimal Philanthropy online meeting. Things in square brackets are my later additions and corrections. Let me know if there are factual errors or if I've incorrectly captured the drift of what you were saying.

Nick:

  • existential risk argument goes as follows:
    • there’s a small chance that people will be around for a really long time
    • if so, increasing the probability of that happening by even a small amount is really good [toy sketch after this list]
    • therefore, you should focus on this rather than on things that have short-run impacts
    • focus only on things that reduce xrisk
  • Similar property: make things a little bit better for a really long time. Might be easier to do that.
    • name for these: trajectory change. Different from just economic growth – changes default trajectory along which things go
    • interesting for same reason as xrisk reduction but people haven’t talked much about them.
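
[A toy expected-value version of the xrisk argument – my later addition, with entirely made-up numbers rather than anything presented on the call:]

```python
# Back-of-envelope EV comparison; every number is an illustrative placeholder.

people_if_long_future = 1e16   # people who could ever live if things go well
delta_p = 1e-9                 # tiny increase in the probability of that future
ev_xrisk = people_if_long_future * delta_p   # expected people helped: 1e7

lives_short_run = 1e4          # what a comparable short-run donation might achieve

print(ev_xrisk, lives_short_run)   # 1e7 vs 1e4 - xrisk dominates on these inputs
```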

[sorry, didn’t note who said this]: education as an example. Particular kind of education?

Nick:

  • not really thought through this idea yet. [This idea is at the same stage] as if you’d just stumbled across xrisk concept
  • another example is just changing people’s values for the better
    • ordinary – e.g. campaigning for particular political party

Scott:

  • calls this “static” [I assume this means as in “noise” rather than as in “stationary”]
  • whatever I’m doing – predictions tend to break down after a month or so; very difficult to predict long-term results
  • should focus on things that can be predicted, build better models

Nick:

  • things that look short run could have long run impacts
  • e.g. we think of buying bednets for people as having short-term impact – a short-run impact today, no impact in 100 years’ time
  • but if you think about all the downstream consequences, it’s actually making a small long-run impact

Scott agrees.

  • both good and bad long-term consequences are included in static
  • in expectation it’s a wash [toy illustration below]
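
[A toy illustration of the “wash” claim – my later addition: if long-run consequences are symmetric noise around zero, they cancel in expectation, leaving only the predictable short-run term:]

```python
import random

random.seed(0)

def total_impact(short_run):
    # predictable short-run benefit plus an unpredictable long-run term
    # centred on zero (the "static")
    return short_run + random.gauss(0, 100)

samples = [total_impact(short_run=5) for _ in range(200_000)]
print(sum(samples) / len(samples))   # ~5: the long-run noise washes out
```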

Jeff:

  • what makes you think bednets have good long term consequences?
  • currently thinks of bednet as short term good, long term wash
  • benefits of having more people, having more healthy kids?
  • really hard to predict long term value

Nick:

  • Four classes of thing
    • things that reduce xrisk
    • things that speed up progress
    • things that make trajectory change
    • only short run effect
  • a bednet is not a trajectory change.
  • buying a bednet increases economic growth – a long-term effect
  • economic growth has been compounding for a long time.
  • Plausible story for why this happens – people get richer, specialize, help each other out. Overall a positive process, and a process that ripples far into the future. [toy sketch below]
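
[A toy version of the compounding point – my later addition; the growth rate and the size of the nudge are made up:]

```python
# A one-off nudge to the level of the economy persists and compounds.

growth = 0.02          # hypothetical 2% annual growth rate
years = 100

baseline = 1.0
boosted = 1.0 + 1e-6   # a bednet-sized nudge to today's output

gap = (boosted - baseline) * (1 + growth) ** years
print(gap)             # the initial nudge is ~7x larger after a century
```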

Jeff:

  • really hard to get good predictions. Maybe you could figure it out
  • even evaluating short-term effects of individual charity now is hard.
  • long term effects incredibly difficult

Nick:

  • when comparing within categories, it makes sense to ignore these things.
  • e.g. two things that contribute to economic growth – just ask which has better predictable impacts.
  • but if you’re comparing something that reduces xrisk with something that increases economic growth [then you can’t ignore the long-run differences]

Jeff:

  • funding education versus funding health.
  • wouldn’t surprise him if someone got a handle on their long-term effects and they were very different

Nick:

  • expects one of them to contribute to trajectory change more
  • hard to believe one of them has a very different effect on long term economic growth
  • e.g. education appeared on a par with health interventions, but expecting 1000x more economic growth [from one of them] seems wild.

Jeff:

  • [if optimizing for] economic growth – would do it very differently from doing health interventions

Nick:

  • can’t use the economic growth argument to promote AMF [“AMF is the best charity to…” – quote cut off in my notes]
  • more plausible candidates: political [work] or meta-research (some GiveWell thing)

At this point everyone on the call introduces themselves.

  • Giles Edkins: Toronto LW, THINK
  • Jeff Kaufman: married to Julia Wise. Both into earning to give stuff
  • Nick Beckstead: PhD in Philosophy at Rutgers. Population ethics and xrisks. Involved in Centre for Effective Altruism.
  • Nisan Stiennon: Berkeley. PhD in Math. Teaches for CFAR. Confused about whether to focus on xrisk or other kinds of philanthropy e.g. AMF
  • Peter Hurford: Ohio. Political Science & Psych. Joined GWWC – giving 10% of meager college income. New to smart giving stuff, on smart giving subreddit, on LW.
  • Raymond Arnold: does a lot of work for THINK. LW for past two years. In the general boat of xrisk & GiveWell stuff – the only things he has a lot of info about
  • Scott Dickey: LWer, gives 25% of income to charity, trying to make that go as far as he can – SingInst, GiveWell. Following, reading them, watching where things are going
  • Boris Yakubchik: president of GWWC Rutgers. Giving 50% of income, high school math teacher. Don’t know what to expect from this hangout – needs to run off!

Ray:

  • assumes everyone has read GW, Nick Bostrom’s xrisk stuff
  • anyone read anything else?

Nick:

  • a lot of my dissertation is not empirical, instead it’s moral philosophy
  • has been thinking on the side about the comparison between xrisk and other kinds of things, talking to people
  • feeling is there’s not a huge amount of writing [on this]
  • e.g. economics of climate change – most similar to xrisk. Not a whole lot out there that’s helpful

Ray:

  • earlier this summer (interrupted by other things), wanted a breadth-first look at potentially promising high-impact stuff
  • 3 books on global poverty, each with a different thesis
    • The End of Poverty – Jeffrey Sachs
    • The White Man’s Burden – William Easterly
    • The Bottom Billion – [Paul Collier]
  • would be good for people in our community to start compiling summaries of this information

Giles:

  • GW prioritizes things that are easier to understand – targets beginners

Ray:

  • And where there’s good information

Jeff:

  • GW is reasonably honest: they’re not just choosing charities that are accessible – the people involved are genuinely convinced these are the best ones. [Jeff] talked to Holden on Skype 8 months ago.

Ray:

  • Only recently with GW Labs have they made an effort to look at harder questions, not just looking at charities with easy-to-obtain information

Giles:

  • Any info on how GW Labs is going?

Ray:

  • $100k to Cochrane. Not a whole lot of money! Still a question of what else is out there

Peter:

  • Thought they were going to move away [from traditional charity recommendations] and pursue GW Labs as their main funding strategy

Scott:

  • Thought they were going to integrate things into one effort

Peter:

  • target big funders
  • away from health related issues

Scott:

  • [Should we] give to GiveWell itself?

Ray:

  • [If we give to GiveWell], a small amount of the funding goes to GiveWell itself, the rest goes to their favorite charity

Peter:

  • can give to Clear Fund – fund for GW activities.
  • But they strongly discourage that – they want you to give to their top charities instead.

Giles:

  • can we talk to GW and find out why?

Jeff:

  • if you give to charity, tell them it’s because of GW

Ray:

  • has anyone talked to non-LWers about optimal philanthropy?

Jeff:

  • talked to college friends, not LWers. Some people think it makes sense – they are the people most similar to LWers: computer programmers and other engineers. The job makes them a bit more money than they know what to do with. “If I started giving away more than 20% of my money it wouldn’t hurt.” Helps people to think about it dispassionately.
  • Other people are not inclined to this approach – they don’t like quantifying, [think you] shouldn’t be prioritising one thing over another

Scott:

  • talks to people at work, a highly diverse set of backgrounds. The vast majority doesn’t like the analytic side – they call me “cold”. A work fundraising drive literally had a picture of a crying puppy. Very emotional, very community oriented, didn’t like analysis. Could be because I’m a bad arguer.

Jeff:

  • tried really, really hard to have discussions with people about the fact that they aren’t giving to the best charity. People used to get mad at him. Now he usually agrees to disagree, and isn’t losing any more friends

Ray:

  • requested a charity donation as a Christmas gift – think about who you’re giving to. Conversation over Christmas dinner – worst idea ever! Grandmother most likely to donate, donated to local causes. She prioritizes giving locally not because she believes it’s the most effective but because she prioritises her own community [did I get this right?]
  • nowadays asks what being a good person means to someone before having that conversation
  • has been having conversation at work not as “you should be doing it too” but “here’s this thing I’m doing”
  • has laid groundwork.
  • one person has said she now gives to more global causes [is this right?] as a result of what I was saying
  • long-term non-confrontational strategy

Scott:

  • another thing GW is doing right is targeting high-wealth individuals
  • spent a lot of time talking to my Dad about it. After 10 conversations he says “fine, I’ll give to what you think are more effective causes” – gave $100, and that was it forever.

Jeff:

  • convincing someone to give $10000 a year is not something to be sneezed at. Need to accept both the “give effectively” and “give substantially” ideas.

Jeff:

  • not many people [in effective altruism movement] older than I am (late 30s or higher)

Scott:

  • Bill Gates and Warren Buffett started their own movement [the Giving Pledge] – get rich people to give 50% of their resources
  • Jaan Tallinn – level 2 – looking at “malaria” instead of “human disease”

Jeff:

  • B&M Gates Foundation does a lot of stuff with malaria because it’s effective – same reason as GiveWell.

Scott agrees.

  • goes back to Nick’s thing of smaller things making larger impacts over time
  • [they’re] not trying to change the human condition
  • what inspired Scott: “what can you do today such that people will remember your name 50000 years from now?”

Ray:

  • if the goal is to be remembered, you won’t get there by donating – rather by founding an organization or inventing something.

Scott:

  • Or Hitler, but would rather not be on that list.

Ray:

  • new ways of thinking – Darwin, Galileo.
  • optimizing giving is a different question
  • even “change the way governments work” – won’t be remembered for donating to that. But that’s OK.
  • agrees with sentiment though

Jeff:

  • certainly an inspiring sentiment

Ray:

  • what is the thing you’d want to be remembered for in 50000 years, even if your name isn’t remembered?

Jeff:

  • some really important things don’t get remembered, e.g. eradicating polio and smallpox. Huge amounts of work; people don’t think about it because it’s no longer a problem.

Scott:

  • amend it to “should” be remembered for

Ray:

  • something I’ve been trying to do is put together a list of inspiring things that humans have done that we should care about.
  • evidence that things are getting better
  • ways of dealing with stress – global poverty is really depressing
  • “we should do more stuff like that”
  • polio & smallpox on list

Scott:

  • Green Revolution
  • one guy [Norman Borlaug] saved or created billions of lives

Nick:

  • book: “Millions Saved” [Ruth Levine?] – success stories in philanthropy, in particular global health
  • Steven Pinker: “The Better Angels of Our Nature”, chapter 1 – things that went surprisingly well in the last few decades
  • “Scientists Greater than Einstein” [Billy Woodward?]

Ray:

  • 80000hours blog
  • have read it occasionally
  • first few times, changed how I looked at that side of things
  • since then, haven’t found a whole lot of content

Scott:

  • stopped reading 80k
  • they deferred their choosing of charities to GW
  • [I had] already read up on income maximization

Jeff:

  • am subscribed to 80k blog
  • haven’t read anything there that’s a wholly new idea I’m excited about
  • mostly been summaries of things
  • remaining questions are really hard. People won’t make good progress sitting down for an afternoon reading [writing?] a blog post

Scott:

  • waiting for Nick Bostrom to come up with a magnum opus?

Jeff:

  • happy with what I’m doing so far
  • important not to get attached to really high-leverage factors where it’s only worth it if the impact is too huge to think about
  • good to realize how much good you can do with a GWWC-style 10% given to GW charities
  • and try and do better than that!

Ray:

  • Jeff: you and Julia are the highest-percentage-giving couple that I know of. On the giving-a-lot side…

Jeff:

  • keeping a donations page, very numbersy
  • Bolder Giving talks about us as if it’s still 2009 and we still have changes to make; [their profile is] titled “Julia Wise” but should be both of us…

Ray:

  • question is “are they living in Manhattan”?

Jeff:

  • rent in Boston is still high
  • studio apartment costing $1000 a month
  • even if we’d be spending twice as much, it wouldn’t have had a huge effect
  • actually, yes it would have a huge effect! [but could still give away large percentage]

Ray:

  • and do they have kids?

Jeff:

  • yes, planning to
  • yearly expenses will change by $10000 for each kid we have [note I may have written down Jeff’s numbers wrong]
  • living at parents house, pay them rent but it’s low
  • own house with kids: $40000 on ourselves, giving at least that much away, still pretty good
  • looking forward to say “yes, you can have kids and a house and still give away lots!”

Giles:

  • [my question wasn’t noted down]

Jeff:

  • yes, of course!
  • the more normal and conventional our life seems, the more convincing it is in a non-logical way
  • “oh, I could live a life like that. Not so different from the life I was thinking of living”.

Nick:

  • a few people had a question about how to compare xrisk reduction and GW charities
  • are there specific aspects of that that people want to know about?

Giles:

  • how do we know xrisk orgs are having any impact at all? How do we know they’re not making things worse?

Scott:

  • Yudkowsky’s scale of ambition. He went into a Hacker News post and restructured the entire conversation: where the highest you could previously go was “bigger than Apple”, he put that at 2. His 10 was “you can hack the computer the universe is running on”

Ray:

  • expected value calculation – don’t need to know they [xrisk orgs] are doing something meaningful, 1% chance is good enough. But how can we even be sure of that?
  • without some calculation, we’re running on intuition.
  • intuitively it sounds plausible that donating to an xrisk org is higher impact than giving to AMF
  • the whole point of GW is that our intuitions aren’t necessarily reliable
  • what are the numbers for xrisk? [toy calculation after this list]
  • one thing that’s bothering him – when people say xrisk, they think only of SingInst and FHI
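
[The kind of back-of-envelope comparison being asked for here – my later addition; every number below is a placeholder, and the point is the shape of the calculation, not the result:]

```python
donation = 10_000                   # USD

# Measurable intervention, GiveWell-style
amf_cost_per_life = 2500            # rough placeholder figure
lives_amf = donation / amf_cost_per_life             # ~4 lives

# Speculative xrisk intervention
p_org_matters = 0.01                # the "1% chance is good enough" premise
lives_at_stake = 7e9                # current population; ignores the far future
xrisk_reduction_per_dollar = 1e-15  # completely made up - this is the hard number

lives_xrisk = donation * xrisk_reduction_per_dollar * p_org_matters * lives_at_stake

print(lives_amf, lives_xrisk)       # the answer hinges entirely on the made-up input
```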

Scott:

  • Methuselah Foundation
  • Lifeboat Foundation (even they sound amateurish)

Ray:

  • SingInst is the most reputable of the various xrisk orgs [what about FHI?], but there are still a lot of reasons to be skeptical of SingInst
  • changing a bit since Luke Muehlhauser took the reins, moving in a more reliable direction

[These didn't all come up in the discussion, but I'll give lukeprog's list of x-risk orgs: FHI, FutureTech, the Singularity Institute, Leverage Research, the Global Catastrophic Risk Institute, CSER]

Scott:

  • Growing pains

Ray:

  • OK, I can see SingInst is improving. Not at the point where I’d be willing to donate to them yet
  • has set up criteria: if they meet these criteria, I’d consider them reliable

Giles:

  • [my comment wasn’t noted down – presumably mentioning Holden’s post on SingInst, given the replies]

Ray:

  • part of what influenced me

Nick:

  • do you wish there was more like the Holden post?

Scott:

  • Holden should target any other organization in the field [i.e. should address other xrisk orgs besides SingInst]

Jeff:

  • Holden’s post on SingInst was mostly an organisational critique – looking at what they’re doing, which things seem to be working and which not.
  • even if they passed all of those things, I would still be unsure it would make more sense [to donate] than an easily measurable intervention [such as AMF]
  • SingInst still doing incredibly hard to evaluate work

Ray:

  • asteroid impact – already solved/diminishing returns point
  • AI risk
  • nuclear proliferation
  • bioterrorism
  • CDC

[question came up on what other xrisk mitigation efforts might there be that we don’t know about, in particular AI related]

Scott:

  • Google, Microsoft Research

Jeff:

  • Google mostly public with google.org stuff
  • what secret but massively positive stuff would there be?

Scott:

  • Google cars could save 30000 lives a year [in the US] – not too big. Bigger if rolled out to the world
  • at that point, Google is AI company and should be focusing on AI safety

Jeff:

  • people in Google mostly think that kind of thing is silly?
  • very far from machine learning systems that are scary
  • so hard to get them to do anything at all intelligent
  • hard for people to think about possibility of recursively self-improving anything.
  • not my impression that they are excited about AI safety.

Scott:

  • not saying that they are, but that they should be
  • they keep a lot of it locked down
  • we don’t know what’s happening in China or Korea; Japan had a push for AI recently

Jeff:

  • wouldn’t be surprised if no-one at Google knows more about AI safety than what you can read on LW

Ray:

  • one thing that’s potentially high impact is donating to scientific research
  • there’s a thing called Petridish – a Kickstarter for research projects
  • not optimized for “these are the projects that will help the world the most”
  • wonder if there’s a way we can push this in that direction?

Jeff:

  • amazing how animal-focused they are
  • is that specific to biology, or is that just what people like to fund?

Giles:

  • find people good at communicating the effective altruism message and put them in touch with orgs [such as Petridish] that need it?

Scott:

  • finding marketers

Ray:

  • some of that is in progress at Leverage. At pre-planning stage

Scott:

  • campaigns like (RED) or the pink ribbon generate millions of dollars. They don’t seem to be having much impact, but can we co-opt or create something like that?

Ray:

  • am definitely looking for info on xrisk in a more concise form
  • “I don’t have a degree in computer science or AI research, wouldn’t be able to analyze at Eliezer level”
  • so a layman can feel like they’re making an informed decision – having the important information but not too much information

17 comments

comment by Andy_McKenzie · 2012-10-13T19:07:05.120Z

Thanks so much for writing up these notes.

if the goal is to be remembered, you won’t get there by donating – rather by founding an organization or inventing something.

If humanity survives, to the extent that they care about the past, we should expect future societies to be very smart about figuring out precisely who it was that made the large differences, consequentially speaking. Making money and donating/evangelizing very intelligently could very well get you remembered.

some really important things don’t get remembered, e.g. eradicating polio and smallpox. Huge amounts of work; people don’t think about it because it’s no longer a problem.

Just because it is not remembered much now doesn't mean it won't be remembered in the future. Tesla died penniless in 1943 and was mostly forgotten after his death. It wasn't until the 1990s that there was a resurgence of interest in his work.

Or, as another example, consider Gregor Mendel. It took ~40 years before society recognized his accomplishments.

On another note, I'm surprised people didn't mention better, faster methods for making vaccines w/r/t reducing xrisk. E.g., see Bostrom here.

comment by amcknight · 2012-10-14T02:42:30.268Z

Another group I recommend investigating that is working on x-risk reduction is the Global Catastrophic Risk Institute, which was founded in 2011 and has been ramping up substantially over the last few months. As far as I can tell they are attempting to fill a role that is different from SIAI and FHI by connecting with existing think tanks that are already thinking about GCR-related subject matter. Check out their research page.

comment by Giles · 2012-10-14T14:21:35.271Z

Thanks - lukeprog gave us a list of xrisk orgs a while back, including GCRI, so I've pasted that into the minutes also (though I've made it clear we didn't discuss them all).

comment by Randaly · 2012-10-14T04:08:30.736Z

Lifeboat Foundation (even they sound amateurish)

Serious questions have been raised regarding their leadership.

comment by Raemon · 2012-10-14T04:04:28.810Z

Wow, these are some pretty serious minutes. Thanks Giles!

comment by carey · 2012-10-20T10:00:30.632Z

When will there be another online optimal philanthropy meetup?

comment by Peter Wildeford (peter_hurford) · 2012-10-15T03:10:32.022Z

Did anyone ever end up contacting GiveWell about donating to The Clear Fund / promoting their expansion? What I was quoting regarding the future of GiveWell Labs comes from "Recent Board Meeting on GiveWell's Evolution" (see also the included attachment). (Their contact information is available here.)

comment by Giles · 2012-10-16T05:55:50.063Z

I'm guessing not so far - if no one else seems to want to do it, I'll ask them a week from now.

comment by Peter Wildeford (peter_hurford) · 2012-10-23T18:19:14.836Z

It's been a week. Any news?

comment by Giles · 2012-10-29T05:07:58.918Z

OK - I've written to them.

comment by Giles · 2012-10-25T03:12:46.853Z

News is that the last couple of days have been spent dealing with an unforeseen event. I'll do it just as soon as I get my life back :-)

comment by Peter Wildeford (peter_hurford) · 2012-10-26T00:14:52.065Z

I hope everything is okay. Let me know if I can do anything to help!

comment by Pablo (Pablo_Stafforini) · 2012-10-14T04:57:21.665Z

Peter Hurfurd

Correct spelling: 'Peter Hurford'

comment by Giles · 2012-10-14T14:04:32.482Z

Fixed - thanks.