Why I'm Skeptical About Unproven Causes (And You Should Be Too)
post by Peter Wildeford (peter_hurford) · 2013-07-29T09:09:27.464Z · LW · GW · Legacy · 98 comments
Since moving to Oxford, one of the centers of the effective altruism movement, I've been spending a lot of time discussing the classic effective altruism question: where it would be best to focus our time and money.
Some people here seem to think that the most important things to focus our time and money on are speculative projects: projects that promise very high impact but involve a lot of uncertainty. One very common example is "existential risk reduction", or attempts to make a good long-term future for humanity more likely, say by reducing the chance of events that would cause human extinction.
I do agree that the far future is the most important thing to consider, by far (see papers by Nick Bostrom and Nick Beckstead). And I do think we can influence the far future. I just don't think we can do it in a reliable way. All we have are guesses about what the far future will be like and guesses about how we can affect it. All of these ideas are unproven, speculative projects, and I don't think they deserve the main focus of our funding.
After waffling over cause selection for a while, I'm now going to resume donating to GiveWell's top charities, except when I have an opportunity to use a donation to learn more about impact. Why? My case is that speculative causes, and any cause with high uncertainty (reducing nonhuman animal suffering, reducing existential risk, etc.), require that we rely on commonsense and naïve cost-effectiveness calculations to evaluate them, and this approach (1) is demonstrably unreliable, with a bad track record, (2) plays right into common biases, and (3) doesn't make sense given how we ideally make decisions. While it's unclear what long-term impact a donation to a GiveWell top charity will have, the near-term benefit is quite clear and worth investing in.
Focusing on Speculative Causes Requires Unreliable Commonsense
How can we reduce the chance of human extinction? It just makes sense that if we fund cultural exchange programs between the US and China, each country will have more goodwill toward the other, and therefore the countries will be less likely to nuke each other. Since nuclear war would likely be very bad, it's of high value to fund cultural exchange programs, right?
Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At this point, AI will build a smarter AI which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
Can You Pick the Winning Social Program?
Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.
I'll wait for you to take the quiz first... doo doo doo... la la la...
Ok, welcome back. I don't know how well you did, but success on this quiz is very rare, and this poses problems for commonsense. Sure, I'll grant you that Scared Straight sounds pretty suspicious. But the Even Start Family Literacy Program? It just makes sense that providing education to boost literacy skills and promote parent-child literacy activities should boost literacy rates, right? Unfortunately, it was wrong. Wrong in a very counter-intuitive way. There wasn't an effect.
GiveWell and Commonsense's Track Record of Failure
Commonsense actually has a track record of failure, and GiveWell has been talking about this for ages. Every time GiveWell has looked further into an intervention hyped by commonsense notions of high impact, they've ended up disappointed.
The first was the Fred Hollows Foundation. A lot of people had been repeating the figure that the Fred Hollows Foundation could cure blindness for $50. But GiveWell found that number suspect.
The second was VillageReach. GiveWell originally put them as their top charity and estimated them as saving a life for under $1000. But further investigation kept leading them to revise their estimate until ultimately they weren't even sure if VillageReach had an impact at all.
Third, there is deworming. Originally, deworming was estimated to save a year of healthy life (one DALY) for every $3.41 spent. But when GiveWell dove into the spreadsheets behind that number, they found five errors. When the dust settled, the $3.41 figure turned out to be off by roughly a factor of 100; it was revised to $326.43.
Why should we expect this trend to be any different in areas where the calculations are even looser and the numbers even less settled, like efforts devoted to speculative causes? Our only recourse is to fall back on interventions that have actually been studied.
People Are Notoriously Bad At Predicting the (Far) Future
Cost-effectiveness estimates also frequently require making predictions about the future. Existential risk reduction, for example, requires making predictions about what will happen in the far future, and how your actions are likely to affect events hundreds of years down the road. Yet experts are notoriously bad at making these kinds of predictions.
James Shanteau found in "Competence in Experts: The Role of Task Characteristics" (see also Kahneman and Klein's "Conditions for Intuitive Expertise: A Failure to Disagree") that experts perform well when thinking about static stimuli, when thinking about things rather than behavior, and when feedback and objective analysis are available. Conversely, experts perform quite badly when thinking about dynamic stimuli, when thinking about behavior, and when feedback and objective analysis are unavailable.
Predictions about existential risk reduction and the far future are firmly in the second category. So how can we trust our predictions about our impact on the far future? Our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction (or invest money in getting better at making predictions).
Even Broad Effects Require Specific Attempts
One potential resolution to this problem is to argue for “broad effects” rather than “specific attempts”. Perhaps it’s difficult to know whether a particular intervention will go well or mistaken to focus entirely on Friendly AI, but surely if we improved incentives and norms in academic work to better advance human knowledge (meta-research), improved education, or advocated for effective altruism, the far future would be much better equipped to handle threats.
I agree that these broad effects would make the far future better, and I agree that it's possible to implement these broad effects and change the far future. The problem, however, is that it can't be done in an easy or well-understood way. Any attempt to implement a broad effect would require a specific action with an unknown expectation of success and unknown cost-effectiveness. It's definitely beneficial to advocate for effective altruism, but could this be done in a cost-effective way? A way that's more cost-effective at producing welfare than AMF? How would you know?
In order to accomplish these broad effects, you’d need specific organizations and interventions to channel your time and money into. And by picking these specific organizations and interventions, you’re losing the advantage of broad effects and tying yourself to particular things with poorly understood impact and no track record to evaluate.
Focusing on Speculative Causes Plays Into Our Biases
We've now known for quite a long time that people are not all that rational. Instead, human thinking fails in very predictable and systematic ways. Some of these ways make us less likely to take speculative causes seriously, such as ambiguity aversion, the absurdity heuristic, scope neglect, and overconfidence bias.
But there's also a different side of the coin, with biases that might push people toward overrating speculative causes like existential risk reduction:
Optimism bias. People generally think things will turn out better than they actually will. This could lead people to think that their projects will have a higher impact than they actually will, which would lead to higher estimates of cost-effectiveness than is reasonable.
Control bias. People like to think they have more control over things than they actually do. This plausibly also includes control over the far future. Therefore, people are probably biased into thinking they have more control over the far future than they actually do, leading to higher estimates of ability to influence the future than is reasonable.
"Wow factor" bias. People seem attracted to more impressive claims. Saving a life for $2500 through a malaria bed net seems much more boring compared to the chance of saving the entire world by averting a global catastrophe. Within the Effective Altruist / LessWrong community, existential risk reduction is cool and high status, whereas averting global poverty is not. This might lead to more endorsement of existential risk reduction than is reasonable.
Conjunction fallacy. People have trouble assessing probability properly when many steps are involved, each of which has a chance of not happening. Ten steps, each with an independent 90% success rate, have only about a 35% chance of all succeeding (see the quick calculation after this list). Focusing on the far future seems to require that a lot of largely independent events happen the way they're predicted to. This would mean people are bad at estimating their chances of helping the far future, creating higher cost-effectiveness estimates than is reasonable.
Selection bias. It's possible to point to historical examples of people successfully affecting the far future. However, we usually hear only about the interventions that ended up working, whereas the failed attempts to influence the far future are never heard of again. This creates a very skewed sample that biases our thinking about how likely we are to succeed at influencing the far future.
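To make the conjunction point concrete, here's a minimal sketch of the arithmetic (the ten steps and the 90% success rate are the illustrative numbers from above, not an estimate of any real plan):

```python
# Probability that a plan succeeds when every step must succeed independently.
def plan_success_probability(step_probabilities):
    p = 1.0
    for step_probability in step_probabilities:
        p *= step_probability
    return p

print(plan_success_probability([0.9] * 10))  # ~0.35: ten 90%-likely steps
print(plan_success_probability([0.8] * 10))  # ~0.11: even confident steps compound fast
```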
It's concerning that there are numerous biases weighted both in favor of and against speculative causes, and this means we must tread carefully when assessing their merits. However, I would strongly expect the biases in favor of speculative causes to be worse than those against, because speculative causes lack the available feedback and objective evidence needed to help insulate against bias, whereas a focus on global health does not.
Focusing on Speculative Causes Uses Bad Decision Theory
Furthermore, not only is the case for speculative causes undermined by a bad track record and possible cognitive biases, but the underlying decision theory seems suspect in a way that's difficult to place.
Would you play a lottery with no stated odds?
Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?
Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where your expected winnings exceed the price of the ticket. If the lottery is a coin flip, it makes sense to pay $2 for a 50/50 shot at $100, since you'd expect to win $50 on average and come out ahead $48 per play. With a sufficiently high reward, even a one in a million chance is worth it: pay $2 for a 1/1M chance of winning $1B, and you'd expect to come out ahead by $998 per play.
But $2 for the chance to win $100, without knowing what that chance is? Even if you had some sort of bounds, say that the odds were at least 1/150 and at most 1/10 (though even those bounds could be off by a little), would you accept that bet?
Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.
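To make the arithmetic in this thought experiment explicit, here is a minimal sketch of the expected-value comparison (all odds and prizes are the hypothetical ones above):

```python
# Expected profit per play of a lottery ticket: win probability times prize, minus cost.
def expected_profit(win_probability, prize, cost=2.0):
    return win_probability * prize - cost

print(expected_profit(0.5, 100))      # +48.0: the coin-flip lottery is worth playing
print(expected_profit(1e-6, 1e9))     # +998.0: so is the one-in-a-million $1B lottery
print(expected_profit(1 / 10, 100))   # +8.0 at the optimistic bound on the unknown odds
print(expected_profit(1 / 150, 100))  # about -1.33 at the pessimistic bound
```

With the odds unknown, the same $2 ticket could be anywhere from a clearly good bet to a clearly losing one.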
"Conservative Orders of Magnitude" Arguments
In response to these considerations, I've seen people endorsing speculative causes look at their calculations and remark that even if their estimate were off by 1000x, or three orders of magnitude, they still would be on solid ground for high impact, and there's no way they're actually off by three orders of magnitude. However, Nate Silver's The Signal and the Noise: Why So Many Predictions Fail — but Some Don't offers a cautionary tale:
Moody’s, for instance, went through a period of making ad hoc adjustments to its model in which it increased the default probability assigned to AAA-rated securities by 50 percent. That might seem like a very prudent attitude: surely a 50 percent buffer will suffice to account for any slack in one’s assumptions? It might have been fine had the potential for error in their forecasts been linear and arithmetic. But leverage, or investments financed by debt, can make the error in a forecast compound many times over, and introduces the potential of highly geometric and nonlinear mistakes.
Moody’s 50 percent adjustment was like applying sunscreen and claiming it protected you from a nuclear meltdown—wholly inadequate to the scale of the problem. It wasn’t just a possibility that their estimates of default risk could be 50 percent too low: they might just as easily have underestimated it by 500 percent or 5,000 percent. In practice, defaults were two hundred times more likely than the ratings agencies claimed, meaning that their model was off by a mere 20,000 percent.
Silver points out that when estimating how safe mortgage backed securities were, the difference between assuming defaults are perfectly uncorrelated and defaults are perfectly correlated is a difference of 160,000x in your risk estimate -- or five orders of magnitude.
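A rough reconstruction of the kind of calculation Silver describes, assuming an illustrative pool of five mortgages that each have a 5% chance of default (the specific numbers here are for illustration only):

```python
# How much the estimated risk of a mortgage pool depends on the correlation assumption.
n_mortgages = 5
p_default = 0.05  # assumed default probability for each individual mortgage

p_all_default_if_independent = p_default ** n_mortgages  # defaults fully uncorrelated
p_all_default_if_correlated = p_default                  # defaults move in lockstep

print(p_all_default_if_independent)  # 3.125e-07, roughly 1 in 3,200,000
print(p_all_default_if_correlated)   # 0.05, i.e. 1 in 20
print(p_all_default_if_correlated / p_all_default_if_independent)  # a 160,000x difference
```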
If these kinds of five-orders-of-magnitude errors are possible in a realm that has actual feedback and is moderately understood, how do we know the estimates for cost-effectiveness are safe for speculative causes that are poorly understood and offer no feedback? Again, our only recourse is to fall back on interventions that we can reliably predict, until we get better at prediction.
Value of Information, Exploring, and Exploiting
Of course, there still is one important aspect of this problem that has not been discussed -- value of information -- or the idea that sometimes it’s worth doing something just to learn more about how the world works. This is important in effective altruism too, where we focus specifically on “giving to learn”, or using our resources to figure out more about the impact of various causes.
I think this is actually really important and isn't vulnerable to any of my previous arguments, because we're not talking about impact but rather learning value. Perhaps one could look to an "explore-exploit model", or the idea that we achieve the best outcome when we spend a lot of time exploring first (learning more about how to achieve better outcomes) before exploiting (focusing resources on achieving the best outcome we can). Therefore, whenever we have an opportunity to "explore" further and learn more about which causes have high impact, we should take it.
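As a toy illustration of the explore-exploit idea (the two hypothetical causes and their per-dollar impact rates below are entirely made up), an epsilon-greedy strategy spends a small fraction of its budget learning which option pays off before concentrating on the apparent best one:

```python
import random

# Hypothetical probability that a dollar given to each cause produces a unit of impact.
TRUE_IMPACT = {"cause_a": 0.3, "cause_b": 0.7}

def donate_epsilon_greedy(budget, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    impact = {cause: 0.0 for cause in TRUE_IMPACT}  # observed impact per cause
    spent = {cause: 0 for cause in TRUE_IMPACT}     # dollars given to each cause
    total_impact = 0.0
    for _ in range(budget):
        if rng.random() < epsilon or min(spent.values()) == 0:
            cause = rng.choice(sorted(TRUE_IMPACT))                   # explore
        else:
            cause = max(spent, key=lambda c: impact[c] / spent[c])    # exploit best so far
        outcome = 1.0 if rng.random() < TRUE_IMPACT[cause] else 0.0
        impact[cause] += outcome
        spent[cause] += 1
        total_impact += outcome
    return total_impact

print(donate_epsilon_greedy(1000))  # typically lands near the ~700 an omniscient donor would get
```

The analogy is loose (real causes don't give clean, immediate feedback), but it captures why "giving to learn" can beat committing an entire budget to whichever option currently looks best.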
Learning in Practice
Unfortunately, in practice, I think these opportunities are very rare. Many organizations that I think are "promising" and worth funding further to see what their impact looks like do not have sufficiently good self-measurement in place to actually assess their impact, or sufficient transparency to share that information, which makes it difficult to actually learn from them. And on the other side of things, many very promising opportunities to learn more are already fully funded. One must be careful to ensure that it's actually one's marginal dollar that is buying marginal information.
The Typical Donor
Additionally, I don't think the typical donor is in a very good position to assess where there is high value of information, or has the time and knowledge to act on that information once it is acquired. I think there's a good argument for people in the "effective altruist" movement to make small investments in EA organizations and encourage transparency and good measurement in their operations, to see whether they're successfully doing what they claim (or potentially to create an EA startup themselves to see if it would work, though this carries large risks of further splitting the movement's resources).
But even that would take a very savvy and involved effective altruist to pull off. Assessing the value of information of more massive investments, like large-scale research or innovation efforts, would be significantly more difficult, beyond the talent and resources of nearly all effective altruists, and is probably best left to full-time foundations or subject-matter experts.
GiveWell’s Top Charities Also Have High Value of Information
As Luke Muehlhauser mentions in "Start Under the Streetlight, Then Push Into the Shadows", lots of lessons can be learned only by focusing on the easiest causes first, even if we have strong theoretical reasons to expect that they won’t end up being the highest impact causes once we have more complete knowledge.
We can use global health cost-effectiveness considerations as practice for slowly and carefully moving into more complex and less understood domains. There are even some very natural transitions, such as beginning to look at the "flow through effects" of reducing disease in the developing world, or at how more esoteric things, like climate change, affect the disease burden. Therefore, even additional funding for GiveWell's top charities has high value of information. And notably, GiveWell is beginning this "push" through GiveWell Labs.
Conclusion
The bottom line is that things which look too good to be true usually are. I should therefore expect that, upon thorough investigation, the actual impact of speculative causes that make large promises will turn out to be much lower.
And this has been true in other domains. People are notoriously bad at estimating the effects of causes in both the developed world and developing world, and those are the causes that are near to us, provide us with feedback, and are easy to predict. Yet, from the Even Start Family Literacy Program to deworming estimates, our commonsense has failed us.
Add to that the fact that we should expect ourselves to perform even worse at predicting the far future. Add to that optimism bias, control bias, "wow factor" bias, and the conjunction fallacy, which make it difficult for us to think realistically about speculative causes. And then add to that considerations in decision theory, and whether we would bet on a lottery with no stated odds.
When all is said and done, I'm very skeptical of speculative projects. Therefore, I think we should be focused on exploring and exploiting. We should do whatever we can to fund projects aimed at learning more, when those are available, but be careful to make sure they actually have learning value. And when exploring isn’t available, we should exploit what opportunities we have and fund proven interventions.
But don't confuse these two concepts and fund causes intended for learning as if you were buying proven impact. I'm skeptical that these causes are actually high impact, though I'm open to the idea that they might be, and I look forward to funding them in the future once they become better proven.
-
Followed up in: "What Would It Take To 'Prove' A Speculative Cause" and "Where I've Changed My Mind on My Approach to Speculative Causes".
This was also cross-posted to my blog and to effective-altruism.com.
I'd like to thank Nick Beckstead, Joey Savoie, Xio Kikauka, Carl Shulman, Ryan Carey, Tom Ash, Pablo Stafforini, Eliezer Yudkowsky, and Ben Hoskin for providing feedback on this essay, even if some of them might strongly disagree with its conclusion.
Comments sorted by top scores.
comment by lukeprog · 2013-07-29T18:18:56.070Z · LW(p) · GW(p)
Existential risk reduction is cool and high status, whereas averting global poverty is not.
What?? If this is true, please pass along the message to the Gates Foundation, the United Nations, the World Economic Forum, and... almost everyone else on the planet.
Replies from: gwern, OnTheOtherHandle, peter_hurford
↑ comment by gwern · 2013-07-29T19:40:08.260Z · LW(p) · GW(p)
Yes, I was going to say... How can one possibly argue that certain speculative causes are too popular and this is because they play into common cognitive biases when the examples are the fringest of the fringe and funded approximately not at all?
Let's try another. The Machine Intelligence Research Institute (MIRI) thinks that someday artificially intelligent agents will become better than humans at making AIs. At this point, AI will build a smarter AI which will build an even smarter AI, and -- FOOM! -- we have a superintelligence. It's important that this superintelligence be programmed to be benevolent, or things will likely be very bad. And we can stop this bad event by funding MIRI to write more papers about AI, right?
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
I wish I lived on a planet where these were 'very common appeals to commonsense'. I wonder how much a ticket there would cost?
↑ comment by OnTheOtherHandle · 2013-07-29T19:34:06.847Z · LW(p) · GW(p)
I think it might be more for a select group of people. In the LW community, I have gotten the impression that existential risk is higher status than global poverty reduction - that's definitely the opinion of the high status people in this community. And maybe for the specific kind of nonconformist nerd who reads Less Wrong and is likely to come across this post, transhumanism and existential risk reduction have a "coolness factor" that global poverty reduction doesn't have.
You're definitely right about the wider world, but many people might only care about the opinions of the 100 or so members of their in-group.
Replies from: None
↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T04:30:24.011Z · LW(p) · GW(p)
I feel like you're just sneering at a very small point I made rather than actually engaging with it.
What I meant to say was (1) x-risk reduction is cooler and higher status in the effective altruist / LessWrong community and (2) this biases people at least a little bit. I'll edit the essay to reflect that.
Would you agree with (1)? What about (2)?
Replies from: lukeprog
↑ comment by lukeprog · 2013-07-30T05:25:45.532Z · LW(p) · GW(p)
If you meant to say x-risk reduction is high-status in the EA/LW community, then yes, that makes a lot more sense than what you originally said.
But I'm not actually sure how true this is in the broader EA community. E.g. GiveWell and Peter Singer are two huge players in the EA community, each with larger communities than LW (by my estimate), and they haven't publicly advocated x-risk reduction. So my guess is that x-risk reduction is basically just high status in the LW/MIRI/FHI world, and maybe around CEA as well due to their closeness to FHI. To the extent that x-risk reduction is high-status in that world, we should expect a bias toward x-risk reduction, but that's a pretty small world. There's a much larger and more wealthy world outside that group which is strongly biased against caring about x-risk reduction, and for this and other reasons we should expect on net for Earth to pay way, way less attention to x-risk than is warranted.
Replies from: CarlShulman, JonahSinick
↑ comment by CarlShulman · 2013-08-03T05:02:06.146Z · LW(p) · GW(p)
GiveWell and Peter Singer are two huge players in the EA community, each with larger communities than LW (by my estimate), and they haven't publicly advocated x-risk reduction.
GiveWell is doing shallow analyses of catastrophic risks, and Peter Singer has written favorably on reducing x-risk, although not endorsing particular charities or interventions, and it's not a regular theme in his presentations.
Replies from: lukeprog
↑ comment by JonahS (JonahSinick) · 2013-07-30T07:16:05.978Z · LW(p) · GW(p)
There's a much larger and more wealthy world outside that group which is strongly biased against caring about x-risk reduction
Why do you think that there's a bias against x-risk reduction in the broader world? I think that there's a pretty strong case for x-risk reduction being underprioritized from a utilitarian perspective. But I don't think that I've seen compelling evidence that it's unappealing relative to a randomly chosen cause.
Replies from: lukeprog↑ comment by lukeprog · 2013-07-30T17:57:51.082Z · LW(p) · GW(p)
By "randomly chosen cause," do you mean something like "Randomly chosen among the charitable causes which have at least $500k devoted to them each year" or do you mean "Randomly chosen in the space of potential causes"?
Replies from: JonahSinick
↑ comment by JonahS (JonahSinick) · 2013-07-31T00:15:56.792Z · LW(p) · GW(p)
The former.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-31T07:27:54.689Z · LW(p) · GW(p)
Consider the total amount sent toward the generalized cause of a randomly chosen charity with a budget of at least $500K/year. I.e., not the Local Village Center for the Blind but humanity's total efforts to help the blind. Compare MIRI and FHI.
Replies from: Rain, JonahSinick
↑ comment by Rain · 2013-07-31T12:57:13.099Z · LW(p) · GW(p)
Agreed.
Search for 'million donation' on news.google.com, first two pages:
- Kentucky college gets record $250 million gift
- $20-million Walton donation will boost Teach for America in LA
- NIH applauds $30 million donation from NFL
- Emerson College gets $2 million donation
- Jim Pattison makes $5 million donation for Royal Jubilee Hospital
- Eric and Wendy Schmidt donate $15 million for Governors Island park
Every time I hear a dollar amount on the news, I cringe at realizing how pathetic spending on existential risks is by comparison.
↑ comment by JonahS (JonahSinick) · 2013-07-31T22:28:02.760Z · LW(p) · GW(p)
I agree that x-risk reduction is a lot less popular than, e.g., caring for the blind, but it doesn't follow that people are strongly biased against caring about x-risk reduction. Note that x-risk reduction is a relatively new cause (because the issues didn't become clear until relatively recently), whereas people have been caring for the blind for millennia. Under the circumstances, one would expect much more attention to go toward caring for the blind independently of whether people were biased against x-risk reduction specifically. I expect x-risk reduction to become more popular over time.
comment by Shmi (shminux) · 2013-07-29T16:46:44.910Z · LW(p) · GW(p)
This post had an odd effect on me. I agreed with almost everything in it, as it matches my own logic and intuitions. Then I realized that I strongly disliked the logic in your anti-meat post, because it appeared so severely biased toward a predefined conclusion "eating meat is ethically bad". So, given the common authorship, I must face the possibility that the quality of the two posts is not significantly different, and it's my personal biases which make me think that it is. As a result, I am now slightly more inclined to consider the anti-meat arguments seriously and slightly less inclined to agree with the arguments from this post, even though the foggy future and the lack of feedback arguments make a lot of sense.
EDIT: Hmm, whatever shall I do with 1 Eliezer point and 1 Luke point...
Replies from: Eliezer_Yudkowsky, lukeprog
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-29T17:13:14.774Z · LW(p) · GW(p)
+1 for correct, evenhanded use of the genetic heuristic (what we call the genetic fallacy when we agree with its usage).
Replies from: army1987, lukeprog
↑ comment by A1987dM (army1987) · 2013-07-30T13:11:55.418Z · LW(p) · GW(p)
what we call the genetic fallacy when we agree with its usage
I was about to ask whether you actually meant to say “disagree”, then I noticed that English has an ambiguity with predicative nominals over the object in relative clauses I hadn't noticed before. :-/
Replies from: BloodyShrimp
↑ comment by BloodyShrimp · 2013-07-30T19:35:29.203Z · LW(p) · GW(p)
I'm having trouble parsing the version with "agree" to anything simultaneously non-tautologous (i.e. when we use a name, we generally agree with our own usage) and reasonable; what reading did you notice?
Replies from: army1987
↑ comment by A1987dM (army1987) · 2013-07-31T09:35:14.512Z · LW(p) · GW(p)
My first reading: ‘We call the genetic heuristic “the genetic fallacy” when we agree with its usage.’
The intended reading: ‘We call the genetic fallacy “the genetic heuristic” when we agree with its usage.’
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-08-17T04:34:35.097Z · LW(p) · GW(p)
To switch your brain back and forth, read it with emphasis on "fallacy" for the wrong reading, emphasis on "call" for the intended reading. (at least for me)
comment by Nick_Beckstead · 2013-07-29T10:11:27.040Z · LW(p) · GW(p)
A few comments and requests for clarification:
Are you only talking about donations? Do you think it would also be a mistake to work on speculative causes? (That seems much different in that there are many more opportunities for learning from working on a cause than from donating to it.) I think you are on much stronger ground with claims about donations, though I think it can make sense for other people with the right kind of information to fund other opportunities. E.g., think about early GiveWell. I wouldn't want to buy into something which said giving to early GiveWell was a bad idea, even for people who had a lot of information about the project. I think some people may be in a similar position for funding some early EA orgs, and they shouldn't be discouraged from funding them.
What counts as a "speculative cause"? Meta-research? Political advocacy? Education for talented youth? Climate change? Prioritization work? Funding early GiveWell? Funding 80K? Anyone who says the thing they are doing is somehow improving very long-run outcomes? Anything that hasn't been recommended by GiveWell? Anything that hasn't been proven to work with RCT-quality evidence? Asking for a definition is probably unfair, but maybe a few illustrative examples of non-speculative causes and speculative causes would be helpful.
If what you care about is the far future, why does AMF get to count as a "non-speculative" giving opportunity? We have very little idea how much AMF improves very long-run outcomes in comparison with other things, little effort has gone into studying this, and it seems many of your arguments that we are bad at understanding long-run effects apply.
On this last point, I think it may make sense to say something like, "We are very bad at saying how much various changes (saving a life in the US, saving a life in Africa, boosting GDP in the US by X, boosting GDP in an African country by X, etc.) improve long-run outcomes. But we know that AMF stands out for creating these changes in a proven, cost-effective way. And I think the intermediate changes AMF is making are good for long-run outcomes, and not clearly less important than other intermediate changes we could be making. So that makes it the best donation target even by far future standards." I just think this is a highly tentative position, and one that could easily change with further thought and research, and should be acknowledged as such.
Replies from: lukeprog, peter_hurford
↑ comment by lukeprog · 2013-07-29T18:14:25.750Z · LW(p) · GW(p)
Yeah, #3 is my own biggest question for the OP. If you care about the far future, then it seems like the case for MIRI+FHI's positive effect on the far future has been made more robustly than the case for AMF's effect on the far future has been made, even though it's still true that in general we are pretty ignorant about how to affect the far future in positive ways.
↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T04:53:05.146Z · LW(p) · GW(p)
Are you only talking about donations? Do you think it would also be a mistake to work on speculative causes? (That seems much different in that there are many more opportunities for learning from working on a cause than from donating to it.)
I agree with that point. I'm not talking only about donations per se, though that's a much more important consideration for the "typical person". I think in so far as one should be purchasing either impact through proven causes or value of information through speculative causes, value of information is better purchased by work rather than money.
~
I think it can make sense for other people with the right kind of information to fund other opportunities. E.g., think about early GiveWell.
I agree. This is why I want to see certain "promising" causes funded, provided we will get to see if they succeed or fail.
~
What counts as a "speculative cause"? Meta-research? Political advocacy? Education for talented youth? Climate change? Prioritization work? Funding early GiveWell? Funding 80K? Anyone who says the thing they are doing is somehow improving very long-run outcomes? Anything that hasn't been recommended by GiveWell? Anything that hasn't been proven to work with RCT-quality evidence?
I'd say a "speculative cause" is any cause whose impact has not been demonstrated with empirical evidence of sufficient quality. It doesn't have to be an RCT. For example, vegetarianism advocacy could demonstrate impact, in my opinion, with just a few studies with actual control groups, run by independent people, that still show the conversion rate of ~2%.
I think Giving What We Can or 80K could sufficiently demonstrate their impact with just slightly better member surveys and slightly more robust analyses.
For another example, I think GiveWell itself has proven to be a good cause (albeit not one in need of more funding) through their documentation of money moved and track record of good research, and there aren't any RCTs done on GiveWell.
~
If what you care about is the far future, why does AMF get to count as a "non-speculative" giving opportunity? We have very little idea how much AMF improves very long-run outcomes in comparison with other things, little effort has gone into studying this, and it seems many of your arguments that we are bad at understanding long-run effects apply.
I think this is a key thing to clear up. AMF is non-speculative in terms of producing near-term impact (or impact generically), but it is still speculative in terms of long-term impact. I think if you want to purchase direct impact (instead of information), you should be doing so through a proven cause like AMF.
Replies from: None, SoerenMind↑ comment by [deleted] · 2013-07-30T12:13:55.824Z · LW(p) · GW(p)
Your last point doesn't make much sense to me. I agree that we should be concerned about purchasing as much impact as we can, but the amount of impact you're purchasing from AMF is minuscule compared to the far future. It seems like your concern for something being 'proven' is skewing your choices.
It's like walking into a shop and buying the first TV (or whatever) that you see, despite it likely being expensive and not nearly as good as other ones, because you can at least see it, rather than doing a bit of looking on Amazon.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T21:00:57.973Z · LW(p) · GW(p)
I'd reject your analogy.
If you look for TVs on Amazon, they give you specific prices and you can reliably buy them. Now imagine you walked into a TV shop and saw some TVs that were a good deal (say $100), but others whose prices were set randomly.
I don't think we can make a case that we're actually purchasing some known, larger-than-AMF amount of impact from far future work, because so much far future work is likely to not actually do well at affecting the far future. I just don't think we have that much control over how the future unfolds.
Replies from: None
↑ comment by [deleted] · 2013-08-07T15:11:40.180Z · LW(p) · GW(p)
Some interventions have no impact, some have low impact, and some have high impact. 'No impact' doesn't help anyone/do any good, 'low impact' helps some people/does some good, and 'high impact' helps a lot of people/does a lot of good. Because of the size of the future, an intervention has to help a lot of people/do a lot of good to be 'high-impact' - helping millions or billions rather than thousands or tens of thousands.
We're fairly sure that AMF is 'low impact', since we have evidence that it reliably helps a decent number of present people. Which is great - it's not 'no impact'! But it's unlikely that it will be 'high impact'.
I agree that we don't have a clear sense yet of which interventions are actually high-impact. That's why I don't donate to any direct x-risk reduction effort. However the appropriate response to this problem seems to be to invest in more research, to work out which interventions will plausibly be high-impact. Alternatively, one could invest in improving one's position to be able to purchase more of the high-impact intervention when we have a clearer view of what that is - putting oneself on a good career path or building an effective movement.
I don't understand why you think the response should be to purchase low-impact interventions.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2013-08-07T20:44:59.651Z · LW(p) · GW(p)
I agree that we don't have a clear sense yet of which interventions are actually high-impact. [...] However the appropriate response to this problem seems to be to invest in more research, to work out which interventions will plausibly be high-impact. Alternatively, one could invest in improving one's position to be able to purchase more of the high-impact intervention when we have a clearer view of what that is - putting oneself one a good career path or building an effective movement.
I definitely agree with this, and that's what I tried to articulate in the section on value of information. Sorry if I wasn't clear.
↑ comment by SoerenMind · 2013-08-04T09:28:27.329Z · LW(p) · GW(p)
About donating vs. working: I don't agree that the two are fundamentally different. Basically what I'm saying is time is money and money is time. I think there is a conversion rate between the two and that rate is certainly not static. But you have to consider the counterfactual, what you could be doing if you didn't spend your time or money on a cause. For example if you are a very good cost-effectiveness researcher your time may be worth 2 or 3 times as much as the money you could make earning to give. But that money could pay for at least one additional researcher. Similarly, if you spend your time writing web posts that time could be spent working a student job or better, investing in your career which will yield money in the long run. Somehow you can always convert the two.
Although we can't quantify the conversion rate I think it exists. And generally I would expect it to be around 1, maybe between 0.33 and 3 in some cases. That is sometimes significant, but with the different causes we're talking about we don't expect the impact to be even in the same order of magnitude. So for example it wouldn't make sense to say working on MIRI is a good idea, but donating to them is not a good idea.
So if I say I would not donate to an organisation it makes sense to ask if I would not work for that organisation either. Money and time are somehow convertible into each other. So I wouldn't agree that value of information is always better purchased by time than by money. And if it is, the difference may not be all that great.
And a question: Maybe this has been answered somewhere, but I think it would be useful if you straightforwardly said which causes you consider speculative. Makes this quite vague discussion a little less vague hopefully. So say 80k, EAA, GWWC, MIRI, FHI, CFAR, Effective Fundraising, Animal Ethics, all x-risk related, all research related?? This would really help me. I think the issue you raised with your post is very important to discuss and get right ;)
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2013-08-07T21:09:00.778Z · LW(p) · GW(p)
So if I say I would not donate to an organisation it makes sense to ask if I would not work for that organisation either.
I'm not sure this is the case. If you're working for the organization, you're in a significantly different place with regard to the amount of information you can get on the organization's impact and what you can do to increase that information. I think it's possible that if you're working in a speculative organization, you can get further information through working but not through donating.
~
Maybe this has been answered somewhere, but I think it would be useful if you straightforwardly said which causes you consider speculative. Makes this quite vague discussion a little less vague hopefully
I answer this now in "What Would it Take to 'Prove' A Speculative Cause?".
Replies from: SoerenMind
↑ comment by SoerenMind · 2013-08-09T07:36:41.102Z · LW(p) · GW(p)
I see, that makes sense. I understood value of information as creating valuable information for the whole community. You seem to be talking about valuable information for oneself. And maybe as an added bonus increasing the information about your organization more than a replacement worker would otherwise.
But yes, it makes sense to me that if you work for a speculative cause you are in a better position to assess if you should donate to them.
The point I was trying to make is less about value of information for yourself and more about information for others. Your donation could fund a new employee, for example, who 1. gathers a lot of information like you would if you were in their position and 2. brings valuable information to the community in general. The question is of course whether that person would be as productive as you.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2013-08-09T08:41:31.810Z · LW(p) · GW(p)
I understood value of information as creating valuable information for the whole community. You seem to be talking about valuable information for oneself.
Well, any information I gather individually could be shared.
~
Your donation could fund a new employee, for example, who 1. gathers a lot of information like you would if you were in their position and 2. brings valuable information to the community in general.
Right. That would be one way to do it, if you could trust the person you hire to be interested in gathering information. Right now, my perception is that people who are interested in gathering information and reporting it are kind of rare.
Replies from: SoerenMind
↑ comment by SoerenMind · 2013-08-09T18:47:46.832Z · LW(p) · GW(p)
That's an interesting point. I strongly agree that less proven charities should do more internal research and especially reporting about their effectiveness. I think this could fuel an important discussion. Even if the results aren't that amazing I think certain people would consider donating to them more simply because they are more aware of the opportunity and/or less uncertain about it.
I'm not quite sure yet what exactly you refer to by information though. It sounds like this refers to reporting about the effectiveness of the charities. Or are you talking about information like cost-effectiveness research and research papers/blog posts as well?
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2013-08-10T15:05:15.871Z · LW(p) · GW(p)
I'm thinking here information about impact, or evidence that would lower our uncertainty about the effect of a certain intervention.
comment by Randaly · 2013-07-29T18:15:07.227Z · LW(p) · GW(p)
Would you play a lottery with no stated odds?
Imagine another thought experiment -- you're asked to play a lottery. You have to pay $2 to play, but you have a chance at winning $100. Do you play?
Of course, you don't know, because you're not given odds. Rationally, it makes sense to play any lottery where you expect to come out ahead more often than not. If the lottery is a coin flip, it makes sense to pay $2 to have a 50/50 shot to win $100, since you'd expect to win $50 on average, and come ahead $48 each time. With a sufficiently high reward, even a one in a million chance is worth it. Pay $2 for a 1/1M chance of winning $1B, and you'd expect to come out ahead by $998 each time.
But $2 for the chance to win $100, without knowing what the chance is? Even if you had some sort of bounds, like you knew the odds had to be at least 1/150 and at most 1/10, though you could be off by a little bit. Would you accept that bet?
Such a bet seems intuitively uninviting to me, yet this is the bet that speculative causes offer me.
The reason not to play a lottery is because it is a zero-sum game in which the rules are set by the other agent; since you know that the other player's goal is to make a profit, you should expect the rules to be set up to ensure that you lose money. Obviously, reality is not playing a zero-sum game with humanity; if one chooses a different expected payout structure - say, you have no idea what the specific odds are, but you know that your crazy uncle Bill Gates is giving away potentially all his money to family members in a lottery with $2 tickets - then obviously it makes sense to play.
Replies from: Muhd
↑ comment by Muhd · 2013-07-29T22:35:07.787Z · LW(p) · GW(p)
These were my thoughts when I read this.
A better analogy might be buying stock in a technology startup which is making a product completely unlike anything on the market now. It is certainly more risky than the sure thing, with lots of potential for losing your investment, but also has a much much higher potential payoff. This is generally the case in any sort of investing, whether it be investing in a charity or in a business -- the higher the risk, the higher the potential gain. The sure stuff generally has plenty of funding already -- the low hanging fruit has already been taken.
That being said, one should be on the lookout for good investing opportunities of both kinds -- charging more (in terms of expected payoff) for the riskier ones but not shunning either completely.
Replies from: Vulture
↑ comment by Vulture · 2014-01-12T04:49:08.023Z · LW(p) · GW(p)
A better analogy might be buying stock in a technology startup which is making a product completely unlike anything on the market now.
I think this is also a dangerous example because most of the salient and readily-available examples of doing this are the highly-publicized successes (this might be less true for people who are actually actively involved in technology investment - I say this from the perspective of an outsider).
comment by So8res · 2013-07-29T16:50:12.362Z · LW(p) · GW(p)
I agree with most of the points here. Short-term, cost-effective charities are often more worthy of donation than speculative, long-term, high-uncertainty ones. I would prefer donating to GiveWell's top charities to funding US/China exchange programs.
Yet I still donate to MIRI. Why? Because it's a completely different beast.
I don't view MIRI as addressing far-future concerns. I view MIRI as addressing one very specific problem: we are on the path to AI, and we seem to be a lot closer to developing AI than we are to developing a perfect reflective preservable logical encoding of all human values.
There's a timer. That timer is getting uncomfortably low. And when it gets to zero, there's not a lot of death and a bad economy -- there's an extinction event.
If we had good reason to believe that the US and China will cross a threshold this century causing them to either blow up the world or collaborate and travel to the stars, based solely on the sentiment of each population towards the other, then you're damn right I'd fund exchange programs.
We don't have any evidence along those lines. There are a plethora of potential political catastrophes and uncountable factors that could cause them. A China/US nuclear war would be very bad, but it's one small possibility in a sea of many. It's far future, and it's very hard to predict what will help and what won't.
MIRI isn't trying to vaguely prod the unknown future into a state that's maybe better. It's racing against a timer that's known to be ticking. This argument hinges on a belief that we'll get strong AI in the next 30-150 years, that human value is complex and fragile, and that decision theory is nowhere near ready -- all arguments that have been made to my satisfaction.
Strong AI this century is a moderately likely probability, and if we don't have the right decision theory by the time it gets here then humanity loses. End of story, game over, no savepoints.
MIRI isn't hoping that it will nudge us towards a better future. It isn't trying to tweak factors that just maybe might push us towards a better future. Rather, MIRI is addressing a single-point-of-failure with an expiration date. There's a good likelihood that the fate of humanity will hinge upon the existence of this one bit of mathematics.
Replies from: lukeprog
↑ comment by lukeprog · 2013-07-29T18:08:28.781Z · LW(p) · GW(p)
Yours is the kind of response I have to posts like the OP and also Holden's "Empowerment and Catastrophic Risk," though I wouldn't place so much specific emphasis on e.g. decision theory.
It is important to analyze general features of the world and build up many outside views, but one must also exercise the Be Specific skill. If several outside views tell me to eat Osha Thai, but then I snack on some Synsepalum dulcificum and I know it makes sour food taste sweet, then I should update heavily against the results of my original model combination, even if the Osha Thai recommendation was a robust result of 20+ models under model combination. Similarly, even if you have a very strong outside view that your lottery ticket is not the winner, a simple observation that the number on your ticket matches the announced winner on live TV should allow you to update all the way to belief that you won.
To consider a case somewhat more analogous to risks and AI, there were lots of outside views in 1938 suggesting that one shouldn't invest billions in an unprecedented technology that would increase our bombing power by several orders of magnitude, based on then-theoretical physics. Definitely an "unproven cause." And yet there were strong reasons to suggest it would be possible, and could be a determining factor in WWII, even though lots of the initial research would end up being on the wrong path and so on.
Also see Eliezer's comment here, and its supporting post here.
comment by Stuart_Armstrong · 2013-07-29T11:43:32.055Z · LW(p) · GW(p)
James Shanteau found in "Competence in Experts: The Role of Task Characteristics"...
Good to see that paper being given an airing. But one important thing that must be done is to decompose the problems we're working on: some results may be more solid than others. I've shown that using expert opinion to establish AI timelines is nearly worthless. However you can still get some results about the properties of AIs (see for instance Omohundro's AI-drives paper), and these are far more solid (for one, they depend much more on arguments than on expertise). So we're in the situation of having no clue when and how AIs could emerge, but being fairly confident that there's a high risk if they do.
Compare for instance the economics of the iPhone. We failed to predict the iPhone ahead of time (continually predicting that these kinds of thing were just around the corner or in the far future), but the iPhone didn't escape the laws of economics and copying and competition. We can often say something about things, even if we must fail to say everything.
comment by John_Maxwell (John_Maxwell_IV) · 2013-07-29T18:16:06.513Z · LW(p) · GW(p)
Benjamin Todd makes this point well in "Social Interventions Gone Wrong", where he provides a quiz with eight social programs and asks readers to guess whether they succeeded or failed.
Were the social interventions sampled randomly, or were they chosen for their counterintuitive outcomes?
Anyway, maybe if one wants to get good at reducing existential risk, the first thing to do is to start using PredictionBook and continue until one is good enough to have a reliable track record, then proceed from there.
Replies from: gwern, Manfred
↑ comment by gwern · 2013-07-29T19:45:06.484Z · LW(p) · GW(p)
Were the social interventions sampled randomly, or were they chosen for their counterintuitive outcomes?
They were chosen, but I'm not sure how one could even sample randomly - where would one look, how would one compile a list, based on what criteria? Most programs haven't been studied at all in any kind of rigorous fashion.
The best I can say is that Rossi, well-respected in the area of social interventions, takes an extremely pessimistic overall view on the effectiveness of interventions.
↑ comment by Manfred · 2013-07-29T21:33:58.987Z · LW(p) · GW(p)
They seem to have been chosen based on size and media coverage. Except for the literacy program I think - I hadn't heard of it, so maybe it was included for counterintuitiveness.
Anyhow, you could get a good track record on the quiz just by using the "paternalistic teaching is hard and sometimes backfires, social engineering works" heuristic.
Replies from: byrnema
↑ comment by byrnema · 2013-08-09T16:00:26.172Z · LW(p) · GW(p)
I agree; it seems "paternalistic interventions don't work" is a good heuristic. What I took away was that the programs that worked - a nurse coming to the house over several years or a big brother program - provided a setting where something like a friendship could happen. A long term, more intimate relationship might have been what worked. The program provides the opportunity for the effective relationship, but can't force it.
Consistently, several times in the past I've noticed that what can matter most in health care settings - compassion, etc. - can't be bought or legislated. But if policies don't get in the way, things work out because nurses are compassionate, etc. Anyway I've had some nurses that were, and was grateful, knowing that it was a gift rather than something I could rely on always.
Replies from: JenniferRM
↑ comment by JenniferRM · 2013-08-20T23:10:18.419Z · LW(p) · GW(p)
I took the program outcome prediction test cleanly, and got a good score and am talking about it, which actually sort of makes my meta-analysis suspect to some degree because of signaling issues... however... In lower paragraphs I talk about the test's details so anyone clicking into the comment itself who wants to take the test "fairly" should go do that before reading further...
...
The scoring recommended at the end seems wrong, and biased against people having confidence in their own abilities, because you were supposed to assign programs to one of THREE results (helped, neutral, hurt) and doing that at random should give you ~2.6 right by accident. Thus, if you got 4 of them right, or even 3, it is more likely than not that you were probably reasoning things out at better than chance rates.
A long term, more intimate relationship might have been what worked. The program provides the opportunity for the effective relationship, but can't force it.
"Monkey see, monkey do" is a really important factor, that interacts with the rarely considered third possible "made things worse" outcome, because this heuristic doesn't just predict the cases of efficacy but also predicts that putting people into proximity (especially benevolent proximity) so they are emotionally close to "bad people" (even if those bad people are claiming to be counter-examples out of benevolent intent) will cause those put in proximity to the less than ideal people to become worse. This was already my rough working hypothesis and I called the "actually made things worse" outcomes of the Scared Straight and behavioral half of the 21st Century Learning Centers on this basis, without having already read about these specific programs in advance.
"A bad apple spoils the barrel" and "If you lay down with dogs you get up with fleas" are relevant folk sayings that capture the harm-causing aspect of the insight in a less than scientifically precise way. Probably a lot of people have heard the sayings and yet still got the questions wrong. Luke probably knows about the downside angles but has called attention to the positive side in the past :-)
One thing the general model suggests is that there might have been personal downsides for the "Big Sibling" volunteers and the nurses of the Nurse-Family Partnership Program, although I would naively guess that the psychology of being higher status (via adulthood and professionalism, respectively) might have protected them from acquiring too many negative characteristics from the people they mentored.
comment by TheOtherDave · 2013-07-29T18:44:35.679Z · LW(p) · GW(p)
"Conservative Orders of Magnitude" Arguments
I wanted to highlight this one, because as someone who isn't an expert in any of these fields it's an easy one for me to fall for.
My rule of thumb is whenever I find myself applying a fudge-factor of N to a system "to be safe," I should ask myself "why not 10N instead? why not 100N?"
If my only answer is incredulity, I should immediately halt and dump core; my processing has become corrupted.
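To put rough numbers on why the choice of fudge factor matters (a sketch with invented figures, not anyone's actual estimate):

```python
# Why "why not 10N? why not 100N?" matters: the verdict of a naive expected-value
# calculation is entirely determined by an arbitrary "conservative" discount.
# All numbers below are invented for illustration.
naive_value = 1e15   # hypothetical payoff if the speculative project pans out
benchmark = 1e6      # hypothetical payoff of a proven short-term alternative

for exp in (3, 6, 10, 12):
    fudge = 10 ** exp
    discounted = naive_value / fudge
    verdict = "speculative cause wins" if discounted > benchmark else "proven benchmark wins"
    print(f"divide by 10^{exp} 'to be safe': {discounted:.0e} -> {verdict}")
```

If incredulity is the only thing pinning down which row is the "right" level of conservatism, the calculation isn't doing much work.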
comment by Stuart_Armstrong · 2013-07-29T13:43:12.151Z · LW(p) · GW(p)
"Wow factor" bias.
That's worth keeping in mind - that's certainly what pushed me into working for the FHI.
comment by Wei Dai (Wei_Dai) · 2013-07-30T08:38:23.407Z · LW(p) · GW(p)
How do we choose between different proposals for "exploration"? For example the workshops that MIRI is currently hosting on a regular basis seem to be designed largely to learn how to most efficiently produce FAI-related research as well as how productive such research efforts can currently be. On the other hand, I suggest that at this stage we should devote more resources into what I call "Singularity Strategies" research, to better understand for example whether pushing for "FAI first" has positive or negative expected impact. I think both of these activities can plausibly be called "exploration". Peter, do you agree and if so, have you thought about how to choose between them?
(BTW, I'm planning to attend one of the MIRI workshops in September, which is mainly due to my intellectual curiosity about the topic and not meant to indicate that I endorse MIRI's approach as far as getting the best long term outcome.)
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T20:56:08.088Z · LW(p) · GW(p)
Peter, do you agree and if so, have you thought about how to choose between them?
I know absolutely nothing about these workshops or the details about how one ideally would get better at learning about FAI, so I wouldn't know the answer.
comment by Nick_Beckstead · 2013-07-29T10:32:54.319Z · LW(p) · GW(p)
Selection bias. When trying to find trends in history that are favorable for affecting the far future, some examples can be provided. However, this is because we usually hear about the interventions that end up working, whereas all the failed attempts to influence the far future are never heard of again. This creates a very skewed sample that can bias our thinking about our chances of successfully influencing the far future.
When I was talking about trends in history, I was saying that certain factors could be identified which would systematically lead to better outcomes rather than worse outcomes if those factors were in place to a greater extent when humanity faced future challenges and opportunities. (Note that I did not say I knew of specific, ready-to-fund interventions for making these factors be in place to a greater extent when humanity faces future challenges and opportunities. We may be talking past each other to some extent since you are talking about where to give now and I am mostly talking about where to look for opportunities later.)
I don't think what you've said here effectively addresses this claim, and I don't think there is selection bias pushing this claim. Consider a list of challenges lukeprog gave elsewhere:
nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
Now consider the things I said in this talk would help people meet future challenges better: improved coordination between key actors, improved information access, improved motives, and improved individual capabilities (higher intelligence and technology). (I'm sorry for the vague terms but we don't have great frameworks for this right now.) Now ask: if people had more of these things when they faced aspects of these challenges which we've dealt with so far, would that be expected to lead to better or worse outcomes? I think it is clear that, in each case, it would be more likely to lead to better outcomes than to worse outcomes. Maybe you can think of cases where these factors make us deal with challenges worse, but that is not the typical case.
Replies from: CronoDAS, peter_hurford↑ comment by CronoDAS · 2013-07-29T21:19:20.095Z · LW(p) · GW(p)
improved individual capabilities
In general, the more power each individual has, the more damage a single bad actor can do. It takes a lot of people to make an open communications network valuable, but only a few spammers to wreck it.
Replies from: Randaly↑ comment by Randaly · 2013-07-30T06:09:12.729Z · LW(p) · GW(p)
We are almost certainly not presently at the point where a single person can pose a global catastrophic risk. Almost nothing on the above list would wipe out all of mankind on its own; it would have to rely on the breakdown of society to become an x-risk. And if individuals are more capable on their own (via, e.g., solar panels, 3-D printing, etc.), then the level of destruction needed for something to qualify as an x-risk becomes much higher.
Replies from: CronoDAS↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T04:44:29.620Z · LW(p) · GW(p)
We may be talking past each other to some extent since you are talking about where to give now and I am mostly talking about where to look for opportunities later.
That sounds pretty plausible. But the "what are you actually going to do to make these broad things happen?" question is an important one. These things -- systematically making the population smarter, more coordinated, more benevolent, etc. -- are hella hard to pull off.
~
Now consider the things I said in this talk would help people meet future challenges better: improved coordination between key actors, improved information access, improved motives, and improved individual capabilities (higher intelligence and technology).
I agree that these things will generally make the future go better, but they might be too broad.
Take the example of "higher intelligence". This raises the question -- intelligence in what? Better English literature skills certainly won't help us deal with x-risks. It seems quite plausible that a particular x-risk we're dealing with will require a pretty particular set of skills, to which most intelligence amplification will not have been helpful. ...Perhaps you could argue that we need a diversified portfolio of education because we can't know what x-risk we'll be hit with, though.
comment by Xodarap · 2013-07-31T00:43:00.915Z · LW(p) · GW(p)
Question: Suppose MIRI were like one of the 8 charities listed above (i.e. intuitively plausible, but empirically useless). How would we know? How would this MIRI' be different from MIRI today?
Replies from: RobbBB, Eliezer_Yudkowsky↑ comment by Rob Bensinger (RobbBB) · 2013-07-31T10:30:27.799Z · LW(p) · GW(p)
I think this question is too vague. MIRI could turn out to be useless for any number of reasons, leading to different empirical disconfirmations. (A lot of these will look like the end of human life.) E.g.:
MIRI is useless because FAI research is very useful, but MIRI's basic methodology or research orientation is completely and irredeemably the wrong approach to FAI. Expected evidence: MIRI's research starts seeing diminishing returns, even as they attempt a wide variety of strategies; non-MIRI researchers make surprising amounts of progress into the issues MIRI is interested in; reputable third parties that assess MIRI's results consistently disagree with its foundational assumptions or methodology; the researchers MIRI attracts come to be increasingly seen by the academic establishment as irrelevant to FAI research or even as cranks, for substantive, mathy reasons.
MIRI is useless because FAI is impossible -- we simply lack the resources to engineer a benign singularity, no matter how hard we try. Expected evidence: Demonstrations that a self-modifying AGI can't have stable, predictable values; demonstrations that coding anything like indirect normativity is unfeasible; increasingly slow progress, e.g., due to fixed hardware limitations.
MIRI is useless because some defeater will inevitably kill us all (or otherwise halt technological progress) before we have time to produce self-enhancing AGI. (Alternatively: UFAI is completely impossible to prevent, by any means.) Expected evidence: Discovery of some weapon (e.g., a supervirus or bomb) that can reliably kill everyone and has widely known and easy-to-follow engineering specifications.
MIRI is useless because some defeater will kill us all before we build an AGI, with a probability high enough to suck all the expected utility out of FAI research. I.e., our survival until the intelligence explosion singularity is a possibility, but not a large enough possibility to make the value of surviving past that sieve worth paying attention to. Expected evidence: Discovery that near-term existential threats are vastly more threatening than currently thought.
MIRI is useless because FAI is inevitable. Expected evidence: Black-swan disproof of fragility-of-value and complexity-of-value, revolutionizing human psychology.
Note that these scenarios not only would be indicated by different evidence, but also call for very different responses.
- If 1 is true: Instead of funding MIRI, we should fund some radically different FAI research program.
- If 2 is true: Instead of funding MIRI, we should pour all our resources into permanently stopping AI research, by any available means.
- If 3 is true: Instead of funding MIRI, we should just enjoy what little time we have left before the apocalypse.
- If 4 is true: Instead of funding MIRI, we should combat other existential risks, or act as in 3.
- If 5 is true: Instead of funding MIRI, we should speed along AI research and fund cryonic and life-extension technologies in the hopes of surviving into the intelligence explosion.
↑ comment by Xodarap · 2013-07-31T12:20:27.375Z · LW(p) · GW(p)
This is great RobBB, thanks!
RE: #1: do you have a suggestion for how someone who is not an AI researcher could tell whether MIRI's research is seeing diminishing returns? I think your suggestion is to ask experts - apart from Holden, have any experts reviewed MIRI?
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2013-08-01T18:45:35.138Z · LW(p) · GW(p)
Explicit reviews of MIRI as an organization aren't the only kind of review of MIRI. It also counts as a review of MIRI, at least weakly, if anyone competent enough to evaluate any of MIRI's core claims comes out in favor of (or opposition to) any of those claims, or chooses to work with MIRI. David Chalmers' The Singularity: A Philosophical Analysis and the follow-up collectively provide very strong evidence that analytic philosophers have no compelling objection to the intelligence explosion prediction, for example, and that a number of them share it. Reviews of MIRI's five theses and specific published works are likely to give us better long-term insight into whether MIRI's on the right track (relative to potential FAI researcher competitors) than a review focused on, e.g., MIRI's organizational structure or use of funding, both of which are more malleable than its basic epistemic methodology and outlook.
It's also important to keep in mind that the best way to figure out whether MIRI's useless is probably to fund MIRI. Give them $150M, earmarked for foundational research that will see clear results within a decade, and wait 15-20 years. If what they're doing is useless, it will be far more obvious when we've seen them do a lot more of it; and any blind alleys they go down will help clarify what they (or a rival research team) should be working on instead. At this point MIRI is very much a neonate, if not a partly-developed fetus. Speeding its development would both make us able to fairly evaluate it much more quickly, and encourage other researchers to get into the business.
↑ comment by Bayeslisk · 2013-08-07T18:58:17.535Z · LW(p) · GW(p)
Isn't 3 already pretty much the case thanks to things like cobalt bombs?
Replies from: Randaly↑ comment by Randaly · 2013-08-07T19:32:55.030Z · LW(p) · GW(p)
No. I'm pretty confident the chance of MIRI failing due to cobalt bombs is <<1%, given that none exist, there are no known plans to build any, and one would still need to be used to halt progress. Also, the use of enough cobalt bombs to destroy MIRI's relevance (remember, MIRI has supporters in many different nations, who would presumably a) remain concerned and b) attempt to carry on research if there aren't pressing concerns) presupposes a global nuclear exchange, which would make MIRI irrelevant either way.
(Irrelevant in the sense that all of MIRI's research and writings would be lost, and there wouldn't be enough tech left for people to remember MIRI's research program by the time they would be able to restart research again. I am not claiming that a global nuclear exchange would be an existential risk.)
Replies from: Bayeslisk↑ comment by Bayeslisk · 2013-08-07T20:00:57.243Z · LW(p) · GW(p)
That isn't what you said though. You were talking about the discovery, the very existence of a weapon able to reliably kill everyone. You'd need a lot fewer cobalt bombs to salt the earth with lethal amounts of fallout than you'd need to melt everything to slag, too.
Replies from: Randaly↑ comment by Randaly · 2013-08-07T20:54:01.081Z · LW(p) · GW(p)
(I am not RobbBB.)
1: The methods for constructing a nuclear bomb are by no means "widely known and easy to follow." Witness the often unsuccessful struggles of many nations for decades to acquire them. Cobalt bombs are even more advanced and difficult to construct than 'regular' nuclear weapons.
The scenario RobBB was presumably envisioning was one in which private individuals have gained the ability to essentially destroy society using, e.g., a super-pandemic or something. A sufficient number of people have always been able to destroy human society; no new technology would be needed for everybody in the world to simultaneously commit suicide, or, for that matter, for a massive nuclear exchange. Spontaneous collective suicide is not likely. However, actions by a small group of individuals (cf. al-Qaeda) are far more likely. Such groups do not at present have the ability to end human society as we know it; RobBB is envisioning a scenario where they gain it.
2: In the above comment, I wasn't talking about existential risks [*]; I am not claiming that a nuclear war would be an existential risk. For any conventional nuclear war of significant size, SF/Berkeley would almost certainly be targeted, killing everybody at MIRI. While I am unsure where the physical location of the servers storing their website/other data is, it's overwhelmingly likely that EMP from nuclear detonations would destroy the ability of anybody to access that data. Given the geographic distribution of LWers, it is likely that only a small-ish number would survive a massive nuclear exchange. Presumably at least a few of these individuals would attempt to carry on research and to recopy MIRI's research/ideas for future generations, assuming that humanity will eventually recover. However, it is extremely unlikely that they will get very far, and very likely that whatever they do write down will be lost or ignored.
[*] Rereading the comment, I actually was talking about existential risks. I have edited it for clarity; I was not intending to, but adopted RobBB's phrasing of something killing us all, while I regard nuclear risks as more likely to render MIRI useless by collapsing society. My bad.
Replies from: Bayeslisk↑ comment by Bayeslisk · 2013-08-07T21:23:37.574Z · LW(p) · GW(p)
Sorry about that. I got confused. s/you/RobBB/. I understand better now. I still believe that of the five, 3 is probably the most likely. I also 2-believe that I might overestimate that probability. (Sorry if I sound a bit strange. I'm starting to study lojban.)
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-31T07:28:19.633Z · LW(p) · GW(p)
I correctly distinguished among all 8 charities when I tested myself, so I'd know. :)
Replies from: NotInventedHere↑ comment by NotInventedHere · 2013-07-31T09:56:23.741Z · LW(p) · GW(p)
That still isn't an answer as to how MIRI' would differ from MIRI.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-31T19:27:01.698Z · LW(p) · GW(p)
RobbBB has answered that well. I was remarking against epistemic defeatism.
comment by Paul Crowley (ciphergoth) · 2013-07-30T18:35:03.145Z · LW(p) · GW(p)
I don't think it's good practice to mix "wow factor" bias into that list. That list is mostly made up of terms drawn from the psychology literature for empirically demonstrated deviations from rational behaviour that are predicted by some mathematical model, but for that one, this article is the top hit and no formal meaning has been assigned to it, never mind empirically demonstrated.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T21:03:32.550Z · LW(p) · GW(p)
You're right that there's no identifiable literature on the "wow factor" bias (I made up the name myself), but I do think it's plausible that people are skewed toward seeking out opportunities that are flashy and high status. I could very well be wrong, but I have a lot of personal experience that this is the case. Do you disagree?
comment by Grant · 2013-08-05T08:33:26.890Z · LW(p) · GW(p)
If existential risks are hugely important but we suck at predicting them, why not invest in schemes which improve our predictions?
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-05T11:32:17.757Z · LW(p) · GW(p)
I think such schemes are promising avenues for exploration. I don't currently know of any schemes that can demonstrate a track record of improving predictions, have room for more funding, and can make a case that marginal funding would yield a marginal benefit in making predictions.
Replies from: Grant↑ comment by Grant · 2013-08-06T05:02:02.902Z · LW(p) · GW(p)
I'm sure the use of prediction markets to predict existential threats is difficult, but it seems like you could at least use them to predict the emergence of AI. I'd be surprised if this wasn't discussed here at some point.
It seems to me that while prediction markets may not need funding from a technical perspective, public and especially political opinion on them does need some nudging. I don't think I'm entering mind-killing territory by suggesting it'd be good if politics didn't get in their way so much. I'm certainly no expert, but long-running markets where investors would expect interest paid would face all sorts of (US) regulatory hurdles beyond normal markets. It's very expensive just to find out which regulations you'll run afoul of, since US financial regulation obviously was not created with prediction markets that could last decades in mind (or prediction markets at all, for that matter).
↑ comment by Rob Bensinger (RobbBB) · 2013-08-07T21:49:45.956Z · LW(p) · GW(p)
I predict: 1,2,3,5
Huh?
That wouldn't be cool, it would be very Ted Kaczynski-ish.
Is not embarrassing yourself by looking a bit like Ted Kaczynski your only terminal value?
Just because you're making an AGI that can't be "proven friendly," it might be friendly.
I suggest thinking inside the box more. The hypothetical is 'What if FAI is impossible?', not 'What if we can't prove that FAI is actual?'. But all your suggestions are attempts to resist that premise, not attempts to explore its consequences. One of the basic concerns of the EA movement is that it's very dangerous not to be able to seriously entertain worst-case scenarios.
More, "But there's still a chance, right?" isn't the right way to think about any question. The question is whether the probability is low enough to be worth the risk, not whether the probability is nonzero.
Also: maybe we could reason with it before it killed us all, and possibly change its mind.
If it kills us all in less than the time it takes to construct a reasoned paragraph in any human language -- which is possibly the default scenario, the most common one -- then that will be difficult.
More importantly, any relevant argument we could come up with will almost certainly have already been thought of by the UFAI, and presented with far more rigor than any human could. You should be more confident that a chimpanzee could beat Garry Kasparov in chess than that a human could outwit a post-FOOM AI.
comment by brainoil · 2013-07-30T02:04:58.897Z · LW(p) · GW(p)
Well, curing cancer might be more important than finding a cure for the common cold, but that doesn't necessarily mean you should be trying to cure cancer instead of trying to get rid of the common cold, unless of course you have some inner quality that makes you uniquely capable of curing cancer. There are other considerations.
Reducing existential risks is important. But suppose it is not as important as ending world poverty. There's also a lot of uncertainty. It may be that no matter how hard we try, something will come out of the blue and kill us all (three hours from now). Still, if you are the only one who is doing something about existential risks, and are capable of reducing them a tiny bit, your work is very valuable.
The thing is, outside a few communities like this one, no one really cares about existential risks (even global warming is a political phenomenon for most people, rather than a scientific one, and other existential risks are the stuff of movies where blue-collar oil drillers go to space and blow up asteroids).
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T04:28:31.779Z · LW(p) · GW(p)
I don't think that everyone working on x-risk should quit x-risk. I also don't think that no one should go into x-risk. Obviously, we need some people working on x-risk, even if it's only for value of information considerations.
~
Still, if you are the only one who is doing something about existential risks, and is capable of reducing it a tiny bit, your work is very valuable.
How would you know if you're capable of reducing it a tiny bit?
comment by So8res · 2013-07-29T16:59:43.420Z · LW(p) · GW(p)
Another note: increasing education and speeding up economic development are actually very important forms of charity, as far as I can tell -- so important that the government collects taxes which are used to provide public education and foreign aid. If there were no public education or foreign aid in the world today, I would strongly consider donating to educational charities and economic charities instead of GiveWell's current top charities.
Unless you think public education and foreign aid should be completely de-funded in order to save more lives, you already support long-term infrastructural charities to some degree. There is certainly a tradeoff between the near-term known benefits and the long-term risky benefits, and it may well be that the cost effectiveness of saving lives right now outweighs the risk-adjusted cost effectiveness of better infrastructure and more education -- but I hesitate to accept the argument that short-term proven investment always dominates long-term speculative investment.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T04:38:52.433Z · LW(p) · GW(p)
short-term proven investment always dominates long-term speculative investment
I don't think it does, especially in situations of value of information.
~
government collects taxes which are used to provide public education and foreign aid. If there was no public education or foreign aid in the world today, I would strongly consider donating to educational charities and economic charities instead of GiveWell's current top charities.
I think this is a room-for-more-funding consideration. I think we already have very strong evidence that public education actually causes gains in education and that foreign aid causes gains in economic development. The reason I don't want to put money into them directly is not that they're speculative; it's that they're not sufficiently underfunded.
Replies from: So8res↑ comment by So8res · 2013-07-30T12:31:26.725Z · LW(p) · GW(p)
I don't think it does, especially in situations of value of information
You're right, I spoke too strongly. I was trying to summarize your relevant arguments quickly, and should have quantified e.g. as "sufficiently proven short-term investment always dominates sufficiently speculative long-term investment", which admittedly is tautological.
it's because they're not sufficiently underfunded.
I agree completely. I was responding mainly to this:
Or how about this one? It seems like there will be challenges in the far future that will be very daunting, and if humanity handles them wrong, things will be very bad. But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
which you write off as an appeal to common sense in a manner that I thought was somewhat unjust.
Your other arguments, that we're often bad at predicting the causes of the future and that it's easy to be overconfident about impressive sounding projects, were well received.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-30T20:58:32.588Z · LW(p) · GW(p)
which you write off as an appeal to common sense in a manner that I thought was somewhat unjust.
Could you elaborate?
Replies from: So8res↑ comment by So8res · 2013-07-30T23:26:44.916Z · LW(p) · GW(p)
Sure. This
But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
followed by this
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
followed by your main points, imply an argument that we shouldn't focus on speeding up economic development. It's this connotation that I found unjust, and responded to in the ancestor comment.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-07-31T09:22:05.992Z · LW(p) · GW(p)
I think focusing on speeding up economic development is important. But I disagree that we know of ways to speed up economic development that create more impact than AMF.
(Note: this is not saying AMF is optimal for speeding up economic development; it's that we don't know enough about economic development to say.)
Replies from: So8res↑ comment by So8res · 2013-07-31T11:39:12.476Z · LW(p) · GW(p)
I disagree that we know of ways to speed up economic development that create more impact than AMF
Potential candidates:
- Public education
- Foreign aid
this is not saying that AMF is optimal ... it's that we don't know enough
So adjust the expected utility of public education and foreign aid downwards in proportion to their risk.
If you want to save the most people with your money, then you need to purchase units of the most cost-effective charity (after risk adjustment -- a rough numerical sketch of what I mean by that is below). We already do a lot of economic development (that's what public education and foreign aid are for).
You must believe one of the following:
a) Risk adjusted economic stimulus (in the form of public education / foreign aid) is more cost effective than AMF
b) Risk adjusted stimulus is less cost effective than AMF
c) Risk adjusted stimulus is precisely as cost effective as AMF
Your comment implies you reject a). If b) is the case, then you should want to transfer funds from education to AMF until they equalize. c) implies indifference between them, and is implausible.
Do you believe public education should be defunded to support AMF? It seems to me that you must. That is a fine argument to make, but it is a much less obvious point, and I don't think your casual dismissal of economic stimulus did it justice.
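Here is that sketch of the risk adjustment (all numbers are placeholders, not actual estimates for AMF or any education program):

```python
# Toy risk-adjusted cost-effectiveness comparison. The figures below are
# placeholders chosen only to illustrate the structure of the argument.
def risk_adjusted_value(benefit_per_dollar: float, p_works: float) -> float:
    """Expected benefit per dollar, discounted by the chance the intervention works."""
    return benefit_per_dollar * p_works

amf = risk_adjusted_value(benefit_per_dollar=1.0, p_works=0.9)        # well-evidenced
stimulus = risk_adjusted_value(benefit_per_dollar=3.0, p_works=0.2)   # bigger payoff, riskier

if stimulus > amf:
    print("a) risk-adjusted stimulus is more cost effective than AMF")
elif stimulus < amf:
    print("b) risk-adjusted stimulus is less cost effective than AMF")
else:
    print("c) precisely as cost effective (implausible)")
```

Whatever placeholder numbers you plug in, you land in exactly one of a), b), or c).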
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-02T12:42:14.600Z · LW(p) · GW(p)
I disagree with your conclusion because there's a difference between...
(1) risk-adjusted stimulus beating AMF and marginal risk-adjusted stimulus beating marginal contributions to AMF
(2) risk-adjusted stimulus beating AMF and risk-adjusted stimulus being done by those who know what they're doing beating AMF
(3) risk-adjusted stimulus beating AMF and $1B in risk-adjusted stimulus beating $1B to AMF
I do think it's quite plausible that public education spending in the developed world is not as cost-effective at producing well-being as spending on AMF (until AMF runs out of room for more funding). But I also think there are far less controversial and less useful places we could get money for AMF from.
Along the same lines as what you've asked me, if you think economic stimulus is important, do you think AMF should be defunded in order to donate to the US government or to developed world education?
Replies from: So8res↑ comment by So8res · 2013-08-02T23:50:46.052Z · LW(p) · GW(p)
do you think AMF should be defunded in order to donate to US government or to developed world education?
No. To developing-world education, probably (given sufficient evidence of effectiveness).
On a mildly related note, I see AMF as an organization that treats the symptom of malaria instead of the cause. I'd rather donate money to an organization that makes measurable progress towards eliminating malaria entirely.
Treating symptoms is important. Immediate feedback is a powerful tool. However, I think it's possible to lose sight of the forest for the trees. Supporting provably-effective short term charities could lead to risk aversion that costs lives in the long run.
None of your post contradicts these statements directly, but I found it uncomfortably dismissive of certain long-term goals. My current feeling is that GiveWell is too risk-averse. I haven't inspected that feeling lately, as my conclusion w.r.t. MIRI short-circuited further inquiry into GiveWell.
To rephrase my original concern, I feel like it is possible to accept all the arguments in your post and use them to argue in favor of donating to charities that improve third-world education, despite the fact that the connotation of your post implies you disagree. Specifically, the economic-development snipe felt somewhat dishonest.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-03T21:28:02.951Z · LW(p) · GW(p)
I think there's an interesting and decently evidenced argument to be made that fighting disease is actually the best way to boost developing-world education, better than direct interventions in education (see here and here, plus GiveWell's concerns about education).
~
I'd rather donate money to an organization that makes measurable progress towards eliminating malaria entirely.
With 100% bednet coverage, the number of attack vectors for malaria would be substantially lower, and malaria could be eliminated, so I think AMF is a plausible candidate for malaria elimination as well as malaria reduction.
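As a rough illustration of why coverage matters for elimination rather than just reduction (a toy threshold sketch, not a calibrated malaria model; the numbers are invented):

```python
# Toy transmission-threshold sketch: an outbreak dies out once the effective
# reproduction number drops below 1. R0 and net efficacy below are invented.
R0 = 4.0        # hypothetical basic reproduction number with no bednets
efficacy = 0.9  # hypothetical fraction of transmission a net blocks for its user

for coverage in (0.0, 0.25, 0.5, 0.75, 1.0):
    r_eff = R0 * (1 - coverage * efficacy)
    status = "elimination plausible" if r_eff < 1 else "transmission persists"
    print(f"coverage {coverage:.0%}: R_eff = {r_eff:.2f} -> {status}")
```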
But what other opportunities are there? I suppose you could try to aim to fund vaccine research, but there aren't any organizations pursuing a malaria vaccine with room for more funding (I've looked, documentation forthcoming on Giving What We Can).
~
Treating symptoms is important. Immediate feedback is a powerful tool. However, I think it's possible to lose sight of the forest for the trees. Supporting provably-effective short term charities could lead to risk aversion that costs lives in the long run.
As much as I've seemingly argued against this, I think the sentiment is important. The problem, however, is that there are significant barriers right now to implementing these long-run approaches -- we simply don't know enough yet. Thus, I prefer a value of information approach.
~
Specifically, the economic-development snipe felt somewhat dishonest.
Dishonesty, to me, implies malevolence; an intention to deceive or mislead. Do you think I'm being misleading?
Replies from: So8res↑ comment by So8res · 2013-08-04T02:47:27.316Z · LW(p) · GW(p)
Thanks for all of the resources. I've updated considerably in favor of AMF as a means to improving third-world conditions.
We simply don't know enough yet. Thus, I prefer a value of information approach.
Conceded. Really I'm just lamenting that we aren't non-deterministic problem solvers. I still think that there are better options out there, but I don't know what they are and I don't have a way to differentiate the better from the worse. This is frustrating, but it's not an argument. My desire to donate to AMF has increased, and I've decreased my probability that GiveWell is too risk-averse.
I still believe that there are under-funded charities with long-term goals that provide more utility for my dollar (as per my other top-level comment), but this is due to viewing the problem space as different in some areas. I am now much closer to agreement with your points in the third-world assistance space.
Do you think I'm being misleading?
Yes, a bit. No offense intended. The general article was not misleading, and the intent was well-received. However, I still feel that the tone of this:
But if people were better educated and had more resources, surely they'd be better at handling those problems, whatever they may be. Therefore we should focus on speeding up economic development, right?
These three examples are very common appeals to commonsense. But commonsense hasn't worked very well in the domain of finding optimal causes.
was somewhat misleading. It felt like an attack, without sufficient evidence, on a position you disagree with. To me, it felt like you were providing evidence against X, and then you slipped in a jab against Y, which is related to X but was not covered by the evidence provided. I'm not sure of the name for this logical fallacy, but yeah, it felt like you were (perhaps unconsciously) trying to garner support against Y via arguments against the related X.
The evidence and reasoning you provided above go a fair way towards arguing your point, and I've updated accordingly, but the above quote still seems somewhat naked and misleading in the original article.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-04T05:31:21.726Z · LW(p) · GW(p)
Really I'm just lamenting that we aren't non-deterministic problem solvers. I still think that there are better options out there, but I don't know what they are and I don't have a way to differentiate the better from the worse. This is frustrating, but it's not an argument. My desire to donate to AMF is increased, and I've decreased my probability that GiveWell is too risk adverse.
I am very sympathetic to that sentiment. And I'm glad to see someone updating quickly and properly.
~
No offense intended.
I'm not offended. I just want to make sure to correct the article, because I don't want to be misleading.
~
To me, it felt like you were providing evidence against X, and then you slipped in a jab against Y, which is related to X but was not covered by the evidence provided.
That's certainly possible. What would you say X and Y are?
I think X is the position that "economic development will reliably and predictably reduce existential risk" or a weaker claim like "economic development has enough of a chance of reducing existential risk that we should donate to it instead of something else".
Is Y something like "economic development in the developing world is a reasonable target area for donations"?
Replies from: So8res↑ comment by So8res · 2013-08-04T17:47:40.012Z · LW(p) · GW(p)
What would you say X and Y are?
X was roughly "we should donate to speculative projects with long term goals" and Y was "we should focus on developing the economy and improving education".
Arguments supporting "some things are too good to be true" / "I'm very skeptical of speculative projects" were against X. The statement deriding Y (quoted above) seemed out of place, because you did not successfully link economic development and education improvement with the class of speculative long-term charities that you argue against supporting.
For what it's worth, I still don't think that public education / economic development fall into that class. They are long term, but their impact is well supported. The arguments that caused me to update were:
1) Reducing disease goes a long way towards stimulating the economy and improving education levels
2) Simply reducing attack vectors goes a long way towards eliminating diseases
3) It is difficult to find other means of economic/educational stimulus that are more effective (after adjusting for risk)
So while I agree more with your conclusions now, I still think that the jab at promoting economic development / education is out of place.
In other words, the current connotation of the article (with respect to economic stimulus) is "you think you should fund education/economic growth, but you should actually fund AMF instead", whereas I think the correct connotation is more like "even if you want to fund education/economic growth, AMF is the best way to do it".
comment by bokov · 2013-08-13T21:43:43.575Z · LW(p) · GW(p)
I'll make the commonsense observation that if population growth continues without any progress in space colonization or other highly speculative projects, the Malthusian trap will eventually again become an existential risk in one way or another, and environmental problems might be early signs of this.
Passing over unproven causes, we're left with promoting family planning and the empowerment/education of women. Are there any GiveWell-endorsed charities with a proven track record of limiting population growth by these or any other means? How is this effectiveness measured? Do you know of any data available about charities that have the reverse effect of increasing population growth as an intended or unintended consequence of what they do?
Is the concern about overpopulation and its sequelae itself a speculative cause?
If so, do you believe population growth will decline on its own? What is the most likely mechanism by which that will happen?
If you don't believe population growth will decline without coordinated action, what do you consider the most likely scenario under which the human race does not face extinction within the next several hundred years, and yet population growth continues at currently projected rates without any speculative causes such as space colonization, seasteading, uploading, or FAI saving our bacon? How likely do you consider this scenario compared to extinction?
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-14T21:48:56.933Z · LW(p) · GW(p)
Passing over unproven causes
I don't think unproven causes should be passed over. Instead, I think we should be open to investigating unproven causes using a value of information approach.
~
Are there any Givewell-endorsed charities with a proven track record of limiting population growth by these or any other means? How is this effectiveness measured?
There's Population Services International. Also, it's possible that AMF might reduce population growth.
~
How likely do you consider this scenario compared to extinction?
I have no ability or basis to make a reliable, useful prediction of this kind.
Replies from: bokov↑ comment by bokov · 2013-08-14T23:03:27.880Z · LW(p) · GW(p)
How likely do you consider this scenario [] compared to extinction?
I have no ability or basis to make a reliable, useful prediction of this kind.
The prediction being made is implicit. Maybe I should have said testable hypothesis.
By "passed over" I didn't mean to "ignored". I meant something more like, "cannot be relied on to have the intended impact at this time". So, if a problem needs to be solved urgently, proven charities are the way to go. At this point, one necessarily assigns weights to the following mutually exclusive beliefs:
1A. Exceeding the planet's carrying capacity (in the generalized sense that doesn't imply we know which specific resource we will overuse in a manner that kills us) is a speculative existential threat in the same category as asteroid impacts and rogue AI because population growth is already slowing and at present trends it will slow to zero or even into population decline soon enough to avert disaster.
1B. Exceeding the planet's carrying capacity (as above) is a speculative existential threat in the same category as asteroid impacts and rogue AI because so far new technology has always found ways to expand the planet's carrying capacity just in time to prevent disaster and will continue to do so indefinitely.
1C. Exceeding the planet's carrying capacity (as above) is a speculative existential threat in the same category as asteroid impacts and rogue AI because I have some other evidence that this is too unlikely to be worth worrying about.
2 Exceeding the planet's carrying capacity (as above) is a sufficiently credible and immediate existential risk, but there exist proven causes that I believe are the best way to tackle this problem.
3 Exceeding the planet's carrying capacity (as above) with the resultant collapse of civilization and possible extinction is not preventable. The practical altruist's goal is instead reducing suffering as much as possible while we wait for everything to be undone by our inevitable demise.
For my part, I've been making the implicit assumption that you consider the value of information to be lower than the value of concrete charitable outcomes, i.e. the intellectually honest person's version of the "we should solve all problems on Earth before we go exploring the universe" argument. To be fair, I don't think you actually said charitable experiments should be funded less than proven charities. For all I know you might privately believe the opposite: that even proven methods aren't enough and we need to desperately expand our capabilities by funding speculative projects more (with concrete criteria for measuring outcomes, of course).
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-16T11:36:41.316Z · LW(p) · GW(p)
I'd put decent credence on 1A, but I don't expect actual population decline. I'd also put decent credence on 1B, though perhaps not indefinitely. There does seem to be lots of room for further innovation in farming and resource extraction. Furthermore, one could also imagine eventual colonization of other planets.
Secondly, I think you're missing the option I most endorse:
4 Exceeding the planet's carrying capacity (as above) is a sufficiently credible and immediate existential risk to take seriously (but perhaps still neither as credible nor as immediate as other existential risks). However, there are no known interventions at this time to reliably improve our planet's carrying capacity. Therefore, our best option is to try to find such interventions.
I agree with 4 to the degree that I disagree with 1B. I think there's a good chance existing agricultural innovations are already good enough and just need to be deployed. But I don't think funding that is the most cost-effective thing I could be doing.
Lastly, as a nitpick: I don't think asteroid impacts and Rogue AI are in the same category. Asteroid risk is actually fairly well understood, relatively speaking.
Replies from: bokov, Yuyuko↑ comment by bokov · 2013-08-16T18:14:23.713Z · LW(p) · GW(p)
However, there are no known interventions at this time to reliably improve our planet's carrying capacity.
True enough for the supply side. The demand-side interventions are obvious, but they are not seriously considered or even discussed because of religious/political/cultural stigma.
Replies from: peter_hurford↑ comment by Peter Wildeford (peter_hurford) · 2013-08-16T18:47:27.407Z · LW(p) · GW(p)
The demand-side interventions are obvious, but are not seriously considered or even discussed because of religious/political/cultural stigma.
What interventions would you consider?
Replies from: bokov↑ comment by bokov · 2013-08-16T19:10:13.754Z · LW(p) · GW(p)
The final outcome involves people choosing to reproduce less, obviously. The means to get there in a way that's broadly acceptable is the tough problem. But perhaps not the same order of tough as AI.
Many religions are hostile to family planning and no mainstream ones I know of are actively in favor of it.
People who choose to have large numbers of children have the advantage of numbers (insofar as their large-family values get passed on to their children).
Civil libertarians are uncomfortable with population control because of it being a cover for racist policies in the recent past.
Economic libertarians are uncomfortable with population control because they have come to associate that goal with intrusive government policy and this prevents them from even considering free-market means to achieve that goal.
Many, maybe most people like to leave the option of having more-than-replacement levels of children for emotional reasons that were perhaps shaped by evolution.
It's a lot to overcome. Perhaps the first step is at least separating the actual issue from the misguided solutions that have been attempted, and making it less of a taboo topic for public debate. I don't know, though. It's easier to see the destination than how to get there.
↑ comment by Yuyuko · 2013-08-16T20:07:13.325Z · LW(p) · GW(p)
Exceeding the planet's carrying capacity (as above) is a sufficiently credible and immediate existential risk to take seriously (but perhaps still is not as credible nor as immediate as other existential risks). However, there are no known interventions at this time to reliably improve our planet's carrying capacity.
Though I fear it hypocritical to mention: perhaps you ought to give some thought to reducing consumption per individual living human instead? Particularly among those who already enjoy the largesse?
Replies from: bokov, peter_hurford