Where I've Changed My Mind on My Approach to Speculative Causes

post by Peter Wildeford (peter_hurford) · 2013-08-16T07:09:54.834Z · LW · GW · Legacy · 51 comments

Contents

  My Argument, As It Currently Stands
  Specific Things I Changed My Mind About
    I used to think donating to AMF, at least in part, was important for me.  Now I don't.
    I now agree that there are relevant economies of scale in pursuing information that I hadn't taken into account.
    I was partially mistaken in thinking about how to "prove" speculative causes.
    I had previously not fully taken into account the cost of acquiring further information.
    I'm slightly more in favor of acting randomly (trial and error).

Follow-up to Why I'm Skeptical About Unproven Causes (And You Should Be Too)

Previously, I wrote "Why I'm Skeptical About Unproven Causes (And You Should Be Too)" and a follow-up essay, "What Would It Take to Prove a Speculative Cause?".  Both sparked a lot of discussion on LessWrong, on the Effective Altruist blog, and on my own blog, as well as many hours of in-person conversation.

After all this extended conversation, I've changed my mind on a few things, which I will elaborate on here.  I hope in doing so I can (1) clarify my original position and (2) explain where I now stand in light of all the debate, so people can engage with my current ideas rather than the ideas I no longer hold.  My opinions tend to change quickly, so I think updates like this will help.

 

My Argument, As It Currently Stands

If I were to communicate one main point of my essay, based on what I believe now, it would be that when you're in a position of high uncertainty, the best response is a strategy of exploration rather than a strategy of exploitation.

What I mean by this is that given the high uncertainty of impact we see now, especially with regard to the far future, we're better off trying to find more information about impact and reduce our uncertainty (exploration) rather than pursuing whatever we think is best (exploitation).
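To make the explore/exploit framing concrete, here is a minimal, purely illustrative sketch (my own toy example, not from the original essays) of an epsilon-greedy bandit: a donor repeatedly choosing among three hypothetical causes whose impact is only observed noisily. Pure exploitation tends to lock in whichever cause happened to look best early on, while spending a small fraction of effort on exploration usually finds the genuinely better cause.

```python
import random

# Toy multi-armed bandit: three "causes" whose true impact per dollar is unknown
# to the donor and only observed noisily. All numbers are hypothetical.
TRUE_IMPACT = [1.0, 1.5, 3.0]
NOISE = 2.0
ROUNDS = 1000

def pull(arm):
    """One noisy observation of an arm's impact."""
    return random.gauss(TRUE_IMPACT[arm], NOISE)

def run(epsilon):
    """Epsilon-greedy: explore a random arm with probability epsilon,
    otherwise exploit the arm with the best observed average so far."""
    totals, counts, reward = [0.0] * 3, [0] * 3, 0.0
    for _ in range(ROUNDS):
        if random.random() < epsilon or 0 in counts:
            arm = random.randrange(3)                      # explore
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            arm = averages.index(max(averages))            # exploit
        r = pull(arm)
        totals[arm] += r
        counts[arm] += 1
        reward += r
    return reward / ROUNDS

def average_reward(epsilon, trials=200):
    """Average per-round impact over many simulated donors."""
    return sum(run(epsilon) for _ in range(trials)) / trials

random.seed(0)
print("pure exploitation (epsilon=0.0):", round(average_reward(0.0), 2))
print("mostly exploit, some exploration (epsilon=0.1):", round(average_reward(0.1), 2))
```

The analogy is loose, since real giving decisions aren't repeated a thousand times, but it captures why reducing uncertainty can beat acting on the current best guess.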

This would imply that:

 

And to be clear, here are specific statements that address misconceptions about what I have argued:


 

And, lastly, if I were to make a second important point, it would be that it's difficult to find good opportunities to buy information.  It's easy to think that any donation to an organization will generate good information, or that we'll automatically make progress just by working.  I think some element of random pursuit is important (see below), but all things considered I think we're doing too much random pursuit right now.

 

Specific Things I Changed My Mind About

Here are the specific things I've changed my mind about:

 

I used to think donating to AMF, at least in part, was important for me.  Now I don't.

I underestimated the power of exploration and the opportunities for it that already exist, so I now think that 100% of my donations should go toward trying to assess impact.  I've been persuaded that there is already quite a lot of money going toward AMF and it might not need more money as quickly as I had thought, so for the time being it's probably more appropriate to save and then donate to opportunities to buy information as they come up.

 

I now agree that there are relevant economies of scale in pursuing information that I hadn't taken into account.

What I mean by this is that it might not be appropriate for individuals to work on purchasing information themselves.  Doing so could end up splitting the time of organizations unnecessarily as they provide information to many different people.  Also, many people don't have the time to do this themselves.

I think this has two implications:

 

I was partially mistaken in thinking about how to "prove" speculative causes.

I think there was some value in my essay "What Would It Take to Prove a Speculative Cause?" because it talked concretely about strategies some organizations could take to get more information about their impact.

But the overall concept is mistaken -- there is no arbitrary threshold of evidence that a speculative cause needs to cross, and I was wasting my time trying to come up with one.  Instead, I think it's appropriate to continue doing expected value calculations as long as we maintain a self-skeptical, pro-measurement mindset.

 

I had previously not fully taken into account the cost of acquiring further information.

The important question about the value of information is not just "what does this information get me in terms of changing my beliefs and actions?" but "how valuable is this information?", that is, do the benefits of gathering it outweigh all the costs?  In some cases, I think the benefits of further proving a cause probably don't outweigh the costs.
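As a minimal numeric sketch of that comparison (all numbers below are hypothetical, chosen only to illustrate the trade-off between acting now and paying to resolve uncertainty first):

```python
# Hypothetical numbers, purely to illustrate weighing information value against its cost.
budget = 100_000          # dollars available to donate
study_cost = 20_000       # cost of the research that would resolve the uncertainty

value_A = 1.0                                # known impact per dollar of a "proven" option
value_B_if_good, value_B_if_bad = 3.0, 0.5   # speculative option's two possible values
p_good = 0.5                                 # prior that the speculative option is the good case

# Option 1: act now on current beliefs, giving everything to whichever looks better.
expected_B = p_good * value_B_if_good + (1 - p_good) * value_B_if_bad
act_now = budget * max(value_A, expected_B)

# Option 2: buy the information first, then give the remainder to whichever option it favors.
remaining = budget - study_cost
act_after_info = (p_good * remaining * max(value_A, value_B_if_good)
                  + (1 - p_good) * remaining * max(value_A, value_B_if_bad))

print("expected impact, acting now:        ", act_now)         # 175,000 impact units
print("expected impact, buying info first: ", act_after_info)  # 160,000 impact units
# With these particular numbers the study isn't worth its cost; make the
# uncertainty bigger or the study cheaper and the comparison flips.
```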

For one possibly extreme example: I don't know the rationale for doing a 23rd randomized controlled trial on anti-malaria bednets after the previous 22, but to justify the high cost of an RCT, that trial would likely have to test something more specific than the general effectiveness of bednets.

Likewise, there are costs to organizations of devoting resources to measuring themselves and being more transparent.  I don't think these costs are particularly high, or that they defeat the idea of devoting more resources to this area, but I hadn't really taken them into account before.

 

I'm slightly more in favor of acting randomly (trial and error).

I still think it's difficult to find good value-of-information opportunities, and it's very easy to get caught "spinning our wheels" in research, especially when that research has no clear feedback loops.  One perhaps somewhat controversial example is the multi-century lack of progress on some problems in philosophy (think meta-ethics), which shows what can happen to a field when there aren't good feedback loops to ground it.

However, I underestimated the amount of information that emerges just from doing one's normal activities.  The implication is that it's more worthwhile than I initially thought to fund speculative causes simply so they can continue to scale and operate.

-

(This was also cross-posted on my blog.)

51 comments

Comments sorted by top scores.

comment by RyanCarey · 2013-08-16T13:24:20.562Z · LW(p) · GW(p)

An impressive post. On a personal note, sometimes I think that I will one day lose the ability to change my mind. I will become dull, stubborn and conservative, and keep publishing rephrasings of my same old views, as do many philosophers and academics. From then on, I will just keep rationalising my existing views, on and on, until I die. Whatever degree of neuroplasticity ageing people have, this probably motivates me to update aggressively, and now! So congratulations on being mentally alive!

Now, specific feedback: As per your initial post, you highly value the far future. As per this post, you favour exploration over exploitation.

Now that you have increased your valuation of Givewell, you should do the same for the Future of Humanity Institute and Center for Study of Existential Risk, right? If you still do not value FHI and CSER, perhaps you still want to improve the future with interventions like AMF’s that have ‘proven’ near-future benefits. But what about scanning for asteroids? This has a pretty straightforward case for its benefits, but is not ‘proven’, and is never going to be supported by an RCT or cohort study. I’m not sure you’ve really thought through the consequences of dissolving this concept of ‘proven’ vs ‘unproven’. Your thoughts?

Also, I predict some pushback from LW on your use of the word 'random’. ‘Educated guess’ or ‘estimate’ seem better.

Lastly, this should be promoted to main, because it is high quality, and this rapid update should be presented beside the original contention.

Replies from: CarlShulman, peter_hurford, Crux
comment by CarlShulman · 2013-08-16T18:20:26.092Z · LW(p) · GW(p)

But what about scanning for asteroids?

That's mostly solved, all the dinosaur-killers have been tracked, and 90%+ of the 1 km size ones. So there's not much room for more funding left, and it seems very likely that's not the best thing to work on in terms of existential risk (the mopping-up effort is now increasingly aimed at city-smashers or tsunami-triggers).

Dark comet risk has been less addressed, because it is much harder to track such comets than asteroids.

Wikipedia:

It is currently (as of late 2007) believed that there are approximately 20,000 objects capable of crossing Earth's orbit and large enough (140 meters or larger) to warrant concern.[43] On the average, one of these will collide with Earth every 5,000 years, unless preventative measures are undertaken.[44] It is now anticipated that by year 2008, 90% of such objects that are 1 km or more in diameter will have been identified and will be monitored.

(This has happened).

The further task of identifying and monitoring all such objects of 140m or greater is expected to be complete around 2020.

Replies from: Lumifer
comment by Lumifer · 2013-08-16T19:06:21.918Z · LW(p) · GW(p)

That's mostly solved

I don't think so. The asteroid problem doesn't involve only what the astronomers technically call asteroids. It involves any sufficiently large body moving at sufficiently high speed on an intercept trajectory.

To "mostly solve" this problem you need either to account for all sufficiently large bodies in the Solar system (we'll agree not to worry about whatever might arrive out of interstellar space) or build some kind of deflection/destruction system which can handle everything the Solar system can throw at our planet.

Replies from: gwern
comment by gwern · 2013-08-16T19:39:20.267Z · LW(p) · GW(p)

To "mostly solve" this problem you need either to account for all sufficiently large bodies in the Solar system (we'll agree not to worry about whatever might arrive out of interstellar space)

You should probably read Carl's link.

Replies from: Lumifer
comment by Lumifer · 2013-08-16T19:46:25.066Z · LW(p) · GW(p)

I did. I am not impressed by the statistics quoted in it. In particular, there is a neat trick in transitioning from "NASA reports that all near-earth asteroids larger than 10 kilometers in diameter ... have already been identified" to "This eliminates much of the estimated risk due to {note the glaring empty space here where words "near-earth" used to be} asteroids"

Asteroid impact is mostly a black-swan type of problem: you can identify the risk you see but you have very little idea of the remaining risk from things you do not see.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-16T20:11:18.743Z · LW(p) · GW(p)

Near earth asteroids are the primary threat set here; only a tiny fraction of the objects with any chance of hitting Earth are not in that set. That's precisely why they say it eliminates much of the estimated risk due to asteroids.

Replies from: Lumifer
comment by Lumifer · 2013-08-16T20:31:09.399Z · LW(p) · GW(p)

Near earth asteroids are the primary threat set here

I think it used to be the primary threat set. The claim is that near earth asteroids are not a threat because we looked at them and established that large ones are not going to hit Earth in the near future. Thus near earth asteroids are not the primary threat set any more.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-18T19:15:56.615Z · LW(p) · GW(p)

This seems like an odd use of language which misses the fundamental point: the observation of the near Earth asteroids reduces the estimated risk level by orders of magnitude. Whether the remaining risk is still concentrated in the near-earth case is a secondary consideration and not relevant to what was being discussed.

Replies from: Lumifer
comment by Lumifer · 2013-08-19T18:11:41.443Z · LW(p) · GW(p)

I don't know why you think this use of language is odd. Saying that "we thought X was dangerous, we looked at it closely and it turns out X isn't dangerous at all" has the same meaning as "we mis-estimated the danger from X and then corrected the estimate".

If your updated belief is that there is little danger from near-earth asteroids, then the original belief that near-earth asteroids were the primary threat set was incorrect.

Replies from: JoshuaZ, timtyler
comment by JoshuaZ · 2013-08-19T18:31:32.910Z · LW(p) · GW(p)

Because it misses the point that the total risk from asteroids isn't that high. Yes, of the remaining asteroid threat, more of it is from non near Earth asteroids, but that's not relevant to the discussion at hand. Hence the phrase in the report that you objected to, "This eliminates much of the estimated risk due to asteroids", makes complete sense.

Replies from: Lumifer
comment by Lumifer · 2013-08-19T18:57:09.368Z · LW(p) · GW(p)

We are talking past each other.

Let me try to reformulate my point. We're talking about existential risk of an asteroid impact (where "asteroid" is defined as anything large enough moving fast enough). Large asteroids have hit Earth before, we have a fairly good idea how often such things happen. The historical record gives us the basis for a guesstimate of the risk.

That risk estimate is, of course, quite low. Still, we went out looking for things which might hit us in the near future. Note the asymmetry here: were we to find something our risk estimate would skyrocket, but were we to find nothing, it wouldn't perceptibly change.

So we looked at near-earth asteroids because, well, they are near-earth. Turned out none of them is on a collision course with Earth in the foreseeable future. This is good, of course, but it does not mean that the estimated risk went down -- what happened was that it did not go up and that's a different thing.

My original objection was to the characterization of asteroid risk as a "solved problem". It is not. Saying this is like looking up, noticing that the ceiling isn't about to collapse, and then on this basis confidently pronouncing that things falling on your head is a solved problem.

Replies from: asr, JoshuaZ
comment by asr · 2013-08-19T20:04:54.174Z · LW(p) · GW(p)

So we looked at near-earth asteroids because, well, they are near-earth. Turned out none of them is on a collision course with Earth in the foreseeable future. This is good, of course, but it does not mean that the estimated risk went down -- what happened was that it did not go up and that's a different thing.

I had the impression that the near-earth ones were the ones that, averaged over earth's history, are the bulk of the problem. So if the current crop of near-earth asteroids aren't likely to hit us in the historically-near future, doesn't that mean that our near-future risk of impact is below the long-term average risk?

(I am not an astronomer and do not vouch for "NEAs are the main part of the risk" from personal knowledge.)

Replies from: Lumifer
comment by Lumifer · 2013-08-19T20:37:05.117Z · LW(p) · GW(p)

Well, IANAAE (I Am Not An Astronomer Either) but I think that with respect to historical record, there are these considerations:

  • We're pretty sure that large asteroids (defined as above) have struck Earth before. We are not sure where they came from.

  • With the passage of time the frequency of collisions should decline as Earth sweeps a path free of other space objects. So the future risk of impact is below the historical risk of impact.

  • The extinction-scale impact risk seems to be very small. In geologically recent times Earth was not bombarded by asteroids.

comment by JoshuaZ · 2013-08-19T19:29:43.394Z · LW(p) · GW(p)

So we looked at near-earth asteroids because, well, they are near-earth. Turned out none of them is on a collision course with Earth in the foreseeable future. This is good, of course, but it does not mean that the estimated risk went down -- what happened was that it did not go up and that's a different thing.

Yes, it does mean the estimated risk has gone down. It means that the largest set of obvious candidates aren't on collision courses. If seeing them would make it go up, not seeing asteroids on collision paths must push it down. This is the conservation of expected evidence.
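(A toy numeric illustration of that identity, with entirely made-up numbers: the prior must equal the probability-weighted average of the possible posteriors, so if spotting a threatening object would push the estimate up, spotting none must pull it down, by an amount that depends on how much of the prior risk came from objects the survey could detect.)

```python
# Entirely hypothetical numbers, just to illustrate conservation of expected evidence.
p_found = 0.01             # P(E): survey finds a large NEO on a collision course
p_impact_if_found = 0.95   # P(impact | E)
p_impact_if_none = 0.0001  # P(impact | not E): impact from something the survey missed

# The prior must be the probability-weighted average of the possible posteriors.
p_impact_prior = p_found * p_impact_if_found + (1 - p_found) * p_impact_if_none

print("prior P(impact):                    ", round(p_impact_prior, 5))  # ~0.0096
print("posterior, nothing found:           ", p_impact_if_none)          # below the prior
print("posterior, had something been found:", p_impact_if_found)         # far above the prior
# How big the downward move is depends entirely on how much of the prior
# risk was attributed to objects the survey could have detected.
```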

Replies from: Lumifer
comment by Lumifer · 2013-08-19T19:45:26.196Z · LW(p) · GW(p)

If seeing them would make it go up, not seeing asteroids on collision paths must push it down.

Yes, technically. But I've already been through that in a thread here -- that was the whole thing about how checking your garbage can and not finding a tiger in it happens to be evidence for non-existence of tigers.

I'm willing to grant that not finding any near-earth asteroids on a collision course reduces the probability of an impact during, say, the next 50 years, but that reduction is minuscule. In fact I'd call it "undetectable".

To throw in another metaphor, if I'm driving on a highway, look around, and see that no cars are headed straight at me -- technically speaking, that reduces the probability that I'll get into a car accident this year. But it reduces this probability by an infinitesimal amount only. On the other hand, if I see a car that's about to ram me, the probability of getting into an accident this year HUGELY increases.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-19T19:49:16.624Z · LW(p) · GW(p)

Ok. So the thing is, most asteroids don't change their orbits that drastically. NEOs aren't just things near Earth's position right now, but all asteroids that orbit the sun roughly in the plane of the ecliptic, about 0.9 to about 1.4 AU from the sun. So the vast majority of objects which have any substantial chance of hitting Earth fall into this category. And we can plot their trajectories out far into the future.

Replies from: Lumifer
comment by Lumifer · 2013-08-19T20:05:16.841Z · LW(p) · GW(p)

...and we're back to me pointing out that once you have determined that these are not a threat, these are not a threat.

But let's try another tack. Do you know of any data-supported estimates of the asteroid impact risk? I'm not interested in the number per se, but more in the data on which it is based and the procedure of estimation.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-19T20:21:51.461Z · LW(p) · GW(p)

...and we're back to me pointing out that once you have determined that these are not a threat, these are not a threat.

Which we've already addressed, since what is relevant is trying to estimate the total risk. I thought I had explained that already. Is there something that is wrong with that logic?

But let's try another tack. Do you know of any data-supported estimates of the asteroid impact risk? I'm not interested in the number per se, but more in the data on which it is based and the procedure of estimation.

So, one thing to look at is the Near Earth Object Program, which has a lot of links and discussions. Most of the specific asteroids are targeted by ground-based telescopes, although a lot of the initial data comes from the WISE mission, which was able to spot objects over a fairly broad range (for most purposes, out to a bit beyond where the main asteroid belt is). In addition to this, we have models of the solar system which try to estimate how many large objects are likely to be missed, as well as estimates from prior background impact rates. Since the Earth is geologically active, only some of the largest asteroid impacts end up leaving a direct trace here, so we have to use the Moon and other objects to make those sorts of estimates. The links Carl gave earlier are also worth reading and discuss some of this in further detail.

Replies from: Lumifer
comment by Lumifer · 2013-08-19T20:56:26.410Z · LW(p) · GW(p)

what is relevant is trying to estimate the total risk

Right, and I'm asserting that finding that risk from near-earth asteroids in the next 100 years or so is negligible should not affect the estimate of the total risk in any meaningful way (compared to the pre-NEO-survey estimate).

Now, for the actual estimates we're interested in, that's what is called the background frequency, aka unconditional expectations of impacts. The source for that goes to Chapman, C. R., and D. Morrison (1994). Impacts on the Earth by asteroids and comets: Assessing the hazard. Nature 367, 33–40, which, unfortunately, is behind a paywall and I'm too lazy to scour the 'net for an open copy. The basic expectation of frequency of impacts, though, is visible through other sources (see e.g. http://neo.jpl.nasa.gov/risk/doc/palermo.pdf)

To re-express my point in these terms, the survey of near-earth asteroids does not change the background frequency.

Replies from: gwern, JoshuaZ
comment by gwern · 2013-08-20T00:21:34.731Z · LW(p) · GW(p)

http://schillerlab.bio-toolkit.com/media/pdfs/2010/03/16/367033a0.pdf

(One of the nice things about my Xmonad setup is that I have a shortcut which yanks the current copy-paste buffer and searches Google Scholar for it; so the net effort looks like 'highlight "Impacts on the Earth by asteroids and comets: Assessing the hazard", hit mod-Y, right-click on PDF link, copy, paste'.)

Replies from: Lumifer
comment by Lumifer · 2013-08-20T00:35:12.143Z · LW(p) · GW(p)

Thanks.

Interesting shortcuts you have :-)

comment by JoshuaZ · 2013-08-19T21:18:53.005Z · LW(p) · GW(p)

Background frequency over a few hundred years or more, yes. Is anyone asserting that we should be planning out now exactly how many resources to put into this for any time beyond the next fifty years or so? And if not, how is that relevant?

Replies from: Lumifer
comment by Lumifer · 2013-08-19T21:28:18.314Z · LW(p) · GW(p)

Background frequency over a few hundred years or more, yes.

Huh? Are you saying that your current impact risk estimates for the next, say, 50 years are significantly lower than the background?

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-19T21:31:20.871Z · LW(p) · GW(p)

Are you saying that your current impact risk estimates for the next, say, 50 years are significantly lower than the background?

Yes. And we can conclude that because we have detailed understanding of the orbits of the big near earths and can predict their orbits out reliably for a few decades.

Replies from: Lumifer
comment by Lumifer · 2013-08-20T00:26:48.574Z · LW(p) · GW(p)

That's not enough to conclude that.

You need the assumption that (geologically) recent impacts that the background frequency reflects came from near-earth asteroids.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-20T00:48:12.774Z · LW(p) · GW(p)

You need the assumption that (geologically) recent impacts that the background frequency reflects came from near-earth asteroids.

Yes, but we're pretty confident about this also. It is very difficult for an asteroid out past 1.4 AU to end up changing orbit enough to run into Earth. It requires a large force. It can happen if it gets just the right luck with collisions or with the right gravitational pull (such as if it happens to pass just right near Mars at the right time in its orbit). But these events are rare, and moreover, we can model their likelihood pretty well. Stochastic aspects of orbital dynamics are decently approximable by a variety of methods (such as Monte Carlo).

Replies from: Lumifer
comment by Lumifer · 2013-08-20T01:11:41.164Z · LW(p) · GW(p)

Yes, but we're pretty confident about this also.

I don't know about that. First, there are comets. Second, large forces are not uncommon with collisions in space. More to the point, any collision (or maybe even a close pass) could change the trajectory of an already-catalogued asteroid to something different and possibly dangerous.

The Chapman & Morrison paper points out that "Because of stochastic variability in the process of asteroid and comet break-up, there is a chance for significant temporal variation in the impact flux".

I really don't think we understand the movement of various objects in the Solar System well enough to declare "problem solved".

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-20T02:04:17.887Z · LW(p) · GW(p)

First, there are comets.

Comets != asteroids. Comets have much harder-to-predict orbits (outgassing and changes in mass both make them much trickier to predict). There's some positive side here in that in order for them to be remotely close to us they generally need to be pretty visible (there's some worry about comet remnants that are in weird orbits but are no longer highly visible when they are in the inner solar system area).

More to the point, any collision (or maybe even a close pass) could change the trajectory of an already-catalogued asteroid to something different and possibly dangerous.

Yes, and this is a problem, and I mentioned this explicitly in my last comment. The issue is how frequent such events are.

The Chapman & Morrison paper points out that "Because of stochastic variability in the process of asteroid and comet break-up, there is a chance for significant temporal variation in the impact flux".

Yes. And what's your point? No one is saying anything otherwise.

I really don't think we understand the movement of various objects in the Solar System well enough to declare "problem solved".

The argument isn't that the problem is solved. The issue is that the chance of an issue in the short-term is much lower than we would have thought 10 or 20 years ago. That's not the same as problem solved: the problem won't be completely solved until we've got much better tracking (I'd prefer radio beacons on every object greater than 1 km) and have a system that can deal with sudden threats. But that's not the issue at hand.

Replies from: Lumifer
comment by Lumifer · 2013-08-20T03:44:10.114Z · LW(p) · GW(p)

Comets != asteroids

In the grand...grandparent I explicitly defined asteroids as anything large enough and fast enough to make a noticeable impact on Earth precisely to avoid terminology issues like this.

The argument isn't that the problem is solved.

That's how the whole thing started. If you go to the origin of this long sub-thread you'll see CarlShulman saying "That's mostly solved, all the dinosaur-killers have been tracked" and me replying "I don't think so".

The issue is that the chance of an issue in the short-term is much lower than we would have thought 10 or 20 years ago.

Yep -- that's what I mean by having a wrong estimate and then correcting it.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-08-20T03:48:48.596Z · LW(p) · GW(p)

In the grand...grandparent I explicitly defined asteroids as anything large enough and fast enough to make a noticeable impact on Earth precisely to avoid terminology issues like this.

Ah, I see. Yes, in that case, using that broad class of objects, we then have a much poorer understanding of comets. The same basic argument goes through (because comets are not nearly as common as what are normally called asteroids), but not by as much.

If you go to the origin of this long sub-thread you'll see CarlShulman saying "That's mostly solved, all the dinosaur-killers have been tracked" and me replying "I don't think so".

Yeah. I think Carl's wording here is important. "Mostly solved" is different from "solved". In this sort of context problems are very rarely solved completely, but more solved in the sense of "we've put a lot of effort into this, the most efficient thing to do is to put our next bit of resources into many other existential risks".

The issue is that the chance of an issue in the short-term is much lower than we would have thought 10 or 20 years ago.

Yep -- that's what I mean by having a wrong estimate and then correcting it.

I'm confused here. What exactly are you saying?

Replies from: Lumifer
comment by Lumifer · 2013-08-20T04:08:48.793Z · LW(p) · GW(p)

With respect to the wrong estimate -- there is the "background frequency", right? Tracking a bunch of near-earth asteroids does not lower it significantly (I am not sure, we may disagree on that). So if 10-20 years ago we thought that the threat of an asteroid impact was much higher, I think I'd call it a wrong estimate.

comment by timtyler · 2013-08-21T23:52:06.016Z · LW(p) · GW(p)

Saying that "we thought X was dangerous, we looked at it closely and it turns out X isn't dangerous at all" has the same meaning as "we mis-estimated the danger from X and then corrected the estimate".

Risks and dangers here are perceived risks and dangers. In that context, such talk makes sense - obviously perceived risks depend on your current state of knowledge. Maybe god knows whether the bad thing will happen or not - but without a hotline to Him, perceived risks and dangers will remain the best we have.

comment by Peter Wildeford (peter_hurford) · 2013-08-18T07:30:37.286Z · LW(p) · GW(p)

I think that I will one day lose the ability to change my mind. I will become dull, stubborn and conservative, and keep publishing rephrasings of my same old views, as do many philosophers and academics.

I'm concerned about this too.

~

Now that you have increased your valuation of Givewell, you should do the same for the Future of Humanity Institute and Center for Study of Existential Risk, right? If you still do not value FHI and CSER, perhaps you still want to improve the future with interventions like AMF’s that have ‘proven’ near-future benefits.

I think this is less clear. I know a moderate amount about what GiveWell is doing and how they intend to achieve and demonstrate progress in their domain. I think this is currently not true for FHI and CSER, but I could be convinced otherwise.

~

But what about scanning for asteroids? This has a pretty straightforward case for its benefits, but is not ‘proven’, and is never going to be supported by an RCT or cohort study.

I agree with the thoughts expressed by Carl Shulman and would have said the same thing. However, I do think it's possible there are scalable existential risk interventions that are reasonably understood and on which progress can be confidently made. I just don't yet know what they are.

For the record, my position is not "I need an RCT or I won't trust it".

comment by Crux · 2013-08-18T09:09:38.285Z · LW(p) · GW(p)

Certainly it seems to be the case that people have a much harder time changing their minds once they get past their 40s or 50s, but at the same time I have met a few people closing in on their 70s who have made radical changes to their lives and their thinking. It's not impossible; just way more difficult.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-18T03:41:36.729Z · LW(p) · GW(p)

Upvoted for changing one's mind in public.

comment by wiserd · 2013-08-16T19:20:55.302Z · LW(p) · GW(p)

"We should put more trust in larger scale organizations who are doing exploring, like GiveWell, and pool our resources."

I wonder about the notion of scalability, because with any information provider there's the possible issue of corruption. Is it better to have a large organization which is easy to track? (Goodwill and the Red Cross seem to give their CEOs pretty high salaries, considering that they're charities.) Or is it better to have a system which is more robust to corruption and tunnel vision due to multiple redundancies? Do larger organizations attract a certain type of social climber? I have no doubt that there are economies of scale. Part of the reason I'm asking these questions is that corruption seems to be a universal human stumbling block, and one that is often inadequately addressed. We project generously and assume that the heads of organizations are "basically like us" and altruistic in their intent. Neither is probably true, as the upper ranks of large organizations tend to collect those who are particularly attracted to climbing the ranks.

comment by cousin_it · 2013-08-16T15:54:56.176Z · LW(p) · GW(p)

Interesting! What's the most cost-effective way to become more certain about the impact of something like MIRI?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2013-08-16T18:44:34.362Z · LW(p) · GW(p)

I don't know nearly enough about MIRI or AI to say. My initial guess would be to get more external review from experts, but I imagine there are reasons why that hasn't happened much yet (though see Luke's and XiXiDu's interview series).

Replies from: lukeprog
comment by lukeprog · 2013-08-17T03:32:25.233Z · LW(p) · GW(p)

Yes, I think external review by AI experts would be good, though we'd need to find people who show the ability to change their mind in response to evidence, and people who are willing to actually listen to and work through the arguments with us. I say more about this here.

Replies from: bentarm
comment by bentarm · 2013-08-17T07:01:01.311Z · LW(p) · GW(p)

I imagine this isn't your intention, but this does read a lot like "I think external review by AI experts would be good, but if we do that review and don't like the results, it's because we picked the wrong AI experts."

Replies from: lukeprog
comment by lukeprog · 2013-08-18T00:07:25.241Z · LW(p) · GW(p)

Here are two things we want from an external review:

  1. MIRI wants to learn more about whether its basic assumptions and their apparent strategic implications are sound.
  2. Outsiders of various kinds want to learn more about whether MIRI is a reasonable thing to support and emulate.

These two groups (MIRI & outsiders) have wildly different information about risks from AI, and are thus in different positions with respect to how they should respond to external reviews of MIRI's core assumptions.

To illustrate the point, consider two actual events of direct or indirect "external review" of MIRI's core assumptions.

#1: Someone as intelligent and broadly informed as Russ Roberts was apparently relieved of some of his worry about AI risk upon hearing Kevin Kelly's reasoning on the matter. Given my state of information, I can immediately see that Kelly's point about the Church-Turing thesis isn't relevant to whether we should worry about AI (Ctrl+F on this page for "The basis of my non-worry"). But maybe Russ Roberts isn't in a position to do so, because he knows so little about AI and theory of computation that he can't tell whether Kelly's stated reasons for complacency are sound or not. So the result of this indirect "external review" of MIRI's assumptions is that Roberts has to say to himself "Well, I don't know what to think. Robin Hanson says we should worry but Kevin Kelly says we shouldn't, and they both have fairly good normal credentials on the issue of long-term tech forecasting relative to almost everyone." And I, meanwhile, can see that I haven't been presented with much reason to re-evaluate my core assumptions about AI risk.

#2: In mid-2012, Paul Christiano, who at the time had just finished his undergraduate degree, gave (w/ help from Carl Shulman) such a detailed, informed, and reasonable critique of Eliezer's standard case for high confidence in hard takeoff that both Eliezer and I have since mentally re-tagged "AI takeoff speed" as an "open question" rather than a "moderately closed question," prompting Eliezer to write up intelligence explosion microeconomics as his "Friendly AI Open Problem #1." Eliezer and I were in a position to make a fairly significant update from Paul's argument, but unfortunately Paul has far fewer normal credentials as an "expert worth listening to" than Kevin Kelly or Robin Hanson do, so outsiders probably aren't in a position to be moved much either way by Paul stating his opinion and his reasons on the issue, due to (fairly appropriate) epistemic learned helplessness.

Also note that if 10 of the world's top AI experts spent two weeks with MIRI and FHI trying to understand our arguments, and their conclusion was that AI wasn't much of a risk, and their primary reasons were (1) a denial of causal functionalism and (2) the Chinese room argument, and we had tried hard to elicit other objections, then MIRI and FHI should update in favor of more confidence that we're on the right track. (I don't expect this to actually happen; this is just another illustration of the situation we're in with regard to external review.)

Related: Contrarian Excuses.

ETA: Also, I should mention that I don't gain any confidence in MIRI's core assumptions from the fact that people like Kevin Kelly or these people give bad arguments against the plausibility of intelligence explosion. As far as I can tell, these people aren't engaging the arguments in much detail, and are mostly just throwing up whatever rejections first occur to them. I would expect them to do that whether MIRI's core assumptions were correct or not. That's why I stipulated that to gain confidence in MIRI's core assumptions, we'd need to get a bunch of smart, reasonable people to investigate the arguments in detail, and try hard to extract good objections from them, and learn that they can't come up with good objections even under those circumstances.

Replies from: CarlShulman, lukeprog, private_messaging, ESRogs, peter_hurford
comment by CarlShulman · 2013-08-18T02:35:42.106Z · LW(p) · GW(p)

Also note that if 10 of the world's top AI experts spent two weeks with MIRI and FHI trying to understand our arguments, and their conclusion was that AI wasn't much of a risk, and their primary reasons were (1) a denial of causal functionalism and (2) the Chinese room argument, and we had tried hard to elicit other objections, then MIRI and FHI should update in favor of more confidence that we're on the right track.

Based on what we already know this would require a very unrepresentative sample, and cause wider revisions. And if they published such obviously unconvincing reasons it would lead to similar updates in many casual observers.

And so what we are going to do is, there is really almost no reason to make human-like intelligence because we can do it so easily in 9 months. Untrained workforce.

Yes, this argument is remarkably unconvincing. Human labor is still costly, limited in supply (it's not 9 months, it's 20+ years, with feeding, energy costs, unreliable quality and many other restrictions), and so forth.

Replies from: private_messaging
comment by private_messaging · 2013-08-27T22:41:53.297Z · LW(p) · GW(p)

Based on what we already know this would require a very unrepresentative sample.

There's too much focus on confirmation - e.g. if it is false, there must be some update in the opposite direction, but in practice one would just say that "those top 10 AI experts took us seriously and engaged our arguments, which boosts our confidence that we are on the right track".

comment by lukeprog · 2013-08-28T23:01:34.479Z · LW(p) · GW(p)

I'll also mention that GiveWell — clearly not a den of MIRI-sympathizers — is effectively doing an external review of some of MIRI's core assumptions, by way of "shallow investigations" of specific catastrophic risks (among other interventions).

So far, they've reviewed climate change, asteroids, supervolcanoes, and nuclear war. None of these reviews cite the corresponding chapters in GCR, perhaps because GiveWell wants to do its own review of these issues that is mostly independent from the Bostromian school (which includes MIRI).

So far, GiveWell seems to have come to the same conclusions as the Bostromian school has about these specific risks, with the major caveat that these are shallow investigations that are "not researched and vetted to the same level as [Givewell's] standard recommendations."

I'm pretty excited about GiveWell investigating GCRs independently of the Bostromian school, since (1) I admire the quality and self-skepticism of GiveWell's research so far, (2) I think GCRs are important to study, and (3) this will provide a pretty solid external review of some of MIRI's core assumptions about the severity of various x-risks.

comment by private_messaging · 2013-08-27T12:59:07.380Z · LW(p) · GW(p)

and their primary reasons were (1) a denial of causal functionalism and (2) the Chinese room argument, and we had tried hard to elicit other objections, then MIRI and FHI should update in favor of more confidence that we're on the right track.

Unpack "top 10 AI experts" as "who we think the top 10 AI experts are" and unpack "their primary reasons were" with "we think their primary reasons were", and this updating will sound a lot more silly, especially if further conditioned by "we do not have a way to show our superiority in a more objective manner".

comment by ESRogs · 2013-09-06T19:51:10.320Z · LW(p) · GW(p)

Is Paul Christiano's hard takeoff critique publicly available?

Replies from: lukeprog
comment by lukeprog · 2013-09-06T19:54:53.834Z · LW(p) · GW(p)

No. It was given over a series of in-person conversations and whiteboard calculations, with only scattered notes taken throughout. Paul does describe his own "mainline AI scenario" here, though.

Replies from: ESRogs
comment by ESRogs · 2013-09-06T21:05:43.471Z · LW(p) · GW(p)

Ah, thanks.

comment by Peter Wildeford (peter_hurford) · 2013-08-18T07:27:08.450Z · LW(p) · GW(p)

MIRI wants to learn more about whether its basic assumptions and their apparent strategic implications are sound.

I still think it would be valuable to hear what relevant, independent AI experts think about these basic assumptions and strategic implications, perhaps accompanied by a detailed theory as to why they've come to wrong answers and MIRI has more advanced insight.

Replies from: lukeprog
comment by lukeprog · 2013-08-28T22:49:46.486Z · LW(p) · GW(p)

Well, we can do this for lots of specific cases. E.g. last time I spoke to Peter Norvig, he said his reason for not thinking much about AI risk at this point (despite including a discussion of it in his AI textbook) was that he's fairly confident AI is hundreds of years away. Unfortunately, I didn't have time to walk him through the points of When Will AI Be Created? to see exactly why we disagreed on this point.

This will all be a lot easier when Bostrom's Superintelligence book comes out next year, so that experts can reply to the basic theses of our view when they are organized neatly in one place and explained in some detail with proper references and so on.

comment by John_Maxwell (John_Maxwell_IV) · 2015-06-11T15:09:01.719Z · LW(p) · GW(p)

We should put more trust in larger scale organizations who are doing exploring, like GiveWell, and pool our resources.

I think at a certain point it's better to have several medium-sized organizations rather than a single large one. When my dad was thinking about graduate school, his professors encouraged him to go to a different university than the one he was attending in order to get a different perspective. I think the ideal case for the EA movement is to have multiple charity evaluators that share information with one another but operate independently, in the same way universities share information with one another and collaborate on projects while still operating independently. Also, it seems like organizations become more likely to degrade as they grow.