Call for Ideas: Industrial scale existential risk research

post by whpearson · 2017-11-09T08:30:05.534Z · LW · GW · 3 comments

I have a slight problem with Existential Risk. Not as a thing to worry about; it makes lots of sense to worry about it. My problem will only emerge if it becomes an industry, with all the distortion (from inadequacy and so on) that entails. If we want more than the few odd rationalist academics currently working on it, we need to make sure everyone's incentives are properly aligned when it is scaled up.

A company in the Existential Risk industry has very little reason to be accurate about the risk. The greater they can make the potential risk sound, the more money they can extract from governments and other funders to study it, so they have real incentives to inflate the risks. What can we do to mitigate that problem?

I've had some ideas around trying to create a professional body that aligns the incentives of its members with truth-seeking. Things like giving prizes for the accuracy of models (in the short term), and helping retrain people if a thing society was worried about turns out not to be such a problem, giving them an economic line of retreat.
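
As a sketch of how an accuracy prize might be graded (my own illustration, not something specified in the post), a proper scoring rule such as the Brier score pays out most when forecasters report the probabilities they actually believe, so inflating a risk estimate costs prize money rather than earning it. The function names and numbers below are purely hypothetical:

```python
# A minimal, purely illustrative sketch of grading forecasts with the
# Brier score, a proper scoring rule: expected reward is maximised by
# reporting your true probability, so there is no payoff for inflating risks.

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

def average_brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean Brier score across a forecaster's track record."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: ten yearly predictions, one event actually occurred.
outcomes = [0] * 9 + [1]
honest   = average_brier([0.1] * 10, outcomes)  # 0.09  (calibrated ~10% forecasts)
inflated = average_brier([0.6] * 10, outcomes)  # 0.34  (exaggerated forecasts score worse)
print(honest, inflated)
```

A prize pool could then be split in proportion to how low each member's average score is, which is one concrete way such a body could reward short-term accuracy.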

Then a government can make sure it hires only companies whose staff are members of this body. I'll write this up properly at some point, so it can get a good kicking.

But I was wondering if anyone else has thoughts on this?

3 comments

comment by Vaniver · 2017-11-09T17:54:22.968Z · LW(p) · GW(p)

It seems to me like most of the incentives for supply-side distortion (me over-promoting my pet x-risk) are from personal affiliation and switching costs. Consider someone who spends a decade studying climate change and alternative energy and so on, and then comes across an analysis suggesting this was wasted effort (possibly something like the Copenhagen Consensus report that explicitly considers many different options, or an argument that AI is the most important risk facing humans, and so on). If any company could freely switch from working on x-risk A to working on x-risk B, then it would be better to switch to the most important x-risk than to distort the importance of your own x-risk.

I agree with you that demand-side agencies (funders of x-risk research, the x-risk commentariat, etc.) need to care strongly about seeing clearly, even in the presence of distortion. It's hard to set up the appropriate incentives since most of the tools we use to reduce distortion (like tight feedback loops) are inconsistent with x-risk as a field (we have a very sparse and uninformative feedback channel). I suspect progress here looks like better modes of communication and argumentation, such that distortions are easier to spot and discourage.

comment by JohnGreer · 2017-11-20T21:59:26.228Z · LW(p) · GW(p)

Seems really interesting, but I'm wondering how they can measure the accuracy of low-probability, long-term risks like "someone could release a hacked virus". I look forward to reading your fleshed-out post! The point about aligning incentives reminds me of this: https://www.lesserwrong.com/posts/a7pjErKGYHh7E9he8/the-unfriendly-superintelligence-next-door

comment by scarcegreengrass · 2017-11-09T22:32:48.993Z · LW(p) · GW(p)

Something like a death risk calibration agency? Could be very interesting. Do any orgs like this exist? I guess the CDC (in the US govt) probably quantitatively compares risks within the context of disease.

One quote in your post seems more ambitious than the rest: 'helping retrain people if a thing society was worried about turns out not to be such a problem'. I think that tons of people evaluate risks based on how scary they seem, not based on numerical research.