Strategic research on AI risk

post by lukeprog · 2012-06-06T17:02:54.980Z · LW · GW · Legacy · 24 comments

Series: How to Purchase AI Risk Reduction

Norman Rasmussen's analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident in ways that previous experts had not (see McGrayne 2011, p. 180). Had Rasmussen's analysis been heeded, the Three Mile Island incident might not have occurred.

This is the kind of strategic analysis, risk analysis, and technological forecasting that could help us to pivot the world in important ways.

Our AI risk situation is very complicated. There are many uncertainties about the future, and many interacting strategic variables. Though it is often hard to see whether a strategic analysis will pay off, the alternative is to act blindly.

Here are some examples of strategic research that may help (or have already helped) to inform our attempts to shape the future:

Here are some additional projects of strategic research that could help inform x-risk decisions, if funding were available to perform them:

I'll note that for as long as FHI is working on AI risk, FHI probably has an advantage over SI in producing actionable strategic research, given past successes like the WBE roadmap and the GCR volume. But SI is also performing actionable strategic research, as described above.

24 comments

Comments sorted by top scores.

comment by John_Maxwell (John_Maxwell_IV) · 2012-06-07T03:48:38.153Z · LW(p) · GW(p)

Why no love for this project?

http://www.theuncertainfuture.com/

My perception as an outsider is that SI put a fair amount of manpower into it, finished it, submitted it to Hacker News, and then folks largely forgot about it. Is it even linked from the SI website?

Replies from: lukeprog, thomblake
comment by lukeprog · 2012-06-09T21:55:37.077Z · LW(p) · GW(p)

I do have it on my to-do list to consider the possibility of mining the work in that project for a paper about predicting AI.

comment by thomblake · 2012-06-07T15:44:37.370Z · LW(p) · GW(p)

I see "Java" and close the browser tab.

comment by JGWeissman · 2012-06-06T17:40:45.509Z · LW(p) · GW(p)

Norman Rasmussen's analysis of the safety of nuclear power plants, written before any nuclear accidents had occurred, correctly predicted several details of the Three Mile Island incident in ways that previous experts had not (see McGrayne 2011, p. 180).

Is there any way that a policy maker could have known in advance to pay attention to Rasmussen rather than other experts? Is this a case of retroactively selecting the predictor who happened to be right out of a large group of varied, but roughly equally justified, predictors, or did Rasmussen use systematically better methods for making his predictions?

Replies from: John_Maxwell_IV, lukeprog
comment by John_Maxwell (John_Maxwell_IV) · 2012-06-07T03:45:25.676Z · LW(p) · GW(p)

It's worth noting that stories of catastrophes that were successfully averted because someone listened to an expert may be hard to find.

Replies from: JGWeissman
comment by JGWeissman · 2012-06-07T04:31:26.073Z · LW(p) · GW(p)

If an expert tells you to add a safety mechanism, and you end up using that mechanism, you know that the expert helped you.

Replies from: ciphergoth, thomblake
comment by Paul Crowley (ciphergoth) · 2012-06-07T07:14:30.201Z · LW(p) · GW(p)

Right, but the story won't be written up, or will be harder to find.

comment by thomblake · 2012-06-07T15:42:29.301Z · LW(p) · GW(p)

Or the expert caused you to waste money on a needless safety mechanism.

Replies from: JGWeissman
comment by JGWeissman · 2012-06-07T15:53:57.084Z · LW(p) · GW(p)

I mean a safety mechanism like a button that shuts down the assembly line. If someone gets caught in the machinery and you push the button to prevent them from getting (more) hurt, you will be happy the expert told you to install that button.

Replies from: thomblake
comment by thomblake · 2012-06-07T15:59:22.640Z · LW(p) · GW(p)

Aha. I was reading "use" as "install", not "activate during emergency". I agree.

comment by lukeprog · 2012-06-06T18:27:51.866Z · LW(p) · GW(p)

Is there any way that a policy maker could have known in advance to pay attention to Rasmussen rather than other experts?

Yes. Rasmussen used Bayes, while everyone else used the methods of (1) Frequentism or (2) Experts Must Have Great Intuitions.

Replies from: JGWeissman, shminux
comment by JGWeissman · 2012-06-06T19:09:08.885Z · LW(p) · GW(p)

All else being equal, I would put more trust in a report that uses Bayesian statistics than in one that uses Frequentist statistics, but I wouldn't expect that strong an effect from that alone. (I would expect a strong increase in accuracy from using any kind of statistics over intuition.)

Following your link, I notice that Rasmussen's report used a fault tree. I would expect that the consideration of failure modes of each component of a nuclear reactor played a huge role in his accuracy, and that Bayesian and Frequentist statistics would largely agree how to get individual failure rates from historical data and how to synthesize this information into a failure rate for the whole reactor. Assuming the other experts did not also use fault trees, I would credit the fault trees more than Bayes for Rasmussen's success. (And if they did, I would wonder where they went wrong.)
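The fault-tree arithmetic described above can be sketched in a few lines. This is a toy illustration with made-up component failure probabilities (not Rasmussen's actual numbers), assuming independent failures: AND gates multiply probabilities, OR gates combine them via the complement.

```python
# Toy fault-tree sketch. All failure probabilities are hypothetical
# illustrations, and component failures are assumed independent.

from math import prod

def and_gate(probs):
    # An AND gate fails only if ALL of its inputs fail.
    return prod(probs)

def or_gate(probs):
    # An OR gate fails if AT LEAST ONE input fails: 1 - P(none fail).
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event "coolant loss": occurs if the primary pump
# AND its backup both fail, OR if a pipe ruptures.
pump_subsystem = and_gate([1e-3, 1e-2])        # primary AND backup pump
top_event = or_gate([pump_subsystem, 1e-5])    # ...OR pipe rupture

print(f"P(top event) ≈ {top_event:.2e}")
```

The point of the structure, rather than the statistics, is that the whole-system failure rate is synthesized mechanically from per-component historical data, which is exactly the step intuition-based expert judgment tends to skip.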

comment by shminux · 2012-06-06T19:09:57.216Z · LW(p) · GW(p)

This is not a convincing argument to a policy maker.

Replies from: lukeprog
comment by lukeprog · 2012-06-06T19:13:47.745Z · LW(p) · GW(p)

Definitely not!

comment by Giles · 2012-06-08T20:27:37.810Z · LW(p) · GW(p)

A model of AI risk currently being developed in MATLAB by Anna Salamon and others

I hadn't heard about this one. Is there a list somewhere of all the little projects that the SI is working on? (Or should there be?) Posts like this one (and the monthly status reports) are very useful, but since each post only lists some of the things going on, I'm worried that I'll miss something interesting, or that there's something I thought the SI was working on which had been quietly dropped.

comment by shminux · 2012-06-06T19:12:13.782Z · LW(p) · GW(p)

Nick Bostrom's forthcoming book on machine superintelligence.

A model of AI risk currently being developed in MATLAB by Anna Salamon and others.

Referencing future publications/results detracts from one's credibility.

Replies from: Kaj_Sotala, JGWeissman
comment by Kaj_Sotala · 2012-06-06T19:49:08.510Z · LW(p) · GW(p)

I found it more weird that he specifically mentioned that the model was being developed in MATLAB, but didn't mention any other details. To me, that sounded a little like saying "Anna and the others are writing a paper about AI risk on an Asus Eee and Google Docs".

Replies from: JGWeissman, reup, Vladimir_Nesov
comment by JGWeissman · 2012-06-06T19:56:50.198Z · LW(p) · GW(p)

What I took away from the mention of MATLAB is that the model is expressed as a computer program, as opposed to just talked about, and that this requires a certain level of rigor. But yeah, I don't care so much that it is MATLAB rather than Java.

Replies from: komponisto
comment by komponisto · 2012-06-06T20:38:11.507Z · LW(p) · GW(p)

I didn't realize that MATLAB and Java were members of the same category. I thought that MATLAB was a software program (like Microsoft Word), while Java was a programming language (like C++).

Replies from: asr, JGWeissman
comment by asr · 2012-06-06T21:01:58.872Z · LW(p) · GW(p)

Matlab is a program, but it's more like the Java virtual machine than like MS Word. Both the JVM and Matlab are able to execute arbitrary programs supplied as input. So it's meaningful to talk about Matlab-the-language. The one difference is that Java is a standardized language, whereas Matlab-the-language is "whatever Matlab-the-program accepts these days".

comment by JGWeissman · 2012-06-06T20:58:18.142Z · LW(p) · GW(p)

MATLAB is a programming language, and I have written programs in it (though it has been a while). I would definitely prefer Java (or C#) for writing most general applications, but MATLAB is good for some math-heavy stuff.

comment by reup · 2012-06-07T21:18:01.317Z · LW(p) · GW(p)

This. It comes off as amateurish, not knowing which details are important to include. But hopefully these semi-informal discussions help with refining the pitch and presentation before they're standing in front of potential donors.

comment by Vladimir_Nesov · 2012-06-06T19:58:02.090Z · LW(p) · GW(p)

Yup. Many cranks say things like "I programmed a human-level AI in Fortran!", so it's a bad pattern match to enable.

comment by JGWeissman · 2012-06-06T19:29:19.456Z · LW(p) · GW(p)

Citing future publications in an academic article is a bad sign, but here Luke is telling us about SIAI's strategy, and how what they (and the related organisation FHI) have done, are currently doing, and are planning to do fits into that strategy. Discussing current research projects in this case is good.

And Luke has not cited any future results. That would be crazy.