Why don't people use formal methods?

post by John_Maxwell (John_Maxwell_IV) · 2019-01-22T09:39:46.721Z · LW · GW · 5 comments

This is a link post for https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

Saw this on Hacker News; there's Hacker News discussion here.

Formal methods seem very relevant to AI safety, and I haven't seen much discussion of them on Less Wrong.

5 comments

comment by Rohin Shah (rohinmshah) · 2019-01-22T21:45:49.931Z · LW(p) · GW(p)

The quick, lacking-all-nuance answer: one of the main things we're worried about with AI safety is that we can't write down a specification of what we want an AI to optimize. Formal methods typically require such a specification.

(Though note that "Techniques for optimizing worst-case performance" includes verification as a potential avenue of attack.)
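To make the role of the specification concrete, here is a minimal sketch (illustrative only, not from the article or this thread) using the Z3 SMT solver's Python bindings. Verifying a toy absolute-value function is tractable precisely because its spec fits in one line; nobody knows how to write the analogous spec for "what we want an AI to optimize".

```python
# Minimal sketch of spec-driven verification with the Z3 SMT solver's
# Python bindings (pip install z3-solver). Illustrative example only.
from z3 import Int, If, And, Or, Not, Solver, sat

x = Int("x")

# "Implementation": absolute value, expressed as a Z3 term.
abs_x = If(x >= 0, x, -x)

# Specification: the result is non-negative and has the same magnitude as x.
spec = And(abs_x >= 0, Or(abs_x == x, abs_x == -x))

# Ask the solver for any x that violates the spec; "unsat" (no such x)
# means the implementation provably meets the specification.
solver = Solver()
solver.add(Not(spec))
print("verified" if solver.check() != sat else f"bug: {solver.model()}")
```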

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-01-25T09:21:44.585Z · LW(p) · GW(p)

FYI, the post is titled "Why don't people use formal methods?" because that's what the link is called, not because I was wondering why people in AI safety don't use formal methods :)

I don't think formal methods are likely to be a magic bullet for AI safety. However, if one wants a provably safe AI, developing the capability to write provably correct software in easier cases seems like a good first step (a necessary but not sufficient condition, if you will), and the article discusses our progress towards that goal. The article also discusses the specification problem a bit, and it seems plausible that insights into how to write formal specifications for software in general will transfer to FAI. The field also seems neglected relative to something like machine learning: "There really isn’t a formal methods community so much as a few tiny bands foraging in the Steppe".

Finally, my own personal judgement: the "writing down a specification of what we want an AI to optimize" part seems more amenable to being solved by a sudden flash of insight (in the same way Nick Bostrom has suggested the problem of AGI more generally might be), whereas the problem of formally verifying software seems less like that; if it ends up being a long slog, it might actually constitute the majority of the problem. For those reasons, it seemed to me that LW might benefit from additional formal verification awareness/discussion.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-01-25T17:35:53.755Z · LW(p) · GW(p)

FYI, the post is titled "Why don't people use formal methods?" because that's what the link is called, not because I was wondering why people in AI safety don't use formal methods :)

Got it, my mistake.

"There really isn’t a formal methods community so much as a few tiny bands foraging in the Steppe"

I've got to disagree with that; I used to be very closely adjacent to the formal methods community, if not a part of it myself. It's much bigger than e.g. the academic AI safety community. Sure, it's not as big as ML, but almost nothing is as big as ML.

The rest of the reasons seem reasonable. I personally disagree that the capability to write provably correct software in easier cases is a necessary first step, but I don't think I have particularly strong explicit reasons for that position.

ETA: Also, I do know a few AI safety people explicitly excited about formal methods and verification who will likely be working on them in the near future, if not already.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-01-26T07:58:55.030Z · LW(p) · GW(p)

Thanks for the info!

comment by Gordon Seidoh Worley (gworley) · 2019-01-22T18:37:35.877Z · LW(p) · GW(p)

I think some people do, or at least try to, but my impression of the state of computer-assisted proofs and formal verification of programs is that they are still not very good, because the problem is incredibly complex and we've basically only made it to the level of having FORTRAN-level tools. That is, we're a bit better off than when formal verification had only assembly-level tools and you had to specify absolutely everything in very low-level terms, but mostly in ways that make that low-level work easier, rather than giving us useful abstractions that let us verify programs without having to understand the details of (almost) everything all the time.

To continue the analogy, things will get a little more exciting as we get C- and then C++-level tools, but I don't think things will really explode and appeal to folks who don't desperately need formal verification until we reach something like the Python/Ruby level of tooling.
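To give the analogy a concrete flavor, here is a hedged sketch (illustrative, using Z3's Python bindings rather than any tool named in the comment) of what "specify everything in low-level terms" looks like today: to check a classic branchless max trick, we have to translate the program into bit-level logic by hand; the tool checks the logic but provides no abstractions over the program itself.

```python
# Checking a branchless 32-bit max trick with Z3's Python bindings.
# Note how the program's machine-level semantics must be spelled out by hand.
from z3 import BitVec, BitVecVal, If, Not, Solver, sat

x, y = BitVec("x", 32), BitVec("y", 32)

# Branchless trick: mask is all-ones when x > y (signed), else all-zeros,
# so y ^ ((x ^ y) & mask) selects x or y without a branch.
mask = If(x > y, BitVecVal(-1, 32), BitVecVal(0, 32))
branchless_max = y ^ ((x ^ y) & mask)

# Specification: it agrees with the obvious definition of max.
spec = branchless_max == If(x > y, x, y)

solver = Solver()
solver.add(Not(spec))  # look for a counterexample among all 2**64 input pairs
print("verified" if solver.check() != sat else f"bug: {solver.model()}")
```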

This does suggest something interesting, though: if someone thinks wider use of formal verification is important, especially in AI, then a straightforward approach is to work on improving formal verification tools to the point where they provide the abstractions that help people work with them.