Introduction to Local Interpretable Model-Agnostic Explanations (LIME)

post by Gunnar_Zarncke · 2017-02-09T08:29:40.668Z · score: 4 (5 votes) · LW · GW · Legacy · 2 comments

This is a link post for


Comments sorted by top scores.

comment by Lumifer · 2017-02-09T15:57:47.027Z · score: 1 (1 votes) · LW · GW

It's an interesting idea, though I wouldn't call it "model-agnostic".

Basically, they're perturbing the inputs and figuring out which ones you can't change without the prediction (classification) changing as well. In effect they are answering the question "given this model, which input values are essential to producing this particular output?"
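That perturb-and-refit idea can be sketched in a few lines. The following is a minimal illustration of the LIME recipe, not the official `lime` library: `black_box` is a stand-in classifier invented for the example, and the perturbation scale and proximity kernel are arbitrary choices. It samples points around one instance, queries the black-box model, and fits a proximity-weighted linear surrogate whose coefficients serve as local feature importances.

```python
import numpy as np

# Hypothetical black-box model (an assumption for this sketch): we only
# get to query its predictions, never inspect its internals.
def black_box(X):
    # Classifies by the sign of 2*x0 - 0.5*x1.
    return (2.0 * X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)

def lime_explain(instance, predict_fn, n_samples=1000, kernel_width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around `instance`
    by perturbing it with Gaussian noise (a sketch of the LIME idea)."""
    rng = np.random.default_rng(seed)
    # 1. Jigger the inputs: sample perturbations around the instance.
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Query the black-box model on the perturbed points.
    y = predict_fn(X)
    # 3. Weight each sample by its proximity to the original instance.
    d = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Weighted least squares: fit a local linear surrogate.
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature local weights (drop the intercept)

# Explain a point near the model's decision boundary.
weights = lime_explain(np.array([0.2, 0.5]), black_box)
```

Here `weights` recovers the local direction of the decision boundary: feature 0 pushes the prediction up, feature 1 pushes it down, and feature 0 matters more, matching the black box's internal coefficients (2 and -0.5).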

comment by Gunnar_Zarncke · 2017-02-09T08:30:16.970Z · score: 0 (0 votes) · LW · GW

Another (earlier) blog post about it: