Introduction to Local Interpretable Model-Agnostic Explanations (LIME)
post by Gunnar_Zarncke
score: 4 (5 votes)
This is a link post for https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
Comments sorted by top scores.
comment by Lumifer
· score: 1 (1 vote) · LW
It's an interesting idea, though I wouldn't call it "model-agnostic".
Basically, they're jittering the inputs and figuring out which ones can't be changed without the prediction (classification) changing as well. In effect, they're answering the question: "given this model, which input values are essential to producing this particular output?"
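The perturb-and-refit idea can be sketched in a few lines. This is an illustrative toy, not the actual LIME library: the `black_box` model, the Gaussian perturbations, and the exponential proximity kernel are all assumptions chosen for simplicity. The steps are the ones described above: jitter the input, query the model, and fit a locally weighted linear model whose coefficients say which inputs matter near that point.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for any opaque model: a hypothetical nonlinear classifier.
    return (X[:, 0] * 2.0 + np.sin(X[:, 1]) > 1.0).astype(float)

def lime_explain(x, predict, n_samples=5000, kernel_width=0.75):
    """Perturb x, query the model, fit a locally weighted linear surrogate."""
    d = len(x)
    # 1. Jitter the input: sample perturbations around x.
    Z = x + rng.normal(scale=1.0, size=(n_samples, d))
    y = predict(Z)
    # 2. Weight each sample by its proximity to x (exponential kernel).
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. Weighted least squares: which features drive the local prediction?
    A = np.hstack([Z, np.ones((n_samples, 1))])   # append intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                               # per-feature importance

x0 = np.array([0.6, 0.1])
importances = lime_explain(x0, black_box)
print(importances)
```

Near `x0`, the surrogate's coefficient for feature 0 dominates, matching the intuition that it is the input the model is most sensitive to at that point; the explanation is local, which is why the commenter questions how "model-agnostic" it is versus simply model-probing.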