[CLOSED] Which model explainability tool to integrate with Q1 2022 #235
Closed. htahir1 started this conversation in Show and tell.
Replies: 1 comment · 1 reply
-
I wouldn't go with LIME. The technique is nice from a theoretical perspective, and it was one of the early methods that kicked off the XAI movement. Unfortunately, it has a number of issues (see e.g. https://github.com/dylan-slack/Fooling-LIME-SHAP) that make it difficult to trust its results.
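For context, the core idea behind LIME is simple enough to sketch in a few lines: perturb the instance being explained, query the black-box model on the neighbours, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. This is a toy illustration of that idea (the black-box model and kernel width are made up for the example), not the `lime` library's actual API:

```python
# Toy sketch of LIME's core idea: explain one prediction of a
# black-box model by fitting a weighted linear surrogate on
# perturbed neighbours of the input.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical model: depends strongly on feature 0, weakly on feature 1.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

def lime_explain(x, predict, n_samples=500, width=1.0):
    # 1. Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Weight each neighbour by proximity to x (RBF kernel).
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / width**2)
    # 3. Fit a weighted least-squares linear surrogate locally.
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, predict(Z) * W[:, 0], rcond=None)
    return coef[:-1]  # per-feature local importances (intercept dropped)

x = np.array([1.0, 2.0])
importances = lime_explain(x, black_box)
print(importances)
```

The fragility the linked repo demonstrates lives mostly in steps 1 and 2: because the explanation depends on how neighbours are sampled and weighted, an adversarial model can behave one way on real data and another way on the off-manifold perturbations LIME queries.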
-
[VOTE] Which model explainability tool to integrate with Q1 2022
Which of the following commonly-used model explainability tools would you like us to support?
Alibi > Vote with 👍
Dalex > Vote with 😀
ELI5 > Vote with 🎉
InterpretML > Vote with ❤️
Lime > Vote with 🚀
What-If Tool > Vote with 👀
If you have another suggestion, leave a comment!