Explainability #22
Hey @tomtung,

Great repository! Your implementation of Parabel significantly outperformed several DNN architectures I tried (on a dataset of 600k samples and 20k labels), while also being much faster in both training and prediction. Thank you for the Python wrapper as well; it made the library much easier and faster to try.

Can you think of any way to approach the challenge of explainability? For example, for each prediction, is there a way to get the most important words in a document, i.e. the words that decided it was classified one way rather than another?

Thanks again!

Comments
Yeah, it should be possible in principle: as the model walks down the trees during prediction, it could keep track of, e.g., how much each feature contributed to the prediction of each top label. I'm still thinking about what kind of "explanation" would be useful and interpretable, though. Do you have any suggestions, e.g. what kind of use case do you have in mind? One natural choice might be, for each top label and each non-zero feature in an example, to calculate the score with that feature set to zero and take the difference from the actual score.
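A minimal sketch of that leave-one-feature-out idea, assuming a `predict` callable that takes a list of `(feature_index, value)` pairs and returns `(label, score)` pairs sorted by score (roughly the shape of the Python wrapper's `Model.predict`); `explain` is a hypothetical helper, not part of the library:

```python
# Hypothetical sketch of the feature-ablation explanation described above.
# `predict` is assumed to map a list of (feature_index, value) pairs to a
# list of (label, score) pairs, highest score first.

def explain(predict, features, top_k=10):
    """Per-label feature attributions via leave-one-feature-out deltas."""
    base = dict(predict(features)[:top_k])  # label -> actual score
    contributions = {label: [] for label in base}
    for i, (feat, _value) in enumerate(features):
        # Re-predict with this single feature removed, i.e. set to zero.
        ablated = features[:i] + features[i + 1:]
        scores = dict(predict(ablated))
        for label, score in base.items():
            # Positive delta: the feature pushed this label's score up.
            # (If the label drops out of the returned list, treat it as 0.)
            contributions[label].append((feat, score - scores.get(label, 0.0)))
    # For each top label, sort features by descending contribution.
    return {
        label: sorted(deltas, key=lambda pair: -pair[1])
        for label, deltas in contributions.items()
    }
```

Each label's list can then be truncated to, say, its 50 highest-ranked words. Note that this costs one extra prediction per non-zero feature; tracking the contributions inside the model during a single pass down the trees would avoid that overhead.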
Yep, that makes a lot of sense. Then, for each of the top labels, one could get a list of words sorted by descending importance, e.g. the 50 most important words. Should I expect you to implement this feature in the near future, or should I do it on my own? I would most likely do it in Python, but I think doing it in Rust (and wrapping it for Python) would be more efficient.
I wonder if LIME would be useful for this? It generates explanations by perturbing the input to a classifier, and in doing so finds the most important features contributing to the result. See also this blog post. Of course, this may not be as efficient as generating the explanations within the model in a single pass.
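For reference, a rough sketch of how LIME could be wired up to a model like this, assuming a fitted scikit-learn `TfidfVectorizer` (`vectorizer`) matching the model's feature space and a hypothetical `model_scores` adapter around the trained classifier; restricting the output to a few labels of interest keeps a 20k-label problem manageable:

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

labels_of_interest = [42, 1337]  # placeholder label ids to explain

def model_scores(sparse_row):
    """Hypothetical adapter: return (label, score) pairs for one example."""
    raise NotImplementedError  # wire this to the trained model

def classifier_fn(texts):
    # LIME expects a function from a list of raw texts to an array of
    # shape (n_texts, n_classes); `vectorizer` is an assumed fitted
    # TfidfVectorizer matching the model's feature space.
    rows = []
    for text in texts:
        scores = dict(model_scores(vectorizer.transform([text])))
        rows.append([scores.get(label, 0.0) for label in labels_of_interest])
    return np.array(rows)

explainer = LimeTextExplainer(class_names=[str(l) for l in labels_of_interest])
explanation = explainer.explain_instance(
    document_text,  # the raw text being explained (assumed given)
    classifier_fn,
    labels=list(range(len(labels_of_interest))),
    num_features=50,  # e.g. the 50 most important words
)
print(explanation.as_list(label=0))  # (word, weight) pairs for the first label
```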
Thanks for the suggestion! @rabitwhte I think LIME should be general enough, although it might be a bit slow depending on your use case. If it would be useful, I can also try to find some time to implement the method mentioned above, which can be done very efficiently.
@tomtung I hope you will find some time then! :)
I think that makes sense as a first approximation, but it may have trouble accounting for the effect of the nonlinearities in the cascade of clustering steps, and for the label correlations that Parabel models so nicely. Still, it is probably the sweet spot in terms of utility versus computational cost. I would love to try this out on my data too.