
Explainability #22

Open
rabitwhte opened this issue Mar 2, 2020 · 6 comments
@rabitwhte commented Mar 2, 2020

Hey @tomtung,

Great repository! Your implementation of parabel has significantly outperformed several DNN architectures that I tried (on a dataset of 600k samples and 20k labels), while being much faster at the same time (in both training and prediction). Thank you for the Python wrapper as well; it made trying the library out easier and faster for me.

Can you think of any way to approach the challenge of explainability? For example, is there a way to get, for each prediction, the most important words in a document, i.e. the ones that determined why it was classified one way rather than another?

Thanks again!

@tomtung (Owner) commented Mar 3, 2020

Yeah, it should be possible in principle: as it walks down the trees during prediction, it should be possible to keep track of, e.g., how much each feature contributed to the prediction of each top label. I'm still thinking about what kind of "explanation" is useful / interpretable though. Do you have any suggestions? E.g., what kind of use case do you have in mind?

I guess one natural choice is, for each top label and each non-zero feature in an example, to calculate the score when this feature is set to zero, and take the difference from the actual score.
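Since the per-node classifiers in Parabel-style trees are linear, that score difference has a closed form: zeroing feature j changes a linear score by exactly -w_j * x_j. A minimal sketch of the idea (all names hypothetical; simplified to a single linear scorer rather than accumulating contributions over every node on the path to a leaf):

```python
import numpy as np

def feature_attributions(weights, x):
    # For a linear scorer (score = weights @ x), zeroing feature j changes
    # the score by -weights[j] * x[j], so feature j's contribution to the
    # actual score is weights[j] * x[j].
    return weights * x

# Toy example: 5 features, a sparse input vector
weights = np.array([0.8, -0.2, 0.0, 1.5, 0.3])
x = np.array([1.0, 0.0, 2.0, 0.5, 0.0])

contrib = feature_attributions(weights, x)
# Rank the example's non-zero features by contribution, descending;
# the top of this list would be the "most important words" for the label.
top = sorted(
    ((j, c) for j, c in enumerate(contrib) if x[j] != 0.0),
    key=lambda jc: -jc[1],
)
```

A full implementation would repeat this per top label and sum each feature's contribution across the linear scorers along the root-to-leaf path, since the final label score combines the scores of every node visited.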

@rabitwhte (Author)

Yep, that makes a lot of sense. Then, for each of the top labels, one could get a list of words sorted by descending importance, e.g. the 50 most important words.

Should I expect you to implement this feature in the near future, or should I do it on my own? I would most likely do it in Python, but I think doing it in Rust (and wrapping it for Python) would be more efficient.

@osma (Contributor) commented Mar 3, 2020

I wonder if LIME would be useful for this? It generates explanations by perturbing the input to a classifier, and by doing that finds the features that contribute most to the result. See also this blog post. Of course, this may not be as efficient as generating the explanations within the model in a single pass.
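For reference, the perturbation idea at the core of LIME can be sketched from scratch in a few lines. This is not the lime package's actual API, and real LIME additionally fits a local linear model to the perturbed samples, which this toy version skips in favor of simply averaging score drops:

```python
import random

def perturbation_importance(predict, tokens, n_samples=200, seed=0):
    """Estimate per-word importance by randomly masking tokens and
    averaging the classifier's score drop over the samples in which
    each word was masked out."""
    rng = random.Random(seed)
    base = predict(tokens)
    total = {t: 0.0 for t in set(tokens)}
    count = {t: 0 for t in set(tokens)}
    for _ in range(n_samples):
        # Keep each token independently with probability 0.5
        mask = [rng.random() < 0.5 for _ in tokens]
        kept = [t for t, keep in zip(tokens, mask) if keep]
        drop = base - predict(kept)
        # Credit the score drop to every token that was masked out
        for t, keep in zip(tokens, mask):
            if not keep:
                total[t] += drop
                count[t] += 1
    return {t: total[t] / max(count[t], 1) for t in total}

# Toy classifier: score is 1.0 iff the word "cat" is present
tokens = ["cat", "dog", "fish"]
imp = perturbation_importance(lambda ts: 1.0 if "cat" in ts else 0.0, tokens)
```

On this toy classifier, "cat" ends up with the highest importance, since every sample that masks it loses the full score. The efficiency caveat above is visible here too: each explanation costs `n_samples` extra predictions, versus a single pass for the in-model approach.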

@tomtung (Owner) commented Mar 5, 2020

Thanks for the suggestion!

@rabitwhte I think LIME should be general enough, although it might be a bit slow depending on your use case. If it's useful, I can also try to find some time to implement the method mentioned above, which can be done very efficiently.

@rabitwhte (Author)

@tomtung I hope you will find some time then! :)

tomtung self-assigned this Mar 5, 2020
@trpstra commented Mar 26, 2020

> Yeah, it should be possible in principle: as it walks down the trees during prediction, it should be possible to keep track of, e.g., how much each feature contributed to the prediction of each top label. I'm still thinking about what kind of "explanation" is useful / interpretable though. Do you have any suggestions? E.g., what kind of use case do you have in mind?
>
> I guess one natural choice is, for each top label and each non-zero feature in an example, to calculate the score when this feature is set to zero, and take the difference from the actual score.

I think that makes sense as a first approximation, but it may have trouble accounting for the effect of the nonlinearities in the cascade of clustering steps, and for the label correlations that parabel models so nicely. Still, it is probably the sweet spot in terms of utility vs. computational cost. I would love to try this out on my data too.
Cheers!

tomtung added the enhancement (New feature or request) label Jul 2, 2020