I am using an XGBoost model as the classification function and trying to use Anchors as an explainability technique on top of XGBoost.
I am using the code below to implement Anchors; however, the anchors that are output contain all the features (for most instances in the test data), which is very hard to read (and therefore not that interpretable). Moreover, the precision for the whole anchor, given a threshold of 0.8, is only 0.33.
If precision is still not 1 even with all the features in the anchor, it means the discretization is too coarse. Try using `discretizer='decile'` or providing your own discretizer.
It may very well be the case that the model is too 'jumpy', in which case a full anchor is the right answer even if it's not useful (this is a limitation of anchors, as discussed in the paper). But it sounds like the problem here is likely the discretization.
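To see why bin granularity matters: Anchors turns each continuous feature into discrete bins and builds its rules from those bins, so coarser bins mean blunter predicates and lower achievable precision. The snippet below is a standalone NumPy sketch (not the library's actual code) comparing equal-frequency quartile vs. decile binning on a made-up continuous feature; the feature values and the `discretize` helper are illustrative only.

```python
import numpy as np

# Hypothetical continuous feature, standing in for one HELOC column.
rng = np.random.default_rng(0)
feature = rng.normal(loc=650.0, scale=50.0, size=1000)

def discretize(values, n_bins):
    """Equal-frequency binning: cut at interior percentiles.

    For n_bins=4 the cut points are the 25th/50th/75th percentiles
    (quartiles); for n_bins=10 they are the deciles.
    """
    cuts = np.percentile(values, np.linspace(0, 100, n_bins + 1)[1:-1])
    return np.digitize(values, cuts)

quartile_bins = discretize(feature, 4)   # coarse: 4 bins of ~250 points each
decile_bins = discretize(feature, 10)    # finer: 10 bins of ~100 points each

# A decile predicate like "640 < x <= 660" constrains the perturbation
# space much more tightly than a quartile predicate like "616 < x <= 683",
# so each anchor condition buys more precision and fewer features are
# needed to reach the threshold.
print(len(np.unique(quartile_bins)), len(np.unique(decile_bins)))  # → 4 10
```

With only four bins per feature, each individual condition rules out little of the perturbation space, which is consistent with the anchor piling up every feature and still reaching only 0.33 precision.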
I am using the HELOC dataset, which can be downloaded from https://community.fico.com/s/explainable-machine-learning-challenge?tabset-3158a=2.
Here is a screenshot of a sample anchor:
Is there something I can do from my end to improve this?
Thanks,
Lara