Merge pull request #21 from ruivieira/docs-kserve-explainer
docs: Add KServe explainer tutorial
ruivieira authored May 8, 2024
2 parents c279043 + 053314e commit 84d21c6
Showing 16 changed files with 546 additions and 193 deletions.
3 changes: 3 additions & 0 deletions diagrams/trustyai-kserve-explainer.svg
20 changes: 20 additions & 0 deletions docs/modules/ROOT/attachments/kserve-explainer-payload.json
@@ -0,0 +1,20 @@
{
"instances": [
[
404,
1,
1,
20,
1,
144481.56,
1,
56482.48,
1,
372,
0,
0,
1,
2
]
]
}
@@ -0,0 +1,16 @@
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "explainer-test-lime"
spec:
predictor: <1>
model:
modelFormat:
name: sklearn
protocolVersion: v2
runtime: kserve-sklearnserver
storageUri: https://github.com/trustyai-explainability/model-collection/raw/bank-churn/model.joblib <2>
explainer: <3>
containers:
- name: explainer
image: quay.io/trustyai/trustyai-kserve-explainer:latest <4>
20 changes: 20 additions & 0 deletions docs/modules/ROOT/examples/inference-service-explainer-lime.yaml
@@ -0,0 +1,20 @@
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
name: "explainer-test-lime"
annotations:
sidecar.istio.io/inject: "true"
sidecar.istio.io/rewriteAppHTTPProbers: "true"
serving.knative.openshift.io/enablePassthrough: "true"
spec:
predictor: <1>
model:
modelFormat:
name: sklearn
protocolVersion: v2
runtime: kserve-sklearnserver
storageUri: https://github.com/trustyai-explainability/model-collection/raw/bank-churn/model.joblib <2>
explainer: <3>
containers:
- name: explainer
image: quay.io/trustyai/trustyai-kserve-explainer:latest <4>
38 changes: 38 additions & 0 deletions docs/modules/ROOT/examples/kserve-explainer-lime-saliencies.json
@@ -0,0 +1,38 @@
{
"timestamp": "2024-05-06T21:42:45.307+00:00",
"type": "explanation",
"saliencies": {
"outputs-0": [
{
"name": "inputs-12",
"score": 0.8496797810357467,
"confidence": 0
},
{
"name": "inputs-5",
"score": 0.6830766647546147,
"confidence": 0
},
{
"name": "inputs-7",
"score": 0.6768475400887952,
"confidence": 0
},
{
"name": "inputs-9",
"score": 0.018349706373627164,
"confidence": 0
},
{
"name": "inputs-3",
"score": 0.10709513039521452,
"confidence": 0
},
{
"name": "inputs-11",
"score": 0,
"confidence": 0
}
]
}
}
20 changes: 20 additions & 0 deletions docs/modules/ROOT/examples/kserve-explainer-payload.json
@@ -0,0 +1,20 @@
{
"instances": [
[
404,
1,
1,
20,
1,
144481.56,
1,
56482.48,
1,
372,
0,
0,
1,
2
]
]
}
3 changes: 3 additions & 0 deletions docs/modules/ROOT/images/trustyai-kserve-explainer.svg
37 changes: 37 additions & 0 deletions docs/modules/ROOT/images/trustyai_icon.svg
3 changes: 3 additions & 0 deletions docs/modules/ROOT/nav.adoc
@@ -10,8 +10,11 @@
** xref:data-drift-monitoring.adoc[]
** xref:accessing-service-from-python.adoc[]
** xref:saliency-explanations.adoc[]
*** xref:saliency-explanations-on-odh.adoc[]
*** xref:saliency-explanations-with-kserve.adoc[]
* Components
** xref:trustyai-service.adoc[]
** xref:trustyai-operator.adoc[]
** xref:python-trustyai.adoc[]
** xref:trustyai-core.adoc[]
** xref:component-kserve-explainer.adoc[]
29 changes: 29 additions & 0 deletions docs/modules/ROOT/pages/component-kserve-explainer.adoc
@@ -0,0 +1,29 @@
= KServe explainer

image::trustyai-kserve-explainer.svg[TrustyAI KServe architecture diagram]

The TrustyAI KServe Explainer is a component that provides explanations for model predictions, using KServe's built-in explainer support footnote:fn-kserveexplainer[Documentation available at https://kserve.github.io/website/0.12/modelserving/explainer/explainer/[KServe explainers section].]. It supports the xref:local-explainers.adoc#lime[LIME] and xref:local-explainers.adoc#shap[SHAP] explanation methods, configurable directly within KServe `InferenceServices`.

== Features

- **Explainability**: Integrated support for xref:local-explainers.adoc#lime[LIME] and xref:local-explainers.adoc#shap[SHAP] explanation methods to interpret model predictions via the `:explain` endpoint.

== Deployment on KServe

The TrustyAI explainer can be added to KServe `InferenceServices` and can be configured to use either xref:local-explainers.adoc#lime[LIME] or xref:local-explainers.adoc#shap[SHAP] explanation methods by modifying the YAML configuration.

When deployed, KServe manages the routing of requests to the appropriate container. Calls to `/v1/models/model:predict` will be sent to the predictor container, while calls to `/v1/models/model:explain` will be sent to the explainer container. The payloads for both endpoints are the same, but the `:predict` endpoint returns the model's prediction, while the `:explain` endpoint returns an explanation of the prediction.
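The endpoint layout above can be sketched as follows (a hypothetical helper for illustration; the base URL and port are assumptions, not values from this page):

```python
def kserve_url(base: str, model: str, verb: str) -> str:
    """Build a KServe v1 endpoint URL; `verb` is 'predict' or 'explain'."""
    return f"{base}/v1/models/{model}:{verb}"

# The same JSON payload is sent to both endpoints; only the verb differs.
payload = {"instances": [[404, 1, 1, 20, 1, 144481.56, 1, 56482.48, 1, 372, 0, 0, 1, 2]]}

predict_url = kserve_url("http://localhost:8080", "explainer-test-lime", "predict")
explain_url = kserve_url("http://localhost:8080", "explainer-test-lime", "explain")
```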

=== LIME Explainer

By default, the TrustyAI KServe explainer uses the xref:local-explainers.adoc#lime[LIME] explainer. You can deploy the explainer by specifying the appropriate container image and any necessary configuration in the `InferenceService` YAML.

=== SHAP Explainer

To use the xref:local-explainers.adoc#shap[SHAP] explainer, you can deploy the explainer by specifying it as an environment variable in the `InferenceService` YAML configuration.

== Interacting with the Explainer

You can interact with the explainer using the `:explain` endpoint. By sending a JSON payload containing the necessary input data, you can retrieve an explanation for the model's prediction. The response structure includes the saliencies of each feature contributing to the prediction.
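As a sketch of working with such a response, the saliencies for an output can be ranked by score (the JSON below is abridged from the example response in this repository; the helper function is illustrative, not part of TrustyAI):

```python
import json

# Abridged :explain response for one output
response = json.loads("""
{
  "type": "explanation",
  "saliencies": {
    "outputs-0": [
      {"name": "inputs-12", "score": 0.8496797810357467, "confidence": 0},
      {"name": "inputs-5",  "score": 0.6830766647546147, "confidence": 0},
      {"name": "inputs-7",  "score": 0.6768475400887952, "confidence": 0}
    ]
  }
}
""")

def top_features(resp, output="outputs-0", k=2):
    """Rank the features of one output by absolute saliency score."""
    sal = resp["saliencies"][output]
    return [s["name"] for s in sorted(sal, key=lambda s: abs(s["score"]), reverse=True)[:k]]

print(top_features(response))  # ['inputs-12', 'inputs-5']
```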

A full tutorial on how to deploy the TrustyAI KServe explainer is available at xref:saliency-explanations-with-kserve.adoc[Saliency Explanations with KServe].
38 changes: 36 additions & 2 deletions docs/modules/ROOT/pages/local-explainers.adoc
@@ -28,9 +28,43 @@ encoded dataset stem:[E] is built by taking non-zero elements of stem:[x^{\prime
original representation stem:[z \in \mathbb{R}^d] and then computing stem:[f(z)]. A weighted linear model stem:[g] (with weights provided via stem:[\pi_x]) is then trained upon the generated
sparse dataset stem:[E] and the model weights stem:[w] are used as feature weights for the final explanation stem:[\xi(x)].
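The procedure above can be sketched in a few lines (a minimal, self-contained illustration of the idea, not TrustyAI's implementation; the exponential kernel and uniform binary sampling are simplifying assumptions):

```python
import numpy as np

def lime_explain(f, x, num_samples=1000, kernel_width=0.75, seed=0):
    """Minimal LIME-style sketch: perturb x with binary masks, weight samples
    by proximity, fit a weighted linear surrogate, and return its coefficients
    as per-feature saliency weights."""
    rng = np.random.default_rng(seed)
    d = len(x)
    Z = rng.integers(0, 2, size=(num_samples, d))  # masks z': keep/drop each feature
    y = np.array([f(z * x) for z in Z])            # evaluate f on masked inputs
    # Proximity kernel pi_x: perturbations closer to x get higher weight
    dist = (1 - Z).sum(axis=1) / d
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares for the surrogate g (intercept + one weight per feature)
    sw = np.sqrt(w)
    A = np.hstack([np.ones((num_samples, 1)), Z])
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # feature weights, used as the explanation
```

For a model that depends only on its first feature, such as `lambda v: 3 * v[0]`, the recovered weights concentrate on that feature and vanish elsewhere.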

== SHAP

SHAP, presented by Scott Lundberg and Su-In Lee in 2017<<lundberg2017>>, seeks to unify a number of common explanation methods, notably LIME <<ribeiro2016>> and DeepLIFT <<shrikumar2017>>, under a common umbrella of additive feature attributions. These are explanation methods that explain how an input stem:[x = [x_1, x_2, ..., x_M ]] affects the output of some model stem:[f] by transforming stem:[x \in \mathbb{R}^M] into simplified inputs stem:[z^{\prime} \in \{0, 1\}^M], such that stem:[z^{\prime}_i] indicates the inclusion or exclusion of feature stem:[i]. These simplified inputs are then passed to an explanatory model stem:[g] of the following form:

[stem]
++++
x = h_x(z^{\prime}) \\
g(z^{\prime}) = \phi_0 + \sum_{i=1}^M \phi_i z_i^{\prime} \\
\textbf{s.t.}\quad g(z^{\prime}) \approx f (h_x(z^{\prime}))
++++

In such a form, each value stem:[\phi_i] marks the contribution that feature stem:[i] makes to the model output (called the attribution), and stem:[\phi_0] marks the null output of the model: the model output when every feature is excluded. This presents an easily interpretable explanation of the importance of each feature, and a framework for permuting the various input features to establish their collective contributions.

The final result of the algorithm is the Shapley value of each feature, which gives an itemized "receipt" of all the factors contributing to the decision. For example, a SHAP explanation of a loan application might be as follows:

[options="header"]
|===
|Feature | Shapley Value φ
|Null Output | 50%
|Income | +10%
|# Children | -15%
|Age | +22%
|Own Home? | -30%
|Acceptance% | 37%
|Deny | 63%
|===


From this, the applicant can see that the biggest contributor to their denial was their home ownership status, which reduced their acceptance probability by 30 percentage points. Meanwhile, their age was of particular benefit, increasing their probability by 22 percentage points.
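The additive structure of the table can be checked directly: with every feature included (all stem:[z^{\prime}_i = 1]), the null output plus the attributions must sum to the model's prediction. Using the hypothetical values from the table above:

```python
# Hypothetical values from the loan-application table (percentage points)
phi_0 = 50.0  # null output: the prediction with every feature excluded
attributions = {
    "Income": +10.0,
    "# Children": -15.0,
    "Age": +22.0,
    "Own Home?": -30.0,
}

# g(z') = phi_0 + sum_i phi_i * z'_i, with all z'_i = 1
acceptance = phi_0 + sum(attributions.values())
print(acceptance)  # 37.0, matching the table's acceptance probability
```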

[bibliography]
== References

* [[[ribeiro2016]]] Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)
* [[[lundberg2017]]] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems (2017)
* [[[shrikumar2017]]] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. CoRR abs/1704.02685 (2017)
1 change: 1 addition & 0 deletions docs/modules/ROOT/pages/main.adoc
@@ -10,6 +10,7 @@ TrustyAI consists of several components, including:
* xref:trustyai-service.adoc[TrustyAI service], TrustyAI-as-a-service, a REST service for fairness metrics and explainability algorithms including ModelMesh integration.
* xref:trustyai-operator.adoc[TrustyAI operator], a Kubernetes operator for TrustyAI service.
* xref:python-trustyai.adoc[Python TrustyAI], a Python library allowing the usage of TrustyAI's toolkit from Jupyter notebooks
* xref:component-kserve-explainer.adoc[KServe explainer], a TrustyAI side-car that integrates with KServe's built-in explainability features.

== Glossary
