From cb94131dcb8463d9ac5ea37bd110a2ce70b3d94b Mon Sep 17 00:00:00 2001
From: Pringled
Date: Thu, 20 Feb 2025 13:19:13 +0100
Subject: [PATCH] Updated docs

---
 model2vec/train/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/model2vec/train/README.md b/model2vec/train/README.md
index decc104..60f5551 100644
--- a/model2vec/train/README.md
+++ b/model2vec/train/README.md
@@ -106,7 +106,7 @@ The scores are competitive with the popular [roberta-base-go_emotions](https://h
 
 ## Explainability
 
-We offer a simple explainability method that allows you to see the most important tokens for a prediction. This is based on the logit outputs for the tokens in the input text, which we extract by forward passing them individually through the trained classifier. Since our classifier is a simple mean embedding followed by a single linear layer (meaning there is interaction between tokens), this is a good approximation of the importance of each token. The following code example shows how this works:
+We offer a simple explainability method that allows you to see the most important tokens for a prediction. This is based on the logit outputs for the tokens in the input text, which we extract by forward passing them individually through the trained classifier. Since our classifier is a simple mean embedding followed by a single linear layer (meaning there is no interaction between tokens), this is a good approximation of the importance of each token. The following code example shows how this works:
 
 ```python
 from datasets import load_dataset