Apply ruff to metrics_sample.py
sadra-barikbin committed Jul 17, 2024
1 parent 7668844 commit 5c7f67d
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions src/lighteval/metrics/metrics_sample.py
src/lighteval/metrics/metrics_sample.py
@@ -418,7 +418,7 @@ def __init__(
         normalize_gold: callable = None,
         normalize_pred: callable = None,
     ):
-        """A BERT scorer class. Relies on some called extracted from `bert-score`. By default, will use the
+        r"""A BERT scorer class. Relies on some called extracted from `bert-score`. By default, will use the
         `microsoft/deberta-large-mnli` as scorer. For each tokenized (pred, target) pair, it computes Precision,
         Recall and F1 as following:
@@ -427,7 +427,7 @@ def __init__(
         Recall = \sum_{t=1}^{len(target)} \div{max(Cos.Sim.(target_t, pred))}{IDF(target_t)}
         F1 = \div{Precision * Recall}{Precision + Recall}
         in which `Cos.Sim.` is the Cosine Similarity metric and `IDF(.)` represents the Inverse Document
         Frequency of its input token. It defaults to 1 for all tokens and 0 for EOS and SEP tokens.
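The fix itself is the `r` prefix on the docstring. A minimal sketch (not part of the commit) of what it addresses: the docstring contains LaTeX-style escapes such as `\sum` and `\div`, which are invalid escape sequences in an ordinary string literal. CPython keeps the backslash but warns at compile time (DeprecationWarning; SyntaxWarning since Python 3.12), which is what ruff's invalid-escape-sequence rule (W605) flags; a raw string carries the same text with no warning.

```python
import warnings

# Compile source text containing the plain (non-raw) literal "\sum_{t=1}"
# and capture the invalid-escape warning it triggers. The r'...' here only
# keeps the backslash in the *source text* we hand to compile().
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    code = compile(r'"\sum_{t=1}"', "<docstring>", "eval")

num_warnings = len(caught)  # the invalid-escape warning was emitted
value = eval(code)          # the backslash survives despite the warning
```

So the commit changes no runtime behavior; it only silences the compile-time warning by marking the docstring as a raw string, since `r"\sum"` and `"\sum"` currently evaluate to the same text.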

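The formulas in the diff context describe greedy-matching BERTScore: each token is matched to its most cosine-similar counterpart on the other side, weighted by IDF. A hypothetical toy sketch of that computation (not the lighteval implementation, which relies on code extracted from the `bert-score` package) with uniform IDF weights of 1 and the standard harmonic-mean F1 (2PR/(P+R)):

```python
import numpy as np

def bertscore_f1(pred_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """Toy greedy-matching BERTScore F1 over token embedding matrices,
    assuming uniform IDF weights of 1 for every token."""
    # Normalize rows so dot products are cosine similarities.
    pred = pred_emb / np.linalg.norm(pred_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = pred @ tgt.T                  # [len(pred), len(target)] cosine matrix
    precision = sim.max(axis=1).mean()  # each pred token -> best target token
    recall = sim.max(axis=0).mean()     # each target token -> best pred token
    return 2 * precision * recall / (precision + recall)
```

With identical prediction and target embeddings every token matches itself perfectly, so precision, recall, and F1 are all 1.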