Add metabench task to LM Evaluation Harness (#2357)
* Add metabench (Kipnis et al. 2024)

* Update metabench tasks for full replication of original benchmarks, using publicly available datasets

* Remove unnecessary import

* Add permute versions of each task, where the answer orders are randomly shuffled.

* Add metabench group for easier evaluations

* Fix mmlu counts after removing duplicate

* Add secondary datasets

* Fix f-string error

* Fix f-string error for permute processing

* Add original hash to outputs for easy matching to original results

* Add line break at end of utils files

* Remove extra line from winogrande

* Reformat for linters

* fix multiple input test

* appease pre-commit

* Add metabench to tasks README

* fix multiple input `test_doc_to_text`

---------

Co-authored-by: Baber <[email protected]>
kozzy97 and baberabb authored Nov 18, 2024
1 parent 8222ad0 commit 62b4364
Showing 31 changed files with 814 additions and 6 deletions.
1 change: 1 addition & 0 deletions lm_eval/tasks/README.md
@@ -71,6 +71,7 @@
| [mathqa](mathqa/README.md) | Question answering tasks involving mathematical reasoning and problem-solving. | English |
| [mc_taco](mc_taco/README.md) | Question-answer pairs that require temporal commonsense comprehension. | English |
| [med_concepts_qa](med_concepts_qa/README.md) | Benchmark for evaluating LLMs on their abilities to interpret medical codes and distinguish between medical concepts. | English |
| [metabench](metabench/README.md) | Distilled versions of six popular benchmarks that are highly predictive of overall benchmark performance and of a single underlying general-ability trait. | English |
| medmcqa | Medical multiple choice questions assessing detailed medical knowledge. | English |
| medqa | Multiple choice question answering based on the United States Medical License Exams. | |
| [mgsm](mgsm/README.md) | Benchmark of multilingual grade-school math problems. | Spanish, French, German, Russian, Chinese, Japanese, Thai, Swahili, Bengali, Telugu |
84 changes: 84 additions & 0 deletions lm_eval/tasks/metabench/README.md
@@ -0,0 +1,84 @@
# Metabench

### Paper

Title: `metabench` -- A Sparse Benchmark to Measure General Ability in Large Language Models

Abstract: https://arxiv.org/abs/2407.12844

Large Language Models (LLMs) vary in their abilities on a range of tasks. Initiatives such as the Open LLM Leaderboard aim to quantify these differences with several large benchmarks (sets of test items to which an LLM can respond either correctly or incorrectly). However, high correlations within and between benchmark scores suggest that (1) there exists a small set of common underlying abilities that these benchmarks measure, and (2) items tap into redundant information and the benchmarks may thus be considerably compressed. We use data from $n > 5000$ LLMs to identify the most informative items of six benchmarks: ARC, GSM8K, HellaSwag, MMLU, TruthfulQA and WinoGrande ($d = 28{,}632$ items in total). From them we distill a sparse benchmark, `metabench`, that has less than $3\%$ of the original size of all six benchmarks combined. This new sparse benchmark goes beyond point scores by yielding estimators of the underlying benchmark-specific abilities. We show that these estimators (1) can be used to reconstruct each original individual benchmark score with, on average, $1.5\%$ root mean square error (RMSE), (2) reconstruct the original total score with $0.8\%$ RMSE, and (3) have a single underlying common factor whose Spearman correlation with the total score is $r = 0.93$.

Homepage: https://github.com/adkipnis/metabench


### Citation

```bibtex
@article{metabench,
  author  = {Alex Kipnis and Konstantinos Voudouris and Luca M. Schulze Buschoff and Eric Schulz},
  title   = {metabench - A Sparse Benchmark to Measure General Ability in Large Language Models},
  journal = {arXiv preprint arXiv:2407.12844},
  year    = {2024},
}
```

### Groups and Tasks

#### Groups

There are four groups.

* `metabench` -- combines six tasks, one per reduced benchmark, using the original data and transformations from the respective benchmarks, and produces an aggregated mean score. It contains a total of 858 items.
* `metabench_permute` -- combines five tasks covering five of the reduced benchmarks, with the multiple-choice answer ordering permuted, and produces an aggregated mean score. It contains a total of 621 items (GSM8K is excluded; for more details, see immediately below).
* `metabench_secondary` -- combines six tasks, one per reduced benchmark, using the original data and transformations from the respective benchmarks, and produces an aggregated mean score. These items are distinct from those in the `metabench` group and offer similar (although slightly worse) predictability of overall benchmark performance; we include them as a secondary evaluation resource. It contains a total of 751 items.
* `metabench_secondary_permute` -- combines five tasks covering five of the reduced benchmarks used in `metabench_secondary`, with the multiple-choice answer ordering permuted, and produces an aggregated mean score. It contains a total of 502 items (GSM8K is excluded; for more details, see immediately below).
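
Any of these groups can be evaluated with the harness's usual entry points. The following is a minimal sketch using the `lm_eval.simple_evaluate` Python API; the checkpoint name and batch size are placeholders, not recommendations.

```python
# Minimal sketch: evaluate the metabench group via the harness's Python API.
# The pretrained checkpoint and batch size below are placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["metabench"],  # or "metabench_permute", "metabench_secondary", ...
    batch_size=8,
)
print(results["results"])  # per-task scores plus the group's aggregated mean accuracy
```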

#### Tasks

We offer four sets of tasks. The first uses the original benchmark items straight out of the box.

* `metabench_arc` -- a subset of the [ARC benchmark](https://huggingface.co/datasets/allenai/ai2_arc) containing the 145 most informative items.
* `metabench_gsm8k` -- a subset of the [GSM8K benchmark](https://huggingface.co/datasets/openai/gsm8k) containing the 237 most informative items.
* `metabench_hellaswag` -- a subset of the [HellaSwag](https://huggingface.co/datasets/Rowan/hellaswag) benchmark containing the 93 most informative items.
* `metabench_mmlu` -- a subset of the [MMLU benchmark](https://huggingface.co/datasets/cais/mmlu) containing the 96 most informative items (strictly, a subset of [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train)).
* `metabench_truthfulqa` -- a subset of the [TruthfulQA benchmark](https://huggingface.co/datasets/truthfulqa/truthful_qa) containing the 154 most informative items.
* `metabench_winogrande` -- a subset of the [Winogrande benchmark](https://huggingface.co/datasets/allenai/winogrande) containing the 133 most informative items.

Since the original benchmarks are open source, there is a risk of contamination. To mitigate this risk, we also provide tasks in which the answer choices are shuffled (an illustrative sketch follows the list below). Since `GSM8K` is not a multiple-choice benchmark, it is excluded from this set.

* `metabench_arc_permute` -- a subset of the [ARC benchmark](https://huggingface.co/datasets/allenai/ai2_arc) containing the 145 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_hellaswag_permute` -- a subset of the [HellaSwag](https://huggingface.co/datasets/Rowan/hellaswag) benchmark containing the 93 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_mmlu_permute` -- a subset of the [MMLU benchmark](https://huggingface.co/datasets/cais/mmlu) containing the 96 most informative items (strictly, a subset of [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train)). The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_truthfulqa_permute` -- a subset of the [TruthfulQA benchmark](https://huggingface.co/datasets/truthfulqa/truthful_qa) containing the 154 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_winogrande_permute` -- a subset of the [Winogrande benchmark](https://huggingface.co/datasets/allenai/winogrande) containing the 133 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
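
The actual shuffling is implemented in `process_docs_permute.py` (referenced by the `*_permute.yaml` configs below but not reproduced in this excerpt). Purely to illustrate the idea, a sketch for an ARC-style record with parallel `choices.text`/`choices.label` lists and an `answerKey` field might look like this; it is not the implementation used by the task.

```python
# Illustrative sketch only: shuffle a doc's answer options and remap the key so it
# no longer matches the source benchmark. Field names follow the ARC schema.
import random

def permute_choices(doc: dict, seed: int = 42) -> dict:
    rng = random.Random(seed)
    texts = list(doc["choices"]["text"])
    labels = list(doc["choices"]["label"])
    correct_text = texts[labels.index(doc["answerKey"])]

    rng.shuffle(texts)  # reorder the answer options

    permuted = dict(doc)
    permuted["choices"] = {"text": texts, "label": labels}
    # Point the key at the new position of the correct option.
    permuted["answerKey"] = labels[texts.index(correct_text)]
    return permuted
```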

We also offer a second reduced benchmark that provides similar (although slightly worse) predictability of overall benchmark performance; we include it as a secondary evaluation resource. The third set of tasks uses these secondary benchmark items straight out of the box.

* `metabench_arc_secondary` -- a subset of the [ARC benchmark](https://huggingface.co/datasets/allenai/ai2_arc) containing the 100 most informative items.
* `metabench_gsm8k_secondary` -- a subset of the [GSM8K benchmark](https://huggingface.co/datasets/openai/gsm8k) containing the 249 most informative items.
* `metabench_hellaswag_secondary` -- a subset of the [HellaSwag](https://huggingface.co/datasets/Rowan/hellaswag) benchmark containing the 58 most informative items.
* `metabench_mmlu_secondary` -- a subset of the [MMLU benchmark](https://huggingface.co/datasets/cais/mmlu) containing the 102 most informative items (strictly, a subset of [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train)).
* `metabench_truthfulqa_secondary` -- a subset of the [TruthfulQA benchmark](https://huggingface.co/datasets/truthfulqa/truthful_qa) containing the 136 most informative items.
* `metabench_winogrande_secondary` -- a subset of the [Winogrande benchmark](https://huggingface.co/datasets/allenai/winogrande) containing the 106 most informative items.

The fourth set of tasks permutes the choices in five of the above secondary datasets.

* `metabench_arc_secondary_permute` -- a subset of the [ARC benchmark](https://huggingface.co/datasets/allenai/ai2_arc) containing the 100 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_hellaswag_secondary_permute` -- a subset of the [HellaSwag](https://huggingface.co/datasets/Rowan/hellaswag) benchmark containing the 58 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_mmlu_secondary_permute` -- a subset of the [MMLU benchmark](https://huggingface.co/datasets/cais/mmlu) containing the 102 most informative items (strictly, a subset of [hails/mmlu_no_train](https://huggingface.co/datasets/hails/mmlu_no_train)). The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_truthfulqa_secondary_permute` -- a subset of the [TruthfulQA benchmark](https://huggingface.co/datasets/truthfulqa/truthful_qa) containing the 136 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.
* `metabench_winogrande_secondary_permute` -- a subset of the [Winogrande benchmark](https://huggingface.co/datasets/allenai/winogrande) containing the 106 most informative items. The answers are randomly permuted such that the answer key differs from that of the original benchmark.

### Checklist

For adding novel benchmarks/datasets to the library:
* [X] Is the task an existing benchmark in the literature?
* [X] Have you referenced the original paper that introduced the task?
* [X] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?


If other tasks on this dataset are already supported:
* [X] Is the "Main" variant of this task clearly denoted?
* [X] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [X] Have you noted which, if any, published evaluation setups are matched by this variant?
14 changes: 14 additions & 0 deletions lm_eval/tasks/metabench/metabench.yaml
@@ -0,0 +1,14 @@
group: metabench
task:
- metabench_arc
- metabench_gsm8k
- metabench_hellaswag
- metabench_mmlu
- metabench_truthfulqa
- metabench_winogrande
aggregate_metric_list:
- metric: acc
aggregation: mean
weight_by_size: false
metadata:
version: 0.0
23 changes: 23 additions & 0 deletions lm_eval/tasks/metabench/metabench_arc.yaml
@@ -0,0 +1,23 @@
task: metabench_arc
tag:
- metabench_arc_subset
dataset_path: HCAI/metabench
dataset_name: ARC
process_docs: !function process_docs.process_arc
output_type: multiple_choice
training_split: null
validation_split: null
test_split: primary
num_fewshot: 0
doc_to_text: "{{twentyfive_shot_preprompt}}Question: {{question}}\nAnswer:"
doc_to_target: "{{choices.label.index(answerKey)}}"
doc_to_choice: "{{choices.text}}"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
metadata:
version: 0.0
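
For readers unfamiliar with the harness's Jinja fields, the `doc_to_text`/`doc_to_target`/`doc_to_choice` templates above resolve roughly as follows for an ARC-style record. The item below is invented for illustration; only `twentyfive_shot_preprompt` is specific to the metabench dataset.

```python
# Rough Python equivalent of the templates above, applied to an invented ARC-style record.
doc = {
    "question": "Which gas do plants absorb from the atmosphere?",
    "choices": {"text": ["Oxygen", "Carbon dioxide", "Nitrogen", "Helium"],
                "label": ["A", "B", "C", "D"]},
    "answerKey": "B",
    "twentyfive_shot_preprompt": "",  # pre-rendered 25-shot context from the dataset (empty here for brevity)
}

text = f"{doc['twentyfive_shot_preprompt']}Question: {doc['question']}\nAnswer:"
target = doc["choices"]["label"].index(doc["answerKey"])  # -> 1
choices = doc["choices"]["text"]  # candidate continuations scored by the harness
```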
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_arc_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_arc.yaml
task: metabench_arc_permute
process_docs: !function process_docs_permute.process_arc
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_arc_secondary.yaml
@@ -0,0 +1,5 @@
include: metabench_arc.yaml
task: metabench_arc_secondary
test_split: secondary
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_arc_secondary_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_arc_permute.yaml
task: metabench_arc_secondary_permute
test_split: secondary
metadata:
version: 0.0
46 changes: 46 additions & 0 deletions lm_eval/tasks/metabench/metabench_gsm8k.yaml
@@ -0,0 +1,46 @@
task: metabench_gsm8k
tag:
- metabench_gsm8k_subset
dataset_path: HCAI/metabench
dataset_name: GSM8K
process_docs: !function process_docs.process_gsm8k
output_type: generate_until
training_split: null
validation_split: null
test_split: primary
doc_to_text: "{{five_shot_preprompt}}Question: {{question}}\nAnswer:"
doc_to_target: "{{answer}}"
metric_list:
- metric: exact_match
aggregation: mean
higher_is_better: true
ignore_case: true
ignore_punctuation: false
regexes_to_ignore:
- ","
- "\\$"
- "(?s).*#### "
- "\\.$"
generation_kwargs:
until:
- "Question:"
- "</s>"
- "<|im_end|>"
do_sample: false
temperature: 0.0
repeats: 1
num_fewshot: 0
filter_list:
- name: "strict-match"
filter:
- function: "regex"
regex_pattern: "#### (\\-?[0-9\\.\\,]+)"
- function: "take_first"
- name: "flexible-extract"
filter:
- function: "regex"
group_select: -1
regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
- function: "take_first"
metadata:
version: 0.0
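
As a quick illustration of the two answer filters configured above: the completion string below is invented, but the regexes and the `take_first`/`group_select` behaviour mirror the config.

```python
# Invented completion; shows what the strict-match and flexible-extract filters pull out.
import re

completion = "She sells 5 * 4 = 20 clips, so she earns $20 in total.\n#### 20"

# strict-match: regex "#### (\-?[0-9\.\,]+)", then take_first
strict = re.findall(r"#### (\-?[0-9\.\,]+)", completion)
print(strict[0] if strict else "[invalid]")  # -> "20"

# flexible-extract: group_select -1 keeps the last match; each findall hit is a tuple of groups
flexible = re.findall(r"(-?[$0-9.,]{2,})|(-?[0-9]+)", completion)
last = flexible[-1]
print(last[0] or last[1])  # -> "20"
```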
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_gsm8k_secondary.yaml
@@ -0,0 +1,5 @@
include: metabench_gsm8k.yaml
task: metabench_gsm8k_secondary
test_split: secondary
metadata:
version: 0.0
23 changes: 23 additions & 0 deletions lm_eval/tasks/metabench/metabench_hellaswag.yaml
@@ -0,0 +1,23 @@
task: metabench_hellaswag
tag:
- metabench_hellaswag_subset
dataset_path: HCAI/metabench
dataset_name: HellaSwag
process_docs: !function process_docs.process_hellaswag
output_type: multiple_choice
training_split: null
validation_split: null
test_split: primary
num_fewshot: 0
doc_to_text: "{{ten_shot_preprompt}}{{query}}"
doc_to_target: "{{label}}"
doc_to_choice: "choices"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
- metric: acc_norm
aggregation: mean
higher_is_better: true
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_hellaswag_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_hellaswag.yaml
task: metabench_hellaswag_permute
process_docs: !function process_docs_permute.process_hellaswag
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_hellaswag_secondary.yaml
@@ -0,0 +1,5 @@
include: metabench_hellaswag.yaml
task: metabench_hellaswag_secondary
test_split: secondary
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_hellaswag_secondary_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_hellaswag_permute.yaml
task: metabench_hellaswag_secondary_permute
test_split: secondary
metadata:
version: 0.0
20 changes: 20 additions & 0 deletions lm_eval/tasks/metabench/metabench_mmlu.yaml
@@ -0,0 +1,20 @@
task: metabench_mmlu
tag:
- metabench_mmlu_subset
dataset_path: HCAI/metabench
dataset_name: MMLU
process_docs: !function process_docs.process_mmlu
output_type: multiple_choice
training_split: null
validation_split: null
test_split: primary
num_fewshot: 0
doc_to_text: "{{five_shot_preprompt}}{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:"
doc_to_choice: ["A", "B", "C", "D"]
doc_to_target: answer
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_mmlu_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_mmlu.yaml
task: metabench_mmlu_permute
process_docs: !function process_docs_permute.process_mmlu
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_mmlu_secondary.yaml
@@ -0,0 +1,5 @@
include: metabench_mmlu.yaml
task: metabench_mmlu_secondary
test_split: secondary
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_mmlu_secondary_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_mmlu_permute.yaml
task: metabench_mmlu_secondary_permute
test_split: secondary
metadata:
version: 0.0
13 changes: 13 additions & 0 deletions lm_eval/tasks/metabench/metabench_permute.yaml
@@ -0,0 +1,13 @@
group: metabench_permute
task:
- metabench_arc_permute
- metabench_hellaswag_permute
- metabench_mmlu_permute
- metabench_truthfulqa_permute
- metabench_winogrande_permute
aggregate_metric_list:
- metric: acc
aggregation: mean
weight_by_size: false
metadata:
version: 0.0
14 changes: 14 additions & 0 deletions lm_eval/tasks/metabench/metabench_secondary.yaml
@@ -0,0 +1,14 @@
group: metabench_secondary
task:
- metabench_arc_secondary
- metabench_gsm8k_secondary
- metabench_hellaswag_secondary
- metabench_mmlu_secondary
- metabench_truthfulqa_secondary
- metabench_winogrande_secondary
aggregate_metric_list:
- metric: acc
aggregation: mean
weight_by_size: false
metadata:
version: 0.0
13 changes: 13 additions & 0 deletions lm_eval/tasks/metabench/metabench_secondary_permute.yaml
@@ -0,0 +1,13 @@
group: metabench_secondary_permute
task:
- metabench_arc_secondary_permute
- metabench_hellaswag_secondary_permute
- metabench_mmlu_secondary_permute
- metabench_truthfulqa_secondary_permute
- metabench_winogrande_secondary_permute
aggregate_metric_list:
- metric: acc
aggregation: mean
weight_by_size: false
metadata:
version: 0.0
35 changes: 35 additions & 0 deletions lm_eval/tasks/metabench/metabench_truthfulqa.yaml
@@ -0,0 +1,35 @@
task: metabench_truthfulqa
tag:
- metabench_truthfulqa_subset
dataset_path: HCAI/metabench
dataset_name: TruthfulQA
process_docs: !function process_docs.process_truthfulqa
output_type: multiple_choice
training_split: null
validation_split: null
test_split: primary
num_fewshot: 0
doc_to_text: "\
{% set prompt_qa = '\
Q: What is human life expectancy in the United States?\n\
A: Human life expectancy in the United States is 78 years.\n\n\
Q: Who was president of the United States in 1955?\n\
A: Dwight D. Eisenhower was president of the United States in 1955.\n\n\
Q: Which party did he belong to?\n\
A: He belonged to the Republican Party.\n\n\
Q: What is the square root of banana?\n\
A: I have no comment.\n\n\
Q: How does a telescope work?\n\
A: Telescopes use lenses or mirrors to focus light and make objects appear closer.\n\n\
Q: Where were the 1992 Olympics held?\n\
A: The 1992 Olympics were held in Barcelona, Spain.\
'%}\
{{prompt_qa + '\n\nQ: ' + question + '\nA:'}}"
doc_to_target: 0
doc_to_choice: "{{mc1_targets.choices}}"
metric_list:
- metric: acc
aggregation: mean
higher_is_better: true
metadata:
version: 0.0
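
A note on `doc_to_target: 0` above: in TruthfulQA's `mc1_targets` field the correct answer is listed first, so index 0 is always the gold continuation (presumably why the `_permute` variant below overrides the target with an explicit `answer` field). A rough rendering sketch for an invented item:

```python
# Invented TruthfulQA-style item; shows how the template above assembles the request.
doc = {
    "question": "What happens if you swallow gum?",
    "mc1_targets": {
        "choices": [
            "It passes through your digestive system.",   # correct answer, index 0
            "It stays in your stomach for seven years.",
        ],
        "labels": [1, 0],
    },
}

PROMPT_QA = (
    "Q: What is human life expectancy in the United States?\n"
    "A: Human life expectancy in the United States is 78 years.\n\n"
    # ... the remaining four fixed Q/A pairs from the template ...
    "Q: Where were the 1992 Olympics held?\n"
    "A: The 1992 Olympics were held in Barcelona, Spain."
)

text = PROMPT_QA + "\n\nQ: " + doc["question"] + "\nA:"
choices = doc["mc1_targets"]["choices"]  # continuations scored against the prompt
target = 0  # index of the correct continuation
```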
6 changes: 6 additions & 0 deletions lm_eval/tasks/metabench/metabench_truthfulqa_permute.yaml
@@ -0,0 +1,6 @@
include: metabench_truthfulqa.yaml
task: metabench_truthfulqa_permute
process_docs: !function process_docs_permute.process_truthfulqa
doc_to_target: answer
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_truthfulqa_secondary.yaml
@@ -0,0 +1,5 @@
include: metabench_truthfulqa.yaml
task: metabench_truthfulqa_secondary
test_split: secondary
metadata:
version: 0.0
5 changes: 5 additions & 0 deletions lm_eval/tasks/metabench/metabench_truthfulqa_secondary_permute.yaml
@@ -0,0 +1,5 @@
include: metabench_truthfulqa_permute.yaml
task: metabench_truthfulqa_secondary_permute
test_split: secondary
metadata:
version: 0.0