Merge remote-tracking branch 'origin/main' into wait-for-gpus
AlexTMallen committed Mar 23, 2023
2 parents 579b066 + e2dd9ae commit 4a69b81
Showing 2 changed files with 13 additions and 22 deletions.
24 changes: 8 additions & 16 deletions README.md
@@ -4,7 +4,7 @@

Because language models are trained to predict the next token in naturally occurring text, they often reproduce common human errors and misconceptions, even when they "know better" in some sense. More worryingly, when models are trained to generate text that's rated highly by humans, they may learn to output false statements that human evaluators can't detect. We aim to circumvent this issue by directly [**eliciting latent knowledge**](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) (ELK) inside the activations of a language model.

Specifically, we're building on the **Contrast Consistent Search** (CCS) method described in the paper [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) by Burns et al. (2022). In CCS, we search for features in the hidden states of a language model which satisfy certain logical consistency requirements. It turns out that these features are often useful for question-answering and text classification tasks, even though the features are trained without labels.
Specifically, we're building on the **Contrastive Representation Clustering** (CRC) method described in the paper [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) by Burns et al. (2022). In CRC, we search for features in the hidden states of a language model which satisfy certain logical consistency requirements. It turns out that these features are often useful for question-answering and text classification tasks, even though the features are trained without labels.
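For reference, the consistency and confidence terms that CCS optimizes look roughly like this — a sketch of the loss from Burns et al. (2022), not code from this repository:

```python
# Sketch of the CCS objective (Burns et al. 2022): a probe maps the hidden
# states of a statement and its negation to probabilities p_pos and p_neg.
import torch

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    # Consistency: p(x) and p(not x) should sum to 1.
    consistency = (p_pos - (1 - p_neg)) ** 2
    # Confidence: rule out the degenerate p_pos = p_neg = 0.5 solution.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()
```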

### Quick **Start**

@@ -20,29 +20,21 @@ elk elicit microsoft/deberta-v2-xxlarge-mnli imdb

This will automatically download the model and dataset, run the model and extract the relevant representations if they aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `elk-reporters` folder in your home directory. It will also evaluate the reporter classification performance on a held out test set and save it to a CSV file in the same folder.
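For orientation, a minimal sketch of where those results land on disk; the run name shown here is just an example, since `elk` assigns its own run names:

```python
# Sketch: list what an `elk elicit` run leaves behind under ~/elk-reporters.
# The run name "naughty-northcutt" is illustrative; elk picks its own names.
from pathlib import Path

run_dir = Path.home() / "elk-reporters" / "naughty-northcutt"
for path in sorted(run_dir.rglob("*")):
    print(path.relative_to(run_dir))  # reporter checkpoints, eval CSV, config
```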

The following will generate a CCS (Contrast Consistent Search) reporter instead of the CRC-based reporter, which is the default.

```bash
elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
```

This will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an `eval.csv` and `cfg.yaml` file, which are stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`.

## Caching

The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually `~/.cache/huggingface/datasets`.

## Other commands

To only extract the hidden states for the model `model` and the dataset `dataset` and save them to `my_output_dir`, without training any reporters, you can run:
The following command will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an `eval.csv` and `cfg.yaml` file, which are stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`.

```bash
elk extract microsoft/deberta-v2-xxlarge-mnli imdb -o my_output_dir
elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
```
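As a rough illustration, the transfer-eval output described above could be inspected as follows; the file names come from the paragraph above, but the exact directory layout and CSV columns are not guaranteed:

```python
# Sketch: read the transfer-eval artifacts (eval.csv and cfg.yaml).
# Assumes pandas and pyyaml are installed; column names will vary.
from pathlib import Path
import pandas as pd
import yaml

eval_dir = Path.home() / "elk-reporters" / "naughty-northcutt" / "transfer_eval"
results = pd.read_csv(next(eval_dir.rglob("eval.csv")))  # per-layer metrics
config = yaml.safe_load(next(eval_dir.rglob("cfg.yaml")).read_text())
print(results.head())
print(config)
```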

The following will generate a CCS reporter instead of the Eigen reporter, which is the default.
## Caching

```bash
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
```
The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually `~/.cache/huggingface/datasets`.
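If the default location is inconvenient, the standard HuggingFace environment variable can point the cache elsewhere; this is a general `datasets` feature, not something specific to elk:

```python
# Sketch: relocate the HuggingFace datasets cache (and with it the cached
# hidden states) by setting HF_DATASETS_CACHE before `datasets` is imported.
import os

os.environ["HF_DATASETS_CACHE"] = "/scratch/hf_datasets"  # example path
```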

## Development
Use `pip install pre-commit && pre-commit install` in the root folder before your first commit.
11 changes: 5 additions & 6 deletions elk/extraction/prompt_dataset.py
@@ -44,8 +44,7 @@ class PromptConfig(Serializable):
dataset: Space-delimited name of the HuggingFace dataset to use, e.g.
`"super_glue boolq"` or `"imdb"`.
balance: Whether to force class balance in the dataset using undersampling.
data_dir: The directory to use for caching the dataset. Defaults to
`~/.cache/huggingface/datasets`.
data_dir: The directory to use for caching the dataset.
label_column: The column containing the labels. By default, we infer this from
the datatypes of the columns in the dataset; if there is only one column
with a `ClassLabel` datatype, we use that.
@@ -56,12 +55,12 @@ class PromptConfig(Serializable):
max_examples: The maximum number of examples to use from the val dataset.
If a single number, use at most that many examples for each split. If a list
of length 2, use the first element for the train split and the second for
the val split. If empty, use all examples. Defaults to empty.
the val split. If empty, use all examples.
num_shots: The number of examples to use in few-shot prompts. If zero, prompts
are zero-shot. Defaults to 0.
seed: The seed to use for prompt randomization. Defaults to 42.
are zero-shot.
seed: The seed to use for prompt randomization.
num_variants: The number of prompt templates to apply to each predicate upon
call to __getitem__. Use -1 to apply all available templates. Defaults to 1.
call to __getitem__. Use -1 to apply all available templates.
"""

dataset: str = field(positional=True)
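For illustration, a config with the fields documented above might be constructed as follows; the import path is inferred from the file location `elk/extraction/prompt_dataset.py`, and the exact defaults are not guaranteed:

```python
# Sketch: constructing a PromptConfig with the fields documented above.
# Import path inferred from the file location; verify against the repo.
from elk.extraction.prompt_dataset import PromptConfig

cfg = PromptConfig(
    dataset="super_glue boolq",  # space-delimited HuggingFace dataset name
    balance=True,                # undersample to force class balance
    max_examples=[1000, 200],    # caps for the train / val splits
    num_shots=0,                 # zero-shot prompts
    seed=42,
    num_variants=-1,             # apply every available prompt template
)
```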
