diff --git a/README.md b/README.md
index 3f2e9973..c021a3ca 100644
--- a/README.md
+++ b/README.md
@@ -4,7 +4,7 @@

 Because language models are trained to predict the next token in naturally occurring text, they often reproduce common human errors and misconceptions, even when they "know better" in some sense. More worryingly, when models are trained to generate text that's rated highly by humans, they may learn to output false statements that human evaluators can't detect. We aim to circumvent this issue by directly [**eliciting latent knowledge**](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) (ELK) inside the activations of a language model.

-Specifically, we're building on the **Contrast Consistent Search** (CCS) method described in the paper [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) by Burns et al. (2022). In CCS, we search for features in the hidden states of a language model which satisfy certain logical consistency requirements. It turns out that these features are often useful for question-answering and text classification tasks, even though the features are trained without labels.
+Specifically, we're building on the **Contrastive Representation Clustering** (CRC) method described in the paper [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) by Burns et al. (2022). In CRC, we search for features in the hidden states of a language model which satisfy certain logical consistency requirements. It turns out that these features are often useful for question-answering and text classification tasks, even though the features are trained without labels.

 ### Quick **Start**

@@ -20,29 +20,21 @@ elk elicit microsoft/deberta-v2-xxlarge-mnli imdb
 This will automatically download the model and dataset, run the model and extract the relevant representations if they aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `elk-reporters` folder in your home directory. It will also evaluate the reporter classification performance on a held out test set and save it to a CSV file in the same folder.

+The following will generate a CCS (Contrast Consistent Search) reporter instead of the CRC-based reporter, which is the default.
+
 ```bash
-elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
+elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
 ```

-This will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an `eval.csv` and `cfg.yaml` file, which are stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`.
-
-## Caching
-
-The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually `~/.cache/huggingface/datasets`.
-
-## Other commands
-
-To only extract the hidden states for the model `model` and the dataset `dataset` and save them to `my_output_dir`, without training any reporters, you can run:
+The following command will evaluate the probe from the run naughty-northcutt on the hidden states extracted from the model deberta-v2-xxlarge-mnli for the imdb dataset. It will result in an `eval.csv` and `cfg.yaml` file, which are stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`.

 ```bash
-elk extract microsoft/deberta-v2-xxlarge-mnli imdb -o my_output_dir
+elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
 ```

-The following will generate a CCS reporter instead of the Eigen reporter, which is the default.
+## Caching

-```bash
-elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
-```
+The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is usually `~/.cache/huggingface/datasets`.

 ## Development

 Use `pip install pre-commit && pre-commit install` in the root folder before your first commit.
diff --git a/elk/extraction/prompt_dataset.py b/elk/extraction/prompt_dataset.py
index 0a120743..002674e9 100644
--- a/elk/extraction/prompt_dataset.py
+++ b/elk/extraction/prompt_dataset.py
@@ -44,8 +44,7 @@ class PromptConfig(Serializable):
         dataset: Space-delimited name of the HuggingFace dataset to use, e.g.
             `"super_glue boolq"` or `"imdb"`.
         balance: Whether to force class balance in the dataset using undersampling.
-        data_dir: The directory to use for caching the dataset. Defaults to
-            `~/.cache/huggingface/datasets`.
+        data_dir: The directory to use for caching the dataset.
         label_column: The column containing the labels. By default, we infer this
             from the datatypes of the columns in the dataset; if there is only one
             column with a `ClassLabel` datatype, we use that.
@@ -56,12 +55,12 @@ class PromptConfig(Serializable):
         max_examples: The maximum number of examples to use from the val dataset.
             If a single number, use at most that many examples for each split. If
             a list of length 2, use the first element for the train split and the second for
-            the val split. If empty, use all examples. Defaults to empty.
+            the val split. If empty, use all examples.
         num_shots: The number of examples to use in few-shot prompts. If zero, prompts
-            are zero-shot. Defaults to 0.
-        seed: The seed to use for prompt randomization. Defaults to 42.
+            are zero-shot.
+        seed: The seed to use for prompt randomization.
         num_variants: The number of prompt templates to apply to each predicate upon
-            call to __getitem__. Use -1 to apply all available templates. Defaults to 1.
+            call to __getitem__. Use -1 to apply all available templates.
     """

     dataset: str = field(positional=True)
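For reference, here is a minimal sketch of how the `PromptConfig` fields documented in the patched docstring might be used from Python. The import path mirrors the file touched above (`elk/extraction/prompt_dataset.py`); the concrete argument values, and the assumption that the omitted fields (`data_dir`, `label_column`, `seed`) keep their defaults, are illustrative only and not guaranteed by this diff.

```python
# Illustrative sketch only: field names come from the PromptConfig docstring in
# this diff; the specific values and defaults below are assumptions, not part
# of the patch.
from elk.extraction.prompt_dataset import PromptConfig

cfg = PromptConfig(
    dataset="super_glue boolq",  # space-delimited HuggingFace dataset name
    balance=True,                # force class balance via undersampling
    max_examples=[1000, 200],    # caps for the train and val splits
    num_shots=0,                 # zero-shot prompts
    num_variants=-1,             # -1 applies every available prompt template
)
print(cfg)
```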