Evaluation

This directory contains a set of scripts to evaluate the performance of different presses on different datasets. We currently support several datasets, including RULER, Loogle, and Infinitebench (see the benchmark sections below).

Please refer to the README of each dataset for more information on how the corresponding Hugging Face dataset was generated.

Usage

To evaluate a press on a dataset, you can run the following command:

python evaluate.py --dataset <dataset_name> --data_dir <data_dir> --model <model_name> --press_name <press_name> --compression_ratio <ratio> 

For instance,

python evaluate.py --dataset loogle --data_dir shortdep_qa --model meta-llama/Meta-Llama-3.1-8B-Instruct --press_name expected_attention --compression_ratio 0.5
  • Results (predictions & metrics) are saved in the results directory.
  • All available presses are listed in the PRESS_DICT variable in evaluate.py (a sketch of how a press is selected and applied is given after this list).
  • Additional arguments are --device, --fraction, --max_new_tokens, --max_context_length and --compress_questions. For more information, run python evaluate.py --help.
  • Finally, we also provide a bash script evaluate.sh to facilitate the evaluation of multiple presses (one per GPU) with different compression ratios.
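
For reference, here is a minimal sketch of how a press name could be mapped to a press object and applied, assuming the kvpress pipeline API (ExpectedAttentionPress, SnapKVPress, and the kv-press-text-generation pipeline); the actual PRESS_DICT and evaluation logic live in evaluate.py and may differ:

```python
# Illustrative sketch only; see evaluate.py for the actual PRESS_DICT and logic.
from transformers import pipeline

from kvpress import ExpectedAttentionPress, SnapKVPress

# Hypothetical subset of PRESS_DICT, mapping --press_name to a press
PRESS_DICT = {
    "expected_attention": ExpectedAttentionPress,
    "snapkv": SnapKVPress,
}

press = PRESS_DICT["expected_attention"](compression_ratio=0.5)
pipe = pipeline(
    "kv-press-text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device_map="auto",
)

context = "..."   # long context from a dataset sample
question = "..."  # question from the same sample
answer = pipe(context, question=question, press=press)["answer"]
```

This mirrors the example command above (expected_attention at compression ratio 0.5); evaluate.py additionally loops over the dataset and writes predictions and metrics to the results directory.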

Benchmarks

We provide benchmark results for 7 presses and 3 models. We also include a variant of SnapKV that incorporates the question in the compression process, as in the original paper (snapkv w/ question). All performance curves can be found in the assets directory, and predictions are available here.

RULER

Average performance on the 13 tasks of the RULER dataset with 4k context length (per-task results here):

(Figure: average performance on RULER across compression ratios)

Observations:

  • snapkv w/ question consistently outperforms other methods. However, this method can't be used for use cases such as prompt caching, as it requires the question to be known beforehand.
  • All presses show degradation in performance even for small compression ratios.
  • llama3.1-8b-instruct is more robust to compression than the other models, and expected attention performs better than the other presses.
  • mistral-nemo-instruct-2407 is more robust to random pruning than other models.
  • For phi-3.5-mini and mistral-nemo-instruct-2407, all presses perform poorly compared to baseline presses such as random (remove KV pairs randomly) or streaming llm (remove the middle KV pairs). This is especially true for the subtask niah_single_3, where most presses fail to properly copy a long needle from the haystack. This might be related to induction heads.
  • For phi-3.5-mini, we ran an additional experiment with a different compression ratio per layer (as in this notebook), which largely outperformed its uniform-compression counterpart (see the purple cross on the 2nd plot). The ratios were determined by grid search on 20/6500 samples from RULER, so the results should be taken with a grain of salt. A toy per-layer schedule is sketched below.
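
To make the per-layer idea concrete, here is a toy sketch of a non-uniform ratio schedule; the values are placeholders, not the grid-searched ratios, and wiring such a schedule into a press follows the linked notebook:

```python
# Toy per-layer compression schedule (placeholder values, NOT the grid-searched
# ratios from the experiment): compress early layers less and later layers more,
# while keeping the same average compression ratio as the uniform baseline.
num_layers = 32  # number of decoder layers (e.g. phi-3.5-mini)

uniform_ratios = [0.25] * num_layers
per_layer_ratios = [0.10 if layer < 8 else 0.30 for layer in range(num_layers)]

# Same overall KV cache budget, different allocation across layers
assert abs(sum(per_layer_ratios) / num_layers - 0.25) < 1e-9
```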

Loogle

(Figures: shortdep_qa, shortdep_cloze, longdep_qa)

Observations:

  • Metrics are adapted from the Loogle benchmark, see here. The plots show the average score (mean over all submetrics) for each task.
  • The metrics are not always correlated with the quality of the answer, especially for the longdep_qa task. LLM-as-a-judge may be better suited for a more refined evaluation.
  • Again, snapkv w/ question consistently outperforms other methods.
  • In longdep_qa, the model loses track when counting (e.g., the answer to "How many times is person x mentioned?" decreases as the compression ratio increases). This is not necessarily reflected in the metrics.
  • Llama3.1-8b-instruct seems to be more robust to compression than the other models.
  • For observed attention, the context had to be truncated to 10,000 tokens to prevent OOM issues, as this press requires materializing the attention matrix.
  • For the shortdep_cloze task, the output formatting is often ignored, leading to performance degradation even at low compression ratios. Interestingly, the model may still answer the question correctly.
  • mistral-nemo-instruct-2407 fails to perform well on the shortdep_cloze task, even without compression, and is thus not reported.
  • The shortdep_cloze task runs OOM for phi-3.5-mini at compression ratio 0.0 and is thus missing.

Infinitebench

(Figures: kv_retrieval, longbook_choice_eng, longbook_qa_eng, longdialogue_qa_eng)

Observations:

  • All tasks were run with max_len=70_000 tokens, except for observed attention, which used 10_000 tokens.
  • For the kv-retrieval subtask, streaming LLM (keep the last N tokens) performs better than the other methods. While this may be surprising at first, the task format (Extract the value corresponding to the specified key in the JSON object below. JSON data: {"7de93460-b65f-404e-9a7d-af2da2c8abb5": "2d9ab7c8-394a-4062-9928-310e39201a2f", ...}. Key: "70d1b207-d1e8-4591-95b8-9c85aceb8956") helps to explain this behavior: the information is homogeneously distributed in the context, and any token could be relevant to the question. Streaming LLM keeps a contiguous block of the most recent tokens, while the other methods will potentially create "holes".
  • Mistral-nemo-instruct-2407 performs poorly on kv-retrieval subtask compared to other models and is thus excluded from the plots.
  • For longbook-choice-eng, many presses maintain performance even at high compression ratios, making it an example of a task that can be compressed effectively.
  • For longbook-qa-eng, expected attention and snapkv perform better than the other methods (note the performance gap between llama3.1-8b-instruct and phi3.5/mistral-nemo).
  • For longdialogue-qa-eng, there is an interesting crossover between the different compression methods. At higher compression ratios, snapkv performs relatively well across models.

Conclusions

The methods benchmarked so far are not able to efficiently compress the KV cache while maintaining performance across the long-context datasets and models evaluated here. In particular, exact information-retrieval tasks such as kv-retrieval remain challenging for current methods. Several further directions could be explored:

  • {Layer,Head}-wise pruning: pruning with a different compression ratio for each layer or head, as in DMC, FastGen or DuoAttention
  • Adaptive pruning: pruning based on a score rather than a uniform fixed ratio (a toy comparison is sketched after this list)
  • Taking into account inter-layer dependencies, such as in PyramidKV
  • Moving beyond pruning, as this approach is fundamentally limited (see the last figure in this notebook)
  • Fine-tuning LLMs to deal with compressed KV caches
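
As a rough illustration of the difference between a fixed-ratio budget and score-based (adaptive) pruning, here is a minimal sketch using hypothetical importance scores (not the scores of any specific press):

```python
import torch


def uniform_prune(scores: torch.Tensor, compression_ratio: float) -> torch.Tensor:
    """Keep the top (1 - compression_ratio) fraction of KV pairs per head (fixed budget)."""
    n_kept = int(scores.shape[-1] * (1 - compression_ratio))
    keep = scores.topk(n_kept, dim=-1).indices
    return torch.zeros_like(scores, dtype=torch.bool).scatter_(-1, keep, True)


def adaptive_prune(scores: torch.Tensor, threshold: float) -> torch.Tensor:
    """Keep every KV pair whose score exceeds a threshold: the number of kept
    pairs now varies per head and layer instead of being a fixed fraction."""
    return scores >= threshold


# Hypothetical importance scores for each KV pair, shape (num_heads, seq_len)
scores = torch.rand(8, 1024)
print(uniform_prune(scores, 0.5).float().mean())   # exactly 0.5 of pairs kept
print(adaptive_prune(scores, 0.5).float().mean())  # ~0.5 kept, varies with the scores
```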

We encourage contributions to explore these ideas and improve the performance of long-context LLMs with compressed caches.

How to add a dataset

Each dataset directory is structured as follows:

$dataset
├── README.md
├── calculate_metrics.py
├── create_huggingface_dataset.py

Where:

  • create_huggingface_dataset.py is a script that generates the Hugging Face dataset from the original dataset. Each dataset is associated with a set of parquet files with the following fields (a minimal sketch is given after this list):
    • context: ...
    • question: ...
    • answer_prefix: ...
    • answer: ...
    • max_new_tokens: ...
  • calculate_metrics.py is a script that calculates the metrics based on the output of evaluate.py.
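
For reference, here is a minimal sketch of what a create_huggingface_dataset.py could look like for a hypothetical new dataset; the field values and output path are placeholders, and the real scripts in this directory download and preprocess the original benchmark data instead:

```python
# Minimal sketch for a hypothetical new dataset; see the existing dataset
# directories for real examples.
from datasets import Dataset

records = [
    {
        "context": "Long document that the press will compress...",
        "question": "A question about the document",
        "answer_prefix": "Answer: ",
        "answer": "The expected answer",
        "max_new_tokens": 128,
    }
]

dataset = Dataset.from_list(records)
dataset.to_parquet("my_task.parquet")  # placeholder path; could also use push_to_hub
```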