
Upstream checkpoint 2 #6

Open
sedrick-keh-tri wants to merge 21 commits into base: based-fork-3

Conversation

sedrick-keh-tri

No description provided.

davidbhoffmann and others added 21 commits February 15, 2024 09:39
…shot 0% -> 42%) (EleutherAI#1356)

* update bbh, gsm8k, mmlu parsing logic and prompts

* remove the formatting prompt (bbh) + minor update (mmlu)

* update bbh, gsm8k, mmlu zeroshot, revert fewshots

* update bbh, gsm8k, mmlu version, forward changes to gsm8k-cot

* remove take_last, update to use docs parameters

* add newline

* ruff formatting

* Update pyproject.toml

* fix format

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
* haerae_reimplementation

* edited Readme and add few_shot settings

* edited readme

* newlines at end of each files

* Modifying the README file

* applied pre-commit
* add key lookup for same contexts

* nit

* appease pre-commit

* nit

* use `expand` (in-place view) rather than `repeat`
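
  For context, a minimal PyTorch sketch of why `expand` is preferred here (tensor names are made up): `expand` returns a broadcasted view over the same storage, while `repeat` materializes a full copy.

  ```python
  import torch

  ctx = torch.arange(6).reshape(1, 6)  # a single tokenized context

  # expand: no data copy, just a stride-0 view along the broadcast dimension
  view = ctx.expand(4, 6)

  # repeat: allocates a new tensor four times the size
  copy = ctx.repeat(4, 1)

  assert view.shape == copy.shape == (4, 6)
  assert view.data_ptr() == ctx.data_ptr()   # same underlying storage
  assert copy.data_ptr() != ctx.data_ptr()   # new allocation
  ```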

* try mixed grouping

* add docs.

* nit

* nit

* nits

* fix tests

* Move greedy_tokens calculation out of cache loop

* nit

* nits

* add test

* nits

* fix name conflict

* fix name conflict

* chunk tensor

* move Collator

* nits/docstring

* fixup

* fixup

* group contexts only for decoders

* pre-commit

* fix `generate_until` test

* fix `generate_until` test

* Update lm_eval/models/huggingface.py

Co-authored-by: Hailey Schoelkopf <[email protected]>

* add docs

* nit

* add docs

* add docs

* add 'logits_cache' arg

* bugfix

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
* add new task GPQA_n_shot

* add new task GPQA_zeroshot

* correct GPQA_zeroshot filename

* Add randomly shuffle choices

* Correct missing parentheses

* delete wrong tasks

* Add README

* Update lm_eval/tasks/gpqa/zeroshot/_gpqa_zeroshot_yaml

* Update lm_eval/tasks/gpqa/n_shot/utils.py

* Update lm_eval/tasks/gpqa/n_shot/utils.py

* Update lm_eval/tasks/gpqa/README.md

* placate linter

* linter

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
* update kmmlu default formatting

* Update _default_kmmlu_yaml

* Delete lm_eval/tasks/kmmlu/utils.py

* new tasks implemented

* add direct tasks

* update direct evaluate

* update direct eval

* add cot sample

* update cot

* add cot

* Update _cot_kmmlu_yaml

* add kmmlu90

* Update and rename _cot_kmmlu.yaml to _cot_kmmlu_yaml

* Create kmmlu90.yaml

* Update _cot_kmmlu_yaml

* add direct

* Update _cot_kmmlu_yaml

* Update and rename kmmlu90.yaml to kmmlu90_cot.yaml

* Update kmmlu90_direct.yaml

* add kmmlu hard

* Update _cot_kmmlu_yaml

* Update _cot_kmmlu_yaml

* update cot

* update cot

* erase typo

* Update _cot_kmmlu_yaml

* update cot

* Rename dataset to match k-mmlu-hard

* removed kmmlu90

* fixed name 'kmmlu_cot' to 'kmmlu_hard_cot' and revised README

* applied pre-commit before pull requests

* rename datasets and add notes

* Remove DS_Store cache

* Update lm_eval/tasks/kmmlu/README.md

Co-authored-by: Hailey Schoelkopf <[email protected]>

* Change citations and reflect reviews on version

* Added kmmlu_hard and fixed other errors

* fixing minor errors

* remove duplicated

* Rename files

* try ".index"

* minor fix

* minor fix again

* fix revert.

* minor fix. thanks to Hailey

---------

Co-authored-by: GUIJIN SON <[email protected]>
Co-authored-by: Hailey Schoelkopf <[email protected]>
* loglikelihood refactor using template lm

* linter

* fix whitespace in target + prompt for CoT gsm8k (EleutherAI#1275)

* Make `parallelize=True` vs. `accelerate launch` distinction clearer in docs (EleutherAI#1261)

* Make parallelize=True distinction clearer in documentation.

* run linter

* Allow parameter edits for registered tasks when listed in a benchmark (EleutherAI#1273)

* benchmark yamls allow minor edits of already registered tasks

* add documentation

* removed print

* Fix data-parallel evaluation with quantized models (EleutherAI#1270)

* add WIP device_map overrides

* update handling outside of accelerate launcher

* change .to(device) log to debug level

* run linter

* Rework documentation for explaining local dataset (EleutherAI#1284)

* rework documentation for explaining local dataset

* fix typo

* Update new_task_guide.md

* Re-add citation

It looks like Google Scholar has [already noticed](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C9&authuser=2&q=%22A+framework+for+few-shot+language+model+evaluation%2C+12+2023%22&btnG=) the updated citation block so let's add it back in.

* Update CITATION.bib (EleutherAI#1285)

Bumping CITATION.bib to match re-adding the citation in readme. 

cc @StellaAthena

* Update nq_open.yaml (EleutherAI#1289)

* Update README.md with custom integration doc (EleutherAI#1298)

* Update README.md

* punctuation

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>

* Update nq_open.yaml (EleutherAI#1305)

* Update nq_open.yaml

change regex

* Bump NQ version

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>

* Update task_guide.md (EleutherAI#1306)

* Update pyproject.toml (EleutherAI#1312)

* Fix polemo2_in.yaml config name (EleutherAI#1313)

* Update pyproject.toml (EleutherAI#1314)

* Fix group register (EleutherAI#1315)

* tuple should be considered as well

* set option to keep callable as callable

* Update task_guide.md (EleutherAI#1316)

* Update polemo2_in.yaml (EleutherAI#1318)

* don't pass extra kwargs to mamba any more (EleutherAI#1328)

* Fix Issue regarding stderr (EleutherAI#1327)

* add fix for deciding if stderr is N/A or not

* process N/A

* Add `local-completions` support using OpenAI interface (EleutherAI#1277)

* Add `local-completions` support using OpenAI interface
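
  A hedged usage sketch of the new backend through the Python API (endpoint URL, model name, and argument keys below are placeholders inferred from the commit titles, not guaranteed to match the final signature):

  ```python
  import lm_eval

  # "local-completions" points the OpenAI-style completions client at a locally
  # served, OpenAI-compatible endpoint instead of api.openai.com.
  results = lm_eval.simple_evaluate(
      model="local-completions",
      model_args="model=facebook/opt-125m,base_url=http://localhost:8000/v1",
      tasks=["gsm8k"],
  )
  ```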

* Refactor oa_completion

* Address tokenizer comments and change request chunks to batch size

* Add warning message for tiktoken backend

* fix formatting

* fix whitespace

* Update README.md

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>

* fallback to classname when LM doesn't have config (EleutherAI#1334)

* fix a trailing whitespace that breaks a lint job (EleutherAI#1335)

* skip "benchmarks" in changed_tasks (EleutherAI#1336)

* Update migrated HF dataset paths (EleutherAI#1332)

* Update arc_easy.yaml

* Update flan_cot.yaml

* update HF dataset path

* Update freeform.yaml

* Update flan_cot.yaml

---------

Co-authored-by: Lintang Sutawika <[email protected]>

* Don't use `get_task_dict()` in task registration / initialization (EleutherAI#1331)

* don't use get_task_dict() as a helper, it will download the dataset!

* pre-commit

* Update README.md

---------

Co-authored-by: lintangsutawika <[email protected]>

* manage default (greedy) gen_kwargs in vllm (EleutherAI#1341)

* manage default (greedy) gen_kwargs in vllm better

* mirror HF `do_sample`

* just need to set temp=0 for greedy

* modified default gen_kwargs to work better with CLI; changed prompt_logprobs=1 (EleutherAI#1345)

* update links to task_guide.md (EleutherAI#1348)

* `Filter` docs not offset by `doc_id`  (EleutherAI#1349)

* get `doc` from instance

* accelerate bugfix: get ground doc from instance

* convert filter to `process_result`

* get docs from instances in `FilterEnsemble`

* rename

* nit

* better looping

* fix typehint

* Add FAQ on `lm_eval.tasks.initialize_tasks()` to README (EleutherAI#1330)

* Update README.md

* [!Tip]
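
  The FAQ referenced above boils down to one call when driving the harness from Python at this version (a minimal sketch; the CLI entry point performs this registration on startup):

  ```python
  import lm_eval.tasks

  # Must run once before tasks can be looked up by name via the Python API.
  lm_eval.tasks.initialize_tasks()
  ```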

* Refix issue regarding stderr (EleutherAI#1357)

* Add causalLM OpenVino models (EleutherAI#1290)

* added intel optimum

* added intel optimum in readme

* modified intel optimum

* modified intel optimum

* modified intel optimum

* modified install optimum

* modified path of IR file

* added openvino_device

* added openvino_device2

* changed optimum-causal to openvino-causal

* Update README.md

* Update README.md

* remove `lm_eval.base` import

* update openvino-causal -> openvino ; pass device through super().__init__()

* Update README.md

* Add optimum to tests dependencies

* apply pre-commit

* fix so tests pass

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
Co-authored-by: haileyschoelkopf <[email protected]>

* Apply some best practices and guideline recommendations to code (EleutherAI#1363)

* raise Exception, not a string

Additional info https://peps.python.org/pep-0352/#exception-hierarchy-changes
https://docs.python.org/3.8/tutorial/errors.html#raising-exceptions
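
For illustration only (a made-up snippet, not lines from the diff):

```python
def check_output_type(output_type: str) -> None:
    if output_type not in ("loglikelihood", "generate_until"):
        # Before: `raise "unsupported output type"` is itself a TypeError in
        # Python 3, since only Exception subclasses (or instances) can be raised.
        # After: raise a proper exception class, per PEP 352.
        raise ValueError(f"unsupported output type: {output_type}")
```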

* Apply PEP8 recommendation to prefer isinstance

"Object type comparisons should always use isinstance() instead of comparing types directly"
https://peps.python.org/pep-0008/
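
A generic illustration (not taken from the diff):

```python
config = {"metric": "acc"}

# Discouraged: direct type comparison ignores subclasses and reads poorly
if type(config) == dict:
    pass

# Preferred per PEP 8
if isinstance(config, dict):
    pass
```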

* Remove dangerous default mutable values in arguments

https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/dangerous-default-value.html
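
The classic pitfall the commit refers to, shown with hypothetical names:

```python
# Risky: the default list is created once and shared across every call
def add_task(task, tasks=[]):
    tasks.append(task)
    return tasks

# Safe: use None as a sentinel and build a fresh list per call
def add_task_fixed(task, tasks=None):
    if tasks is None:
        tasks = []
    tasks.append(task)
    return tasks
```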

* Format logging messages with fstring (not with format)

Additional info
https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/logging-format-interpolation.html
There are also discussions about the performance cost of formatting while logging and about unintended code execution
pylint-dev/pylint#2395
https://stackoverflow.com/a/54368109
but at least one format (the f-string one) will be used consistently throughout the project
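
Illustrative only (the logger and variable names are made up):

```python
import logging

eval_logger = logging.getLogger("lm-eval")
num_tasks = 3

# Before: str.format interpolation
eval_logger.info("Selected {} tasks".format(num_tasks))

# After: f-string, the single style adopted for the project
eval_logger.info(f"Selected {num_tasks} tasks")
```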

* Specify utf-8 encoding for `open` explicitly

If not specified, the encoding may be assumed differently in different environments, OSes, and Python versions. See
https://peps.python.org/pep-0597/
https://docs.python.org/3.11/library/locale.html#locale.getencoding
https://docs.python.org/3.10/library/os.html#utf8-mode
https://pylint.readthedocs.io/en/stable/user_guide/messages/warning/unspecified-encoding.html

This also helps when code from English-language tasks is taken as inspiration for tasks in non-English languages.
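
A minimal sketch (hypothetical file path):

```python
from pathlib import Path

path = Path("prompts.txt")  # hypothetical file
path.write_text("Q: 2+2?\nA:", encoding="utf-8")

# Before: the encoding silently depends on the host locale
#     with open(path) as f: ...

# After: explicit UTF-8, identical behavior on every OS and Python version
with open(path, encoding="utf-8") as f:
    prompts = f.read()
```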

* Use inline-ignoring comments to pass pre-commit instead of identity workarounds

https://flake8.pycqa.org/en/3.0.1/user/ignoring-errors.html#in-line-ignoring-errors
https://www.flake8rules.com/rules/F841.html

flake8 comments are supported by ruff: https://docs.astral.sh/ruff/linter/#error-suppression
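
A hypothetical example of the inline suppression style:

```python
def build_request(doc):
    # The variable is intentionally unused here; instead of a dummy identity
    # statement, F841 is suppressed inline. ruff honors flake8-style codes.
    ctx = doc.get("context", "")  # noqa: F841
    return doc["question"]
```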

* serialize callable functions in config (EleutherAI#1367)

* delay filter init; remove `*args` (EleutherAI#1369)

* delay filter init; remove `*args`

* bugfix

* optimize

* type hint

* Fix unintuitive `--gen_kwargs` behavior (EleutherAI#1329)

* don't override do_sample if no value for it is passed

* Update gen_kwargs override condition

* Update huggingface.py

* Update huggingface.py

* run linters

* silence an erroneous warning
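
A rough sketch of the guarded override the titles above describe (function and variable names are assumptions, not the harness's exact code): greedy defaults are only applied when the user did not pass `do_sample` themselves.

```python
def normalize_gen_kwargs(gen_kwargs: dict) -> dict:
    """Only force greedy decoding when the caller did not request sampling."""
    kwargs = dict(gen_kwargs)
    if "do_sample" not in kwargs and kwargs.get("temperature", 0.0) == 0.0:
        kwargs["do_sample"] = False
    return kwargs


print(normalize_gen_kwargs({"temperature": 0.0}))                     # do_sample forced to False
print(normalize_gen_kwargs({"do_sample": True, "temperature": 0.7}))  # left untouched
```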

* Publish to pypi (EleutherAI#1194)

* publish to pypi

* lint

* Update publish.yml

* minor

* Make dependencies compatible with PyPI (EleutherAI#1378)

* make deps not point to github urls

* formatting

* try making PyPI only run on tag pushes

* Add support for RWKV models with World tokenizer (EleutherAI#1374)

* Add support for RWKV models with World tokenizer

The RWKV line of models with the World tokenizer does not allow the padding token to be configured and has its value preset to 0.

This, however, fails all the "if set" checks and would cause the tokenizer to crash.

A tokenizer class name check was added, in addition to a model type check, as there exist RWKV models which use the NeoX tokenizers.

* Update huggingface.py

Genericized so that this supports any RWKVWorld tokenizer, and added a fallback in case the HF implementation name changes.
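
A hedged sketch of the kind of check described above (function and attribute names are assumptions, not the actual diff): detect a World tokenizer by its class name and use its fixed padding id of 0 instead of the usual "if set" lookup.

```python
def resolve_pad_token_id(tokenizer) -> int:
    # RWKV "World" tokenizers hard-code the pad token id to 0 and do not
    # expose it through the usual pad_token_id attribute checks.
    if "RWKVWorld" in type(tokenizer).__name__:
        return 0
    if tokenizer.pad_token_id is not None:
        return tokenizer.pad_token_id
    # fall back to EOS, a common default for decoder-only models
    return tokenizer.eos_token_id
```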

* Comply with formatting guidelines

* fix format

---------

Co-authored-by: Stella Biderman <[email protected]>
Co-authored-by: Hailey Schoelkopf <[email protected]>

* add bypass metric (EleutherAI#1156)

* add bypass metric

* fixed `bypass` metric.

* add task attributes if predict_only

* add `predict_only` checks

* add docs

* added `override_metric`, `override_config` to `Task`

* nits

* nit

* changed --predict_only to generations; nits

* nits

* nits

* change gen_kwargs warning

* add note about `--predict_only` in README.md

* added `predict_only`

* move table to bottom

* nit

* change null aggregation to bypass (conflict)

* bugfix; default `temp=0.0`

* typo

* loglikelihood refactor using template lm

* lint

* code review

* neuron optimum

* Mention TemplateLM in model_guide.md

* Update lm_eval/api/model.py

* fix linter

* fix format

* fix format

* fix format

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
Co-authored-by: Lintang Sutawika <[email protected]>
Co-authored-by: Stella Biderman <[email protected]>
Co-authored-by: Mark Saroufim <[email protected]>
Co-authored-by: Hannibal046 <[email protected]>
Co-authored-by: Danielle Pintz <[email protected]>
Co-authored-by: Quentin Lhoest <[email protected]>
Co-authored-by: kwrobel.eth <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Brian Vaughan <[email protected]>
Co-authored-by: Baber Abbasi <[email protected]>
Co-authored-by: thnkinbtfly <[email protected]>
Co-authored-by: NoushNabi <[email protected]>
Co-authored-by: haileyschoelkopf <[email protected]>
Co-authored-by: LSinev <[email protected]>
Co-authored-by: Eugene Cheah <[email protected]>
* log group membership

* no stray prints

* Update evaluator.py
…EleutherAI#1440)

* fix the issue EleutherAI#1391, wrong contexts in mgsm tasks

* fix yaml issue for having two target_delimiter lines. For COT tasks, keep the one with a space (default)

* regenerate all task yaml files
- change naming so that the file name matches the task name
- task/file names follow a consistent scheme, mgsm_(mode)_(lang), for the three modes, i.e., direct, en_cot, and native_cot

* English CoTs should have a space as target_delimiter

* Update utils.py

* Apply suggestions from code review

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
* add wandb as extra dependency

* wandb metrics logging

* refactor

* log samples as tables

* fix linter

* refactor: put in a class

* change dir

* add panels

* log eval as table

* improve tables logging

* improve reports logging

* precommit run

* ruff check

* handle importing reports api gracefully

* ruff

* compare results

* minor pre-commit fixes

* build comparison report

* ruff check

* log results as artifacts

* remove comparison script

* update dependency

* type annotate and docstring

* add example

* update readme

* fix typo

* teardown

* handle outside wandb run

* gracefully fail reports creation

* precommit checks

* add report url to summary

* use wandb printer for better URL stdout

* fix ruff

* handle N/A and groups

* fix eval table

* remove unused var

* update wandb version req + disable reports stdout

* remove reports feature to TODO

* add label to multi-choice question data

* log model predictions

* lints

* loglikelihood_rolling

* log eval result for groups

* log tables by group for better handling

* precommit

* choices column for multi-choice

* gracefully fail wandb

* remove reports feature

* track system metrics + total eval time + stdout
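
Putting the pieces above together, a hedged sketch of how the logger is meant to be driven from Python (class location and method names are assumptions based on the commit titles):

```python
from lm_eval.logging_utils import WandbLogger

# results = lm_eval.simple_evaluate(...)  # produced elsewhere, with log_samples=True
wandb_logger = WandbLogger(project="lm-eval-harness", job_type="eval")
wandb_logger.post_init(results)
wandb_logger.log_eval_result()
wandb_logger.log_eval_samples(results["samples"])
```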

---------

Co-authored-by: Lintang Sutawika <[email protected]>
…erAI#1458)

* Fixed generation args issue affecting openai completion model

* Fixed hf unit test; removed pop attributes in OpenAI completion.

* fix format

* fix format

---------

Co-authored-by: Hailey Schoelkopf <[email protected]>
…utherAI#1464)

* Save git_hash to results even if git is not available to call as subprocess

* Store more info about environment and transformers version in results to help researchers track inconsistencies

* moved added logging to logging_utils

* moved get_git_commit_hash to logging_utils.py

* moved add_env_info inside evaluator
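
A minimal sketch of the helper described above, assuming this shape (per the titles it lives in logging_utils): return the commit hash when git is callable, otherwise None instead of crashing.

```python
import subprocess


def get_git_commit_hash():
    """Return the current git commit hash, or None when git is unavailable."""
    try:
        git_hash = subprocess.check_output(["git", "describe", "--always"]).strip()
        git_hash = git_hash.decode()
    except (subprocess.CalledProcessError, FileNotFoundError):
        # git not installed, or not running inside a git checkout
        git_hash = None
    return git_hash
```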
* add arabic mmlu

* update the description

* add readme file
)

* add add_bos_token to HFLM

* add BOS token flag to other local model classes
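
A hedged usage sketch (the model name is a placeholder): the new flag is just another HFLM constructor argument, so it can be passed through `model_args` or directly in Python.

```python
from lm_eval.models.huggingface import HFLM

# add_bos_token is forwarded to tokenization so models that expect a leading
# BOS token are scored with it present.
lm = HFLM(pretrained="EleutherAI/pythia-160m", add_bos_token=True)
```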

---------

Co-authored-by: Lintang Sutawika <[email protected]>