Different evaluation frameworks for LLMs #105

Open
Anindyadeep opened this issue Oct 30, 2023 · 0 comments
Labels
content text & code

Comments

@Anindyadeep (Member)

Type

new chapter

Chapter/Page

eval-datasets

Description

The evaluation page is really good; however, it would be awesome if we could add some information on the following evaluation frameworks:

  1. HELM by Stanford.
  2. LM Evaluation Harness by EleutherAI.
  3. BigCode Evaluation Harness by the BigCode project.

The content should mainly cover how each framework approaches evaluation and how to get started with it (a quick getting-started sketch for one of them is below).
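
For a sense of what the "getting started" part could look like, here is a minimal sketch for the LM Evaluation Harness, assuming the `simple_evaluate` entry point from EleutherAI's `lm-evaluation-harness` (`pip install lm-eval`); the model and task names are illustrative, not prescriptive:

```python
# Minimal sketch: score one task with EleutherAI's LM Evaluation Harness.
# Model and task identifiers below are examples; the exact names depend
# on the harness version installed.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf",  # HuggingFace backend; older releases call this "hf-causal"
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["hellaswag"],  # any task registered with the harness
    num_fewshot=0,
)
print(results["results"])  # per-task metrics, e.g. accuracy
```

HELM and the BigCode Evaluation Harness are CLI-driven in a similar spirit, so each could get an analogous short snippet in the chapter.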
