
enhance evaluator by polishing prompt and adding info #181

Merged
merged 9 commits into main
Jun 1, 2024

Conversation

Monstertail
Collaborator

@Monstertail Monstertail commented Jun 1, 2024

Closes #167 Closes #147

πŸ“‘ Description

(1) Align the scoring criteria with human evaluation (#167). Thanks to @chengzr01 for the help.
(2) Change the parser to collect per-dimension scores (#147).
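
The parser change in (2) can be sketched roughly as follows. This is a hedged illustration only, not the PR's actual code: the function name `parse_dimension_scores` and the line-based `dimension: score` output format are assumptions about how the evaluator reports scores.

```python
import re

def parse_dimension_scores(text: str) -> dict[str, int]:
    """Collect per-dimension scores from evaluator output formatted as
    one 'dimension: score' pair per line (assumed format, for illustration).
    """
    scores = {}
    # Match lines like 'believability: 8' or 'goal : 7', one per line.
    for match in re.finditer(r"(?m)^\s*([A-Za-z_ ]+?)\s*:\s*(-?\d+)", text):
        dimension = match.group(1).strip().lower()
        scores[dimension] = int(match.group(2))
    return scores

example_output = """believability: 8
relationship: 2
goal: 7"""
print(parse_dimension_scores(example_output))
# {'believability': 8, 'relationship': 2, 'goal': 7}
```

Collecting a dict keyed by dimension (rather than a single aggregate score) is what lets downstream evaluation report each criterion separately.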

βœ… Checks

  • My pull request adheres to the code style of this project
  • My code requires changes to the documentation
  • I have updated the documentation as required
  • All the tests have passed
  • Branch name follows type/description (e.g. feature/add-llm-agents)
  • Ready for code review

β„Ή Additional Information

@Monstertail
Collaborator Author

I am testing the correctness of the code.

@Monstertail
Collaborator Author

All tests pass (including the real model-call tests in test_eval). Ready for code review. @lwaekfjlk

@Monstertail Monstertail requested a review from lwaekfjlk June 1, 2024 01:42
@lwaekfjlk
Member

Nice, I will check later.

@lwaekfjlk lwaekfjlk changed the title Feature/enhance evaluator collection: (1) align the score criteria with human-eval; (2) change parser to collect dimension scores. enhance evaluator by polishing prompt and adding info Jun 1, 2024
@lwaekfjlk lwaekfjlk merged commit faa8172 into main Jun 1, 2024
0 of 6 checks passed