
Metrics definition and settings used for LLM benchmarks #2012

Open · vineel96 opened this issue Jan 3, 2025 · 1 comment
vineel96 commented Jan 3, 2025

Hello,
I have a query regarding the metrics definitions and settings used for the LLM benchmarks in MLCommons (https://mlcommons.org/benchmarks/inference-datacenter/).
For the LLM Q&A task in the benchmark table at the above link:

  1. How are TPOT and TTFT calculated? Can you point me to the source code for them? For TPOT, how many tokens are generated? (A sketch of my rough understanding is below this list.)
  2. For the OpenOrca dataset, is the input prompt "system_prompt" + "question", or "question" only? The OpenOrca dataset has both columns.
  3. For the quality metrics in the table, is the ROUGE-1 value the precision, the recall, or the F-measure? (See the second sketch below.)
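
For reference on question 1, here is a minimal sketch of how I currently assume TTFT and TPOT are measured on the client side of a streaming request. This is only my guess, not the actual MLPerf LoadGen implementation; the function name and the token stream argument are hypothetical:

```python
import time

def measure_ttft_tpot(stream_tokens):
    """Measure TTFT and TPOT for one request.

    stream_tokens: any iterator that yields generated tokens one at a
    time (hypothetical; stands in for a real streaming inference API).
    Assumes at least one token is generated.
    """
    start = time.perf_counter()
    first_token_time = None
    n_tokens = 0

    for _ in stream_tokens:
        if first_token_time is None:
            # TTFT: wall-clock time from issuing the request
            # until the first output token arrives
            first_token_time = time.perf_counter()
        n_tokens += 1

    end = time.perf_counter()
    ttft = first_token_time - start
    # TPOT: average time per output token after the first one
    tpot = (end - first_token_time) / max(n_tokens - 1, 1)
    return ttft, tpot
```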
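
And on question 3, the reason I ask is that ROUGE scorers such as the rouge_score package (which I assume, perhaps wrongly, is what the reference evaluation uses) report all three values per metric. The example strings here are hypothetical:

```python
from rouge_score import rouge_scorer

# Toy strings, only to show that ROUGE-1 comes back as a
# (precision, recall, fmeasure) triple, hence the question.
reference = "the cat sat on the mat"
prediction = "the cat is on the mat"

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
score = scorer.score(reference, prediction)["rouge1"]
print(score.precision, score.recall, score.fmeasure)
```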

vineel96 commented Jan 6, 2025

Hi @arjunsuresh, do you have any information regarding this?
