I am planning to submit LLM models to the llm-perf-leaderboard to test various compression and quantization frameworks.
I saw the description: "Hardware/Backend/Optimization performance requests should be made in the llm-perf-backend repository and will be added to the 🤗 LLM-Perf Leaderboard 🏋️ automatically."
Which script(s) do I need to run to make a performance evaluation request in this repo? From a quick tour of the scripts, the benchmark script and the push-dataset script seem sufficient, but I would like to confirm with the maintainers in advance.
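To make the question concrete, here is the two-step workflow I currently assume, as a minimal sketch. The script names `scripts/benchmark.py` and `scripts/push_dataset.py` and the `--model` flag are my guesses from browsing the repo, not confirmed entry points:

```python
# Minimal sketch of my assumed workflow (script names and flags are
# guesses, not confirmed entry points of llm-perf-backend):
import subprocess

# 1. Run the benchmark for a given model/backend configuration.
subprocess.run(
    ["python", "scripts/benchmark.py", "--model", "my-org/my-model"],
    check=True,
)

# 2. Push the collected results to the dataset backing the leaderboard.
subprocess.run(["python", "scripts/push_dataset.py"], check=True)
```

If the actual request flow differs (e.g. results are collected by CI rather than pushed manually), please correct me.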