The result from open_llm_leaderboard is not as expected. #48
Comments
This is likely an issue with the auto-converted fast tokenizer. I've created an issue here.
@young-geng It looks like the issue in that repo was fixed last week. I'm assuming this can be retried now? (@chi2liu)
@c0bra There has not yet been a new release of huggingface/transformers since the fix was merged: https://github.com/huggingface/transformers/releases. I assume we still need to wait for that. The existing entries for OpenLLaMA on the leaderboard disappeared around a week ago as well. Maybe there is a connection: the leaderboard maintainers may have removed the results because they learned of the bug and are now waiting for the next release of huggingface/transformers. That's just my guess, though.
@codesoap Yeah, I've contacted the leaderboard maintainers to request a re-evaluation, and the model should be in the queue right now.
open-llama-7b-open-instruct is pending evaluation on the open_llm_leaderboard. They confirmed that they fine-tuned with
The OpenLLaMA 3B result is not pending. Is there any reason?
The open_llm_leaderboard has updated the results for open-llama-3b and open-llama-7b.
These results are much worse than llama-7b and do not match expectations. Is this because of the fast tokenizer issue mentioned in the documentation?
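To make the suspected cause concrete: a toy sketch (not the actual transformers code, and with made-up vocabularies) of why a mis-converted fast tokenizer would tank benchmark scores. If the fast tokenizer splits text differently than the slow sentencepiece tokenizer the model was trained with, the model receives token IDs it never saw during training, so evaluation accuracy drops even though the weights are fine.

```python
# Hypothetical vocabularies, purely for illustration.
slow_vocab = {"open": 1, "llama": 2}          # what training-time tokenization used
fast_vocab = {"op": 10, "en": 11, "llama": 2}  # a mis-converted fast tokenizer's view

def encode(pieces, vocab):
    """Map pre-split text pieces to token IDs, using 0 for unknowns."""
    return [vocab.get(p, 0) for p in pieces]

# The slow tokenizer splits "openllama" the way the model expects...
slow_ids = encode(["open", "llama"], slow_vocab)
# ...while the mis-converted fast tokenizer splits the same text differently.
fast_ids = encode(["op", "en", "llama"], fast_vocab)

print(slow_ids)  # [1, 2]
print(fast_ids)  # [10, 11, 2]
```

Since the model only learned distributions over the training-time token IDs, the mismatched sequence is effectively noise to it, which would explain lower-than-expected leaderboard numbers rather than a genuinely weaker model.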