Cannot reproduce results in the table #3
Comments
Thanks for raising this issue; we are currently investigating it. Based on initial checks, it may be due to changed behavior of LlamaTokenizer when we upgraded the transformers library version (from git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c to 4.28.1).
Thanks a lot! So will the provided transformers version (git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c) help me reproduce the correct results?
We are currently retesting the models, but it would be a great help if you could also try with the older transformers version (pip install git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c). If you can reproduce the results with it, then we will know the cause of the issue for sure, and we can revert to this transformers version in the short term. In the long term, we may need to debug the LlamaTokenizer in the newer library version.
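For convenience, the same pin can also be expressed as a `requirements.txt` entry. This is a minimal sketch using the exact commit hash quoted above:

```
# requirements.txt -- pin transformers to the commit referenced in this thread
transformers @ git+https://github.com/huggingface/transformers.git@057e1d74733f52817dc05b673a340b4e3ebea08c
```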
We have confirmed that the problem is due to the transformers library version; this has been fixed in the latest commit. For example, the command
Thanks for your reply. I've tried again and the results look fine now. Can you provide more details about the cause of this problem, so that I can avoid such version conflicts in the future? I'd be very grateful!
No problem. We are still working to ensure that the issue is fully resolved in the newer transformers version; it is a subtle issue, as the newer LlamaTokenizer tokenizes whitespace slightly differently.
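As an illustration only (not the maintainers' actual debugging code), one way to see such a whitespace difference is to run a small comparison script under each pinned transformers version and diff the printed tokens. The checkpoint name `huggyllama/llama-7b` and the sample strings below are assumptions, not from this thread:

```python
# Hypothetical sketch: inspect how LlamaTokenizer splits leading whitespace.
# Run once under each pinned transformers version and compare the output.
import transformers
from transformers import LlamaTokenizer

# Assumed checkpoint; substitute whichever LLaMA weights you use locally.
tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")

samples = ["Answer:", " Answer:", "\nAnswer:"]
print(f"transformers version: {transformers.__version__}")
for text in samples:
    ids = tokenizer.encode(text, add_special_tokens=False)
    tokens = tokenizer.convert_ids_to_tokens(ids)
    print(f"{text!r} -> {tokens}")
```

If the token lists differ between the two versions for prompts containing leading whitespace or newlines, that would directly change the evaluation prompts and could account for the score gap.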
Hi, may I ask for an update on this issue?
Great thanks for your work! I tried exactly the same settings, but I got different results on MMLU and BBH. The alpaca-tuned llama always performs worse than the original llama (7B or 13B). Is there anything wrong with the loaded models?