When evaluating MMStar and HallusionBench with Llama-3.2-11B-Vision-Instruct, the model's output sentences (the generated predictions) are incomplete; after tokenization, most of them cluster around 130 tokens. This does not happen when evaluating MMVet.
Hello, for MCQ and Y/N datasets we set max_new_token to 128. You can modify this at https://github.com/open-compass/VLMEvalKit/blob/main/vlmeval/vlm/llama_vision.py#L200
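A minimal sketch of the dataset-dependent generation cap described above. The function and set names here are illustrative assumptions, not the actual VLMEvalKit API; the real logic lives in `vlmeval/vlm/llama_vision.py` at the linked line.

```python
# Hypothetical sketch: short-answer benchmarks (MCQ / yes-no) get a small
# generation budget, which can truncate longer predictions.
def pick_max_new_tokens(dataset: str, default: int = 512) -> int:
    """Return the max_new_tokens budget for a given benchmark name."""
    # Assumption: MMStar and HallusionBench are treated as mcq / y/n datasets.
    SHORT_ANSWER_SETS = {"MMStar", "HallusionBench"}
    if dataset in SHORT_ANSWER_SETS:
        return 128  # raise this cap if predictions are being cut off
    return default  # open-ended benchmarks like MMVet keep a larger budget

print(pick_max_new_tokens("MMStar"))  # 128
print(pick_max_new_tokens("MMVet"))   # 512
```

Raising the cap for short-answer sets trades longer, complete predictions against slower evaluation, since generation time grows with the token budget.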
Got it, thanks!
FangXinyu-0913