Hello, I am running into a problem: using the same input parameters but varying the `--limit` value, the responses differ. This behavior is unexpected, since changing the limit should only affect the number of examples evaluated, not the response content for identical cases.
This is my command:

```
lm_eval --model hf --limit 0.01 --model_args pretrained=../../../llms/Meta-Llama-3.1-8B-Instruct --tasks gsm8k_cot --device cuda:4 --log_samples --batch_size 1 --output_path ./results/gsm8k_cot/hf_test_0.01_v2.json --gen_kwargs max_gen_toks=512 do_sample=true temperature=0.5 --seed 42,42,42,42
```
However, when the limit is changed to 0.015, the response for the same example is different, and nearly all responses differ between the two settings.
How can I fix this and get identical responses? Thanks.
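For what it's worth, here is a toy sketch of one plausible mechanism (this is not lm-eval's actual code, and the length-based reordering is an assumption): with `do_sample=true` and a seed set once up front, each sampled generation consumes RNG state, so if adding examples changes the order in which requests are processed, the same example can land on a different RNG position and receive different tokens.

```python
import random

def generate(prompts, seed=42):
    """Toy stand-in for sampled decoding: one RNG draw per prompt.

    Assumption (hypothetical): the harness reorders prompts by length
    before generating, as some frameworks do for batching efficiency.
    """
    random.seed(seed)  # seeded once per run, like --seed 42,...
    order = sorted(range(len(prompts)), key=lambda i: len(prompts[i]))
    draws = {}
    for i in order:
        # each "generation" advances the shared RNG state,
        # analogous to do_sample=true with temperature > 0
        draws[prompts[i]] = random.randint(0, 99)
    return draws

small = generate(["aa", "bbbb", "c"])       # e.g. --limit 0.01
large = generate(["aa", "bbbb", "c", "d"])  # e.g. --limit 0.015

# "c" is processed first in both runs, so it gets the same draw;
# "aa" shifts from the 2nd to the 3rd RNG position in the larger run,
# so its draw can change even though the prompt and seed are identical.
```

If this is the cause, greedy decoding (`do_sample=false`) should make the outputs limit-independent, since no RNG state is consumed per generation.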