"gen_args_0": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 38"
},
"gen_args_1": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 27.675"
},
"gen_args_2": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 30"
}
Why are there multiple `gen_args` entries (`gen_args_0`, `gen_args_1`, `gen_args_2`), and why are the `resps` numerical values?
How can we determine which answer the LLM actually chose?
How can we get the model's original response?
"gen_args_0": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 38"
},
"gen_args_1": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 27.675"
},
"gen_args_2": {
"arg_0": "Question: a shopkeeper sold an article offering a discount of 5 % and earned a profit of 31.1 % . what would have been the percentage of profit earned if no discount had been offered ?\nAnswer:",
"arg_1": " 30"
}
```json
...
"filtered_resps": [
  ["-7.680142164230347", "False"],
  ["-17.299633383750916", "False"],
```