
fail to reproduce Deepseek-math result #2555

Open
zhuqiangLu opened this issue Dec 10, 2024 · 8 comments

Labels
asking questions For asking for clarification / support on library usage. validation For validation of task implementations.

Comments

@zhuqiangLu

zhuqiangLu commented Dec 10, 2024

Hi there, I failed to reproduce the reported deepseek-math result on the GSM8K benchmark with 8-shot CoT. My result is significantly lower than the reported one (0.64 vs. 0.82).

@baberabb
Contributor

baberabb commented Dec 10, 2024

Hi! Can you provide the reference?

@zhuqiangLu
Author

I ran the following command
accelerate launch -m lm_eval --model hf --model_args pretrained=deepseek-ai/deepseek-math-7b-instruct,dtype=bfloat16 --apply_chat_template --fewshot_as_multiturn --log_samples --output_path eval_results --tasks gsm8k_cot --batch_size 4

Here is the result:
[screenshot: lm-eval results table]

According to the official deepseek-math repo, GSM8K CoT should be 82.9%.
I ran the deepseek-math evaluation on the same model and got 82.18% (not exactly the reported performance, but close enough for me).
[screenshot: deepseek-math evaluation output]

@zhuqiangLu
Author

Here is an evaluation sample. There are some weird Unicode characters; I suspect the degraded performance is caused by the chat template.
[screenshot: evaluation sample containing escaped Unicode characters]
Also, the official deepseek-math repo does suggest avoiding the chat template function:
[screenshot: note from the deepseek-math README]

@zhuqiangLu
Author

I am not familiar with Unicode, but according to an online Unicode translator, \uff5c is the fullwidth vertical line "｜" and \u2581 is the lower one eighth block "▁".

So "<\uff5cbegin\u2581of\u2581sentence\uff5c>User" decodes to "<｜begin▁of▁sentence｜>User", and "<｜begin▁of▁sentence｜>" is exactly the bos_token of Deepseek-math.

But I am not sure how to fix this.
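
(For reference, a minimal Python check of those escapes against the tokenizer; the model ID is taken from the command above:)

from transformers import AutoTokenizer

# U+FF5C is the fullwidth vertical line "｜"; U+2581 is the lower one eighth block "▁"
s = "<\uff5cbegin\u2581of\u2581sentence\uff5c>"
print(s)  # <｜begin▁of▁sentence｜>

tok = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-math-7b-instruct")
print(tok.bos_token)       # the model's BOS token
print(tok.bos_token == s)  # True if the escaped string is exactly the BOS token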

@baberabb
Contributor

baberabb commented Dec 11, 2024

I think by "avoiding" they mean the case where you want to format the chats manually. Otherwise it looks like the chat_template formats the messages in a similar way.
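
(You can check what the template renders yourself; a minimal sketch:)

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-math-7b-instruct")
messages = [{"role": "user", "content": "Q: 2 + 2 = ?\nA:"}]
# tokenize=False returns the rendered prompt string; add_generation_prompt=True
# appends the assistant turn header so the model starts answering directly
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # exactly what --apply_chat_template feeds the model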

Their implementation seems similar to gsm8k_cot_llama except for the fewshots: instead of ending with ...The final answer is <answer_number>, they end with
...\nSo the answer is $\\boxed{<answer_number>}$. (source)

The doc_to_text should also be changed to:
doc_to_text: "Q: {{question}}\nA:" (source)

Their answer extraction is also quite different. You can remove the filter_list in the config and use a per-sample process_results, like minerva MATH does:

process_results: !function utils.process_results
and
def process_results(doc: dict, results: List[str]) -> Dict[str, int]:

where the process_results function will call eval_last_single_answer(extract_gsm_few_shot_cot_answer(pred)).

eval_last_single_answer calls a lot of helper functions, so you'll have to copy those over too.
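
Wiring it together would look roughly like this (a sketch only; the exact signatures of the copied deepseek-math helpers are assumptions for illustration):

from typing import Dict, List

# eval_last_single_answer / extract_gsm_few_shot_cot_answer are copied from the
# deepseek-math repo; the call signatures below are assumed, not verified
def process_results(doc: dict, results: List[str]) -> Dict[str, int]:
    pred = extract_gsm_few_shot_cot_answer(results[0])
    item = {"answer": doc["answer"].split("####")[-1].strip(), "prediction": pred}
    return {"exact_match": int(eval_last_single_answer(item))}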

Hope this is helpful!

@baberabb baberabb added validation For validation of task implementations. asking questions For asking for clarification / support on library usage. labels Dec 12, 2024
@zhuqiangLu
Author

A quick update on the reproduced results: on 50 samples, I get the reported result (82.x). I am going to run the experiment on the whole benchmark later today, but currently I am out of GPUs.
Here is my configuration for the experiment:

dataset_name: main
dataset_path: gsm8k
doc_to_target: '{{answer.split(''####'')[-1].strip() if answer is defined else target}}'
# doc_to_text: "Given the following problem, reason and give a final answer to the problem.\nProblem: {{question}}\nYour response should end with \"The final answer is [answer]\" where [answer] is the response to the problem.\n"
doc_to_text: "Q:{{question}}\nPlease reason step by step, and put your final answer within \\boxed{}."
# doc_to_text: "Q: {{question}}\nA:"
fewshot_config:
  sampler: first_n
  samples:
  - question: There are 15 trees in the grove. Grove workers will plant trees in the
      grove today. After they are done, there will be 21 trees. How many trees did
      the grove workers plant today?
    target: There are 15 trees originally. Then there were 21 trees after some more
      were planted. So there must have been 21 - 15 = 6. \nSo the final answer is $\\boxed{6}$.
  - question: If there are 3 cars in the parking lot and 2 more cars arrive, how many
      cars are in the parking lot?
    target: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. \nSo the final answer is $\\boxed{5}$.
  - question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many
      pieces do they have left in total?
    target: Originally, Leah had 32 chocolates. Her sister had 42. So in total they
      had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. \nSo the final answer is $\\boxed{39}$.
  - question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12
      lollipops. How many lollipops did Jason give to Denny?
    target: Jason started with 20 lollipops. Then he had 12 after giving some to Denny.
      So he gave Denny 20 - 12 = 8. \nSo the final answer is $\\boxed{8}$.
  - question: Shawn has five toys. For Christmas, he got two toys each from his mom and
      dad. How many toys does he have now?
    target: Shawn started with 5 toys. If he got 2 toys each from his mom and dad,
      then that is 4 more toys. 5 + 4 = 9. \nSo the final answer is $\\boxed{9}$.
  - question: There were nine computers in the server room. Five more computers were
      installed each day, from monday to thursday. How many computers are now in the
      server room?
    target: There were originally 9 computers. For each of 4 days, 5 more computers
      were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. \nSo the final answer is $\\boxed{29}$.
  - question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,
      he lost 2 more. How many golf balls did he have at the end of wednesday?
    target: Michael started with 58 golf balls. After losing 23 on tuesday, he had
      58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls.\nSo the final answer is $\\boxed{33}$.
  - question: Olivia has $23. She bought five bagels for $3 each. How much money does
      she have left?
    target: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15
      dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. \nSo the final answer is $\\boxed{8}$.
process_results: !function utils.process_results_deepseek
generation_kwargs:
  do_sample: false
  until:
  - '<|end▁of▁sentence|>'
  - '<|eot_id|>'
  - '<|start_header_id|>user<|end_header_id|>'
  - 'Q:'
  - </s>
  - <|im_end|>
tag:
- chain_of_thought
metadata:
  version: 3.0
metric_list:
- aggregation: mean
  higher_is_better: true
  ignore_case: true
  ignore_punctuation: false
  metric: exact_match
  regexes_to_ignore:
  - ','
  - \$
  - '(?s).*#### '
  - \.$
num_fewshot: 8
output_type: generate_until
repeats: 1
task: gsm8k_cot_deepseek
test_split: test

process_results_deepseek is defined as

from typing import Dict, List

def process_results_deepseek(doc: dict, results: List[str]) -> Dict[str, int]:

    candidate = results[0]

    item = dict()
    # gold answer: everything after the final "####" marker in the dataset answer
    item['answer'] = doc['answer'].split('####')[-1].strip()
    # extract_gsm_few_shot_cot_answer is copied from the official deepseek-math repo
    item['prediction'] = extract_gsm_few_shot_cot_answer(None, candidate, None)

    retval = 1 if is_correct(item) else 0

    return {
        "exact_match": retval,
    }

where extract_gsm_few_shot_cot_answer and is_correct are copied from the official deepseek-math repo.
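
(If you'd rather not copy the whole helper chain, here is a simplified stand-in that only handles flat \boxed{...} answers. It is an illustration, not the official extraction logic:)

import re

def extract_boxed_answer(text: str) -> str:
    # Return the contents of the last \boxed{...} in the generation, or "" if none.
    # The [^{}]* pattern does not handle nested braces (e.g. \boxed{\frac{1}{2}}),
    # which is fine for GSM8K's plain integer answers.
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1].strip() if matches else ""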

The experiment command is
accelerate launch -m lm_eval --model hf --model_args pretrained=deepseek-ai/deepseek-math-7b-instruct,dtype=bfloat16,add_bos_token=False --apply_chat_template --fewshot_as_multiturn --log_samples --output_path eval_results --tasks $TASK --batch_size 1 --limit 50

Devices: 2 x A6000

@zhuqiangLu
Author

Here is a result:
[screenshot: evaluation result]

@zhuqiangLu
Author

I will close this issue once I get the expected result on the full gsm8k test benchmark.
