
Evaluation result of bigcode/starcoder2-3b on gsm8k_pal does not match the paper #272

nongfang55 opened this issue Sep 13, 2024 · 0 comments


I tried to evaluate the model bigcode/starcoder2-3b on the pal-gsm8k-greedy benchmark with the command below:

accelerate launch --main_process_port 6789 main.py --model bigcode/starcoder2-3b --max_length_generation 2048 --tasks pal-gsm8k-greedy --n_samples 1 --batch_size 1 --do_sample False --allow_code_execution

This produced the following result:

  "pal-gsm8k-greedy": {
    "accuracy": 0.04624715693707354,
    "num_failed_execution": 1016
  },

Since the value reported in the StarCoder2 paper is 27.7 (Table 14), the harness result does not match the paper.
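A quick sanity check on the reported numbers (assuming the harness scores over GSM8K's standard 1319-problem test split) suggests the gap is dominated by execution failures rather than wrong answers:

```python
# Sanity check on the harness output above.
# Assumption: GSM8K's test split has 1319 problems.
total = 1319
failed = 1016          # num_failed_execution from the harness output
accuracy = 0.04624715693707354

print(f"correct: ~{round(accuracy * total)} problems")
print(f"failed executions: {failed / total:.0%} of all problems")
```

With roughly 77% of generations failing to execute, accuracy is capped far below the paper's 27.7 regardless of model quality, so inspecting the saved generations (e.g. for truncated code or sandbox errors) might reveal the cause.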

Could someone look into why this happens?
