Inquiry about accept length results for EAGLE-Qwen2-7B-Instruct #143
Thank you for your interest. Could you please provide more detailed error information from Qwen on the main branch?
As for the error on the main branch:
@Liyuhui-12 @zhangtia16 Hello, can you provide the test benchmarks for EAGLE Qwen2? The alpha value I tested on the EAGLE-Qwen2-72B-Instruct model is relatively low.
The alpha of EAGLE-Qwen2-72B-Instruct is [0.5, 0.34, 0.32, 0.33, 0.47].
I have configured my setup similarly to your points 1-5 (modified v1 branch, bf16, MT-Bench, chain-draft, temperature=0), with the only difference being that I am using the EAGLE-Qwen2-7B-Instruct checkpoints provided by the authors. Here are my alpha results: [0.31, 0.24, 0.25, 0.31, 0.31], corresponding to an accept length of 1.87 (already accounting for the +1 token issue).
How is the accept length of 1.87 calculated?
I did a test on EAGLE-Qwen2-7B-Instruct, and the result is as follows:
Since the authors did not directly output the acceptance length, I modified the code to calculate it. For details on the modification, please refer to issue #146. In summary, we record the number of accepted tokens at each step for every sample. Then the average number of accepted tokens (first averaged across the steps of a single sample, and then averaged across all samples) represents the acceptance length of the dataset. As for your implementation, I think the right version should follow the calculation above. Btw, did you use the released checkpoints on MT-Bench to get the alpha results?
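The two-level averaging described above (per-sample mean over steps, then mean over samples) can be sketched as follows. This is a minimal illustration, not the actual modification from issue #146; the function name `accept_length` and the input structure are hypothetical.

```python
def accept_length(accepted_per_step):
    """Hypothetical sketch of the two-level accept-length averaging.

    accepted_per_step: list of lists, one inner list per sample, holding
    the number of accepted draft tokens recorded at each decoding step.
    """
    # First average across the steps of each individual sample...
    per_sample = [sum(steps) / len(steps) for steps in accepted_per_step]
    # ...then average those per-sample means across all samples.
    return sum(per_sample) / len(per_sample)
```

Note that this averages sample means rather than pooling all steps together, so samples with many decoding steps do not dominate the dataset-level number.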
@zhangtia16 You are correct. I modified the accept-length calculation and re-ran the EAGLE-Qwen2-7B-Instruct model (chain-draft), and the new results are generally consistent with yours. Here is the new calculation method for the accept length:
The checkpoints used are those published by the author at https://huggingface.co/yuhuili/EAGLE-Qwen2-7B-Instruct. If our testing results are correct, the model's performance does not appear to surpass that of the Vicuna and Llama models released by the author.
Hi EAGLE Team,
Thank you for your contributions to the community!
I downloaded the released weights for EAGLE on Qwen2-7B-Instruct from https://huggingface.co/yuhuili/EAGLE-Qwen2-7B-Instruct. However, while testing the weights on the MT-Bench dataset, I noticed that the accept length is relatively low as follows:
Model: Qwen2-7B-Instruct
Dataset: MT-Bench
EAGLE version: EAGLE-1
For your information, I successfully reproduced the EAGLE1-Vicuna-7B results, achieving an accept length of over 3. Additionally, I have used your newly released Qwen2-related code (modeling_qwen2_kv.py) from the EAGLE-2 code branch; however, I was unable to run it successfully with the EAGLE-2 code branch, as mentioned in issue #136. Consequently, I adapted the Qwen2-related code to the EAGLE-1 code branch for testing.
I'm curious about the low accept length I'm experiencing with EAGLE-Qwen2. I see that only the weights for EAGLE-Qwen2 were released, without accompanying results. Could you please share the accept length or any other results for EAGLE-Qwen2 on MT-Bench?
Thank you!