Hi EAGLE Team,

Thank you for your contributions to the community!

I downloaded the released weights for EAGLE on Qwen2-7B-Instruct from https://huggingface.co/yuhuili/EAGLE-Qwen2-7B-Instruct. However, while testing the weights on the MT-Bench dataset, I noticed that the accept length is relatively low. My setup is as follows:

Model: Qwen2-7B-Instruct
Dataset: MT-Bench
EAGLE version: EAGLE-1
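For context, this is roughly how I load the weights, following the README's example; the exact `EaModel.from_pretrained` signature may differ between branches:

```python
# Sketch of my loading code, adapted from the EAGLE README example;
# the argument names follow that example and may vary across branches.
import torch
from eagle.model.ea_model import EaModel

model = EaModel.from_pretrained(
    base_model_path="Qwen/Qwen2-7B-Instruct",
    ea_model_path="yuhuili/EAGLE-Qwen2-7B-Instruct",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)
model.eval()
```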
For your information, I successfully reproduced the EAGLE-1 Vicuna-7B results, achieving an accept length of over 3. Additionally, I used your newly released Qwen2 code (modeling_qwen2_kv.py) from the EAGLE-2 branch; however, I was unable to run it successfully on that branch, as described in issue 136. Consequently, I adapted the Qwen2 code to the EAGLE-1 branch for testing.
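For clarity on how I measure it: I take accept length as the average number of tokens produced per draft-verify cycle, computed from the answer files written by the evaluation script. A minimal sketch, assuming the per-turn `idxs` (decoding steps) and `new_tokens` fields that my run produced; adjust the field names if your branch logs differently:

```python
# Hedged sketch: mean accept length from an MT-Bench answer file.
# Field names ("choices", "idxs", "new_tokens") are from my local
# output and may differ on other branches of the repo.
import json

total_tokens, total_steps = 0, 0
with open("qwen2-7b-instruct-eagle.jsonl") as f:  # hypothetical filename
    for line in f:
        record = json.loads(line)
        for choice in record["choices"]:
            total_tokens += sum(choice["new_tokens"])
            total_steps += sum(choice["idxs"])

print("mean accept length:", total_tokens / total_steps)
```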
I'm curious about the low accept length I'm experiencing with EAGLE-Qwen2. I see that only the weights for EAGLE-Qwen2 were released, without accompanying results. Could you please share the accept length or any other results for EAGLE-Qwen2 on MT-Bench?
Thank you!
As for the error on the main branch:

1. The function `initialize_tree` in utils_alpha.py returns 5 values, whereas the `forward` function in ea_model.py returns only 3 (see the sketch after this list).
2. I noticed that the authors removed the `logits_processor` argument from the `forward` function in ea_model.py on the main branch, compared to the EAGLE-1 branch. Could the authors please explain why this argument was deleted? I see that `logits_processor` is still being passed into the function call in evaluation/gen_ea_alpha_vicuna.py on the main branch.
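To make point 1 concrete, here is an illustration of how a 5-vs-3 mismatch of this kind surfaces; the function bodies below are placeholders, not the repo's actual code, and only the return arities come from the observation above:

```python
# Illustration only: the bodies are placeholders; just the 5-vs-3
# return-value mismatch reflects what I observed on the main branch.

def forward():
    # ea_model.py's forward returns 3 values on the main branch
    return "logits", "hidden_state", "sample_token"

def initialize_tree():
    # utils_alpha.py expects to hand back 5 values, so unpacking
    # forward()'s 3 values fails
    a, b, c, d, e = forward()
    return a, b, c, d, e

try:
    initialize_tree()
except ValueError as e:
    print(e)  # not enough values to unpack (expected 5, got 3)
```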