Hi, I'm a little confused about the "pass@k" metric in the paper.
It appears that on APPS you reported the test-case average (non-strict accuracy), even though you refer to the "proportion of problems successfully solved" as the pass@k metric, which matches the widely recognized definition. However, the accuracies reported for CodeRL and WizardCoder in Tables 4 and 5 appear to be strict.
Could you clarify which metric is being used for evaluation? Your confirmation would be greatly appreciated. Thank you.
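For concreteness, here is a minimal sketch contrasting the two metrics as I understand them. The `pass_at_k` estimator is the unbiased one from the Codex paper (Chen et al., 2021); `test_case_average` is only my illustrative reading of the APPS-style non-strict metric, not code from your repo:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper: the probability
    that at least one of k samples, drawn without replacement from n
    generations of which c pass ALL test cases, is correct (strict)."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

def test_case_average(results: list[tuple[int, int]]) -> float:
    """Illustrative APPS-style 'test case average' (non-strict):
    mean over problems of the fraction of test cases passed, so a
    partially correct program still earns credit.
    results: list of (num_tests_passed, num_tests_total) per problem."""
    return sum(passed / total for passed, total in results) / len(results)
```

Under the strict pass@k definition, a problem with 9/10 test cases passed contributes 0; under the test-case average it contributes 0.9, so the two metrics can diverge substantially.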