According to the filter function used here, CEBin seems to be trained and evaluated only on functions with more than 5 basic blocks.
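For readers who haven't opened the linked code, the condition I approximated is essentially the following. This is only a sketch of the filtering idea; `Func` and `basic_blocks` are illustrative stand-ins for whatever disassembler objects CEBin actually uses, not its real API:

```python
from dataclasses import dataclass

@dataclass
class Func:
    name: str
    basic_blocks: list  # illustrative stand-in for a disassembler's BB list

def keep_function(func: Func) -> bool:
    # The filtering condition discussed above: keep only functions
    # with more than 5 basic blocks.
    return len(func.basic_blocks) > 5

# Example: a 3-block function is dropped, a 6-block one is kept.
funcs = [Func("small", [None] * 3), Func("large", [None] * 6)]
filtered = [f for f in funcs if keep_function(f)]
assert [f.name for f in filtered] == ["large"]
```

Small functions tend to be the hardest cases for binary similarity models (little distinguishing structure survives optimization), so excluding them would be expected to raise recall@1.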
I used similar filter conditions to select binary functions from the BinaryCorp dataset, then used the extracted functions to evaluate jTrans, and found that jTrans's recall@1 is also very high. The results I got are as follows:
| model | O0-O3 | O1-O3 | O2-O3 | O0-Os | O1-Os | O2-Os |
| --- | --- | --- | --- | --- | --- | --- |
| jTrans | 0.6027 | 0.7514 | 0.8631 | 0.6233 | 0.7046 | 0.7870 |
| jTrans (reported in your paper) | 0.376 | 0.580 | 0.661 | 0.443 | 0.586 | 0.585 |
| CEBin (in your paper) | 0.776 | 0.826 | 0.920 | 0.839 | 0.874 | 0.845 |
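For clarity about the metric: recall@1 here is the fraction of query functions whose ground-truth counterpart ranks first in the candidate pool. A minimal sketch of that computation, assuming dense embeddings scored by cosine similarity (the usual setup for jTrans-style models; the function and argument names are mine):

```python
import numpy as np

def recall_at_1(query_embs: np.ndarray, pool_embs: np.ndarray,
                gt: np.ndarray) -> float:
    """Fraction of queries whose ground-truth pool entry is the
    top-1 nearest neighbor under cosine similarity."""
    # Normalize rows so the dot product equals cosine similarity.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    top1 = (q @ p.T).argmax(axis=1)  # most similar pool index per query
    return float((top1 == gt).mean())
```

Whatever the exact scoring code, the key point is that both models should see the same query and pool sets.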
This is a little confusing, so I wonder whether jTrans was evaluated on the same set of binary functions as CEBin in your experiments.
Thanks for your feedback. We reported the jTrans performance on BinaryCorp according to the original jTrans paper. It seems that this filtering strategy greatly affects the evaluation results.