
Question about the results on the coat dataset. #3

Open · LDR-KDD opened this issue Aug 13, 2020 · 3 comments


LDR-KDD commented Aug 13, 2020

Thanks for sharing the code. I followed your instructions, ran the code directly (with no modifications), and got the following results in the ranking_all.csv file:

| Model  | DCG@3     | DCG@5     | DCG@8     | Recall@3  | Recall@5  | Recall@8  | MAP@3     | MAP@5     | MAP@8     |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| wmf    | 0.0578918 | 0.0786351 | 0.1034409 | 0.0652348 | 0.1101706 | 0.1790738 | 0.0422196 | 0.054651  | 0.0690561 |
| expomf | 0.0909645 | 0.1102964 | 0.1276119 | 0.0959223 | 0.1364149 | 0.1836412 | 0.0848237 | 0.097459  | 0.1076237 |
| relmf  | 0.0573253 | 0.0784824 | 0.1036326 | 0.064685  | 0.1102209 | 0.1801171 | 0.0422954 | 0.0555759 | 0.0704563 |
| bpr    | 0.0664874 | 0.0864072 | 0.1110801 | 0.0748996 | 0.1178387 | 0.1868158 | 0.0483589 | 0.0607375 | 0.0754249 |
| ubpr   | 0.0586047 | 0.0793928 | 0.1034167 | 0.0665724 | 0.1117508 | 0.1782271 | 0.0409541 | 0.0537214 | 0.0678179 |

These results differ from the Coat results reported in Table 2 of your paper. Could you tell me how to reproduce the paper's results?
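For context, the DCG@k columns above are presumably the standard discounted cumulative gain; below is a minimal illustrative sketch (the repository's exact implementation and normalization may differ):

```python
import numpy as np

def dcg_at_k(relevance, k):
    """Standard DCG@k over a ranked list of binary relevance labels.

    Illustrative only; the repository's metric code may discount or
    normalize differently.
    """
    rel = np.asarray(relevance, dtype=float)[:k]
    return float(np.sum(rel / np.log2(np.arange(len(rel)) + 2.0)))

# Example: relevant items at ranks 1 and 3 of the ranked list.
print(dcg_at_k([1, 0, 1, 0, 0], k=3))  # 1/log2(2) + 1/log2(4) = 1.5
```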


LDR-KDD commented Aug 13, 2020

```python
for seed in np.arange(num_sims):
    tf.set_random_seed(12345)
```

In addition, the paper reports the ranking metrics averaged over 10 different initializations. However, the loop variable `seed` in the lines above is never used inside the for-loop; the random seed is always fixed at 12345.
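Presumably the intended behavior is something like the sketch below (`num_sims` comes from the snippet above; everything else is my assumption about the surrounding code):

```python
import numpy as np
import tensorflow as tf  # TF1-style API, matching the snippet above

num_sims = 10  # number of independent initializations averaged in the paper

for seed in np.arange(num_sims):
    # Seed each run with the loop variable so the 10 initializations
    # actually differ; the released code passes the constant 12345 here.
    tf.set_random_seed(int(seed))
    # ... build, train, and evaluate the model for this run ...
```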


LDR-KDD commented Aug 14, 2020

I also ran the experiments on the Yahoo dataset and cannot reproduce the results reported in your paper:

| Model  | DCG@3     | DCG@5     | DCG@8     | Recall@3  | Recall@5  | Recall@8  | MAP@3     | MAP@5     | MAP@8     |
|--------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|-----------|
| wmf    | 0.0460291 | 0.0620843 | 0.079735  | 0.0519853 | 0.0864598 | 0.1356667 | 0.0331771 | 0.0424727 | 0.0520434 |
| expomf | 0.076085  | 0.0892335 | 0.1024027 | 0.0827668 | 0.11127   | 0.1477925 | 0.0635857 | 0.0718887 | 0.079471  |
| relmf  | 0.04364   | 0.0592101 | 0.0778098 | 0.0498028 | 0.083325  | 0.135134  | 0.031848  | 0.0408073 | 0.0507573 |
| bpr    | 0.0445924 | 0.0603531 | 0.0784485 | 0.0510519 | 0.0849282 | 0.1353349 | 0.0324443 | 0.0414795 | 0.051234  |
| ubpr   | 0.0447948 | 0.0598265 | 0.0782432 | 0.0512751 | 0.0835302 | 0.1348472 | 0.0324984 | 0.0412117 | 0.0511174 |

EricLangezaal commented

We are experiencing the same problem: none of the results in the paper seem reproducible. Running the code as-is, we get results that are almost identical to those posted in this issue, with ExpoMF always performing best. The same issue holds for the cold-start and rare-item tables.

We tried fixing the seed issue mentioned above so that each run is actually seeded differently. This does lead to much more variation across runs, but the averaged results are still similar to those posted in this issue, and again do not support any of the conclusions in the paper.
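For completeness, this is roughly how we averaged the per-run metrics after the fix (the per-run file layout here is an assumption; adjust the paths to however you write the results):

```python
import pandas as pd

# Hypothetical layout: one ranking_all.csv written per seeded run.
runs = [pd.read_csv(f"results/seed_{s}/ranking_all.csv", index_col=0)
        for s in range(10)]

# Average each metric (DCG@k, Recall@k, MAP@k) per model across runs.
mean_metrics = pd.concat(runs).groupby(level=0).mean()
print(mean_metrics.round(4))
```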

@usaito Could you perhaps explain how we can approximately reproduce the results from the paper?
