
test and val gap in log file #1

Open
jerrywyn opened this issue Mar 24, 2023 · 1 comment
jerrywyn commented Mar 24, 2023

I tried running the code and found in the log file that the validation result and the test result are very different; the validation result is much worse than the test result. Why is this?
Also, the result I get from running the code differs from the result reported in the paper. Why is that?
script:
bs=128, dataset='voc2007', estimator='ours', filter_outlier=False, lr=5e-05, nc=20, nepochs=20, noise_rate_n=0.0343, noise_rate_p=0.4, nworkers=4, out='./results/multi-label-reweight_p0.4n0.0343_voc2007_ours_resnet50/', root='./data/voc/', sample_epoch=10, sample_th=0.5, seed=1, warmup_epoch=20, weight_decay=0
result:
[image: screenshot of the obtained results]
result in paper:
[image: screenshot of the results reported in the paper]

@jerrywyn jerrywyn changed the title loss inf error test and val gap in log file Mar 27, 2023
@ShikunLi
Owner

Thanks for your interest in our work.
First, the validation set is noisy in our setting, and the test set is clean. Hence, the validation result is much worse than the test result.
Second, the differences between the results obtained by running the code and those reported in the paper may arise because the label-noise simulation varies across random seeds, and because we simplified the implementation when preparing the code for release.
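To illustrate the seed dependence mentioned above: the repository's actual noise-injection code is not shown in this thread, but a minimal NumPy sketch of how seed-dependent multi-label noise at the script's rates (`noise_rate_p=0.4` for positives, `noise_rate_n=0.0343` for negatives) could be simulated might look like this (the function name and exact flipping scheme are assumptions for illustration):

```python
import numpy as np

def add_label_noise(labels, noise_rate_p, noise_rate_n, seed):
    """Hypothetical sketch: flip multi-label annotations.

    Positives become negative with probability noise_rate_p;
    negatives become positive with probability noise_rate_n.
    The flips depend on the RNG seed, so different seeds produce
    different noisy label matrices even at the same noise rates.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).astype(bool)
    flip_pos = rng.random(labels.shape) < noise_rate_p  # applied where label == 1
    flip_neg = rng.random(labels.shape) < noise_rate_n  # applied where label == 0
    noisy = np.where(labels, ~flip_pos, flip_neg)
    return noisy.astype(int)

# Two seeds, same rates -> generally different noisy label matrices.
clean = np.array([[1, 0, 0], [0, 1, 1]])
print(add_label_noise(clean, noise_rate_p=0.4, noise_rate_n=0.0343, seed=1))
print(add_label_noise(clean, noise_rate_p=0.4, noise_rate_n=0.0343, seed=2))
```

Because the noisy training and validation sets are regenerated this way, results can shift across seeds even with identical hyperparameters.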
