
There are some conditions where categorical focal loss does not improve from the start #29

Open
statcom opened this issue Nov 4, 2022 · 0 comments

Comments


statcom commented Nov 4, 2022

This may not be a problem in your code but in the algorithm itself. The loss function (categorical focal loss, CFL) worked well with some model/data combinations but did not converge at all in other cases. For example, I tried CFL with ResNet50 for bacterial detection and it did not converge at all, staying at 26% training accuracy on 4 classes, whereas categorical cross-entropy had no such problem and reached 90% test accuracy. Yet CFL with a 1D-CNN on the same data converged well, reaching 99% test accuracy.
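For reference, here is a minimal sketch of the standard categorical focal loss, FL(p_t) = -α(1 - p_t)^γ log(p_t) from Lin et al. (2017), assuming softmax outputs and one-hot labels; the function name and the γ/α defaults are illustrative, not necessarily this repository's implementation:

```python
import tensorflow as tf

def categorical_focal_loss(gamma=2.0, alpha=0.25):
    """Sketch of categorical focal loss for one-hot targets:
    -alpha * (1 - p_t)^gamma * log(p_t), summed over classes.
    gamma/alpha are illustrative defaults, not the repo's."""
    def loss(y_true, y_pred):
        # y_pred is assumed to be softmax probabilities; clip to avoid log(0).
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        # Standard cross-entropy term, nonzero only at the true class.
        cross_entropy = -y_true * tf.math.log(y_pred)
        # Modulating factor down-weights already well-classified examples.
        weight = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * cross_entropy, axis=-1)
    return loss

# Hypothetical usage, e.g. to compare against plain cross-entropy:
# model.compile(optimizer="adam", loss=categorical_focal_loss(gamma=2.0))
```

Note that the (1 - p_t)^γ factor scales the loss down whenever p_t is not small, which is one commonly cited reason focal loss can make less progress early in training than plain cross-entropy.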
