
nll_loss requires log-probabilities as input, which is not given in our code #65

moritzschaefer commented Nov 22, 2023

@MarcoMorik

https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html states that

The input given through a forward call is expected to contain log-probabilities of each class. [...] Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
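For reference, a minimal check (not from our codebase) showing that nll_loss applied to log_softmax outputs is, by definition, the same as cross_entropy applied to the raw scores:

    import torch
    import torch.nn.functional as F

    logits = torch.randn(4, 10)          # raw, unnormalized scores
    labels = torch.randint(0, 10, (4,))

    # cross_entropy == log_softmax followed by nll_loss
    log_probs = F.log_softmax(logits, dim=1)
    nll = F.nll_loss(log_probs, labels, reduction="sum")
    ce = F.cross_entropy(logits, labels, reduction="sum")
    assert torch.allclose(nll, ce)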

    "relative_ce": partial(torch.nn.functional.nll_loss, reduction='sum'),
    "relative_cdf": lambda output, label: torch.nn.functional.nll_loss((output+1e-10).log(), label, reduction="sum")

The way we use these losses does not seem to guarantee that our inputs are log-probabilities. Let me know if you have thoughts on this; if not, let's normalize the outputs of the comparison so they satisfy this requirement.
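Concretely, something like the following sketch (the normalized_nll name and the output/label arguments are illustrative, not our actual code): rescale each row of the comparison outputs to sum to 1 before taking the log, so nll_loss receives valid log-probabilities:

    import torch
    import torch.nn.functional as F

    def normalized_nll(output: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
        # Assumes `output` is non-negative; rescale rows into a
        # probability distribution before taking the log.
        probs = output / output.sum(dim=-1, keepdim=True)
        return F.nll_loss((probs + 1e-10).log(), label, reduction="sum")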
