Hi @junikkoma, `nn.Linear` is used to initialize the weights of the fc layer when the loss module is constructed; during training, only the `forward()` function is called. The layer is initialized just once, here:
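To illustrate the point, here is a minimal sketch of the pattern (illustrative only, not the repository's actual code): the `nn.Linear` call runs once in `__init__`, registering the fc weights as module parameters, while `forward()` merely reuses them on every batch.

```python
import torch
import torch.nn as nn

# Minimal sketch (hypothetical class name, not the repo's actual code):
# an angular-penalty-style loss head. The key point is that nn.Linear is
# constructed once, in __init__, so its weights are registered as
# parameters of the module and persist across batches and epochs.
class AngularLossHead(nn.Module):
    def __init__(self, in_features, num_classes):
        super().__init__()
        # Initialized exactly once, when the module is constructed.
        self.fc = nn.Linear(in_features, num_classes, bias=False)

    def forward(self, embeddings, labels):
        # Called every batch; this reuses self.fc, it does not rebuild it.
        logits = self.fc(embeddings)
        return nn.functional.cross_entropy(logits, labels)
```

Because `self.fc` is an attribute created in `__init__`, its weight tensor is the same object on every call, and the optimizer can update it through backpropagation.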
Hi, thank you for the great implementation. I appreciate your work, as well as your generosity in open-sourcing it.
As mentioned in the title, I have a question about line 35 of `loss_functions.py`, given below:
Angular-Penalty-Softmax-Losses-Pytorch/loss_functions.py
Line 35 in c41d599
To my understanding, this would initialize a new fully connected layer in each epoch of training.
I don't understand how this layer can be optimized via backpropagation if it is re-initialized each time.
It would be a great help if anyone could explain why this inference is wrong.
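The concern above can be checked empirically. If the layer really were re-created each epoch, its weights would reset to fresh random values; a layer constructed once instead keeps accumulating optimizer updates. A small standalone check (illustrative names, not from the repo):

```python
import torch
import torch.nn as nn

# Hypothetical check (not from the repository): a layer created once,
# as in a module's __init__, keeps the same weight tensor across many
# "epochs" of training, and the optimizer updates it in place.
torch.manual_seed(0)
fc = nn.Linear(4, 3, bias=False)           # created once, like in __init__
opt = torch.optim.SGD(fc.parameters(), lr=0.1)

before = fc.weight.detach().clone()
for _ in range(5):                          # several training steps
    x = torch.randn(8, 4)
    y = torch.randint(0, 3, (8,))
    loss = nn.functional.cross_entropy(fc(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()                              # same weight tensor, updated in place

# The weights have drifted from their initial values: they were trained,
# not re-initialized.
changed = not torch.allclose(before, fc.weight)
```

Only a layer constructed *inside* the training loop would exhibit the re-initialization the question worries about.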