Say we have several terms in the loss function, e.g. PDE loss, boundary loss, and initial loss. We can then impose fixed weights on these terms with loss_weights in
model.compile("adam", lr = 1e-3, loss_weights = [1, 10, 10])
Is there a way to make these weights dynamic, so that they counteract the gradient imbalance between loss terms whose gradients have very different magnitudes? As far as I know, there would be two possibilities in theory:
To do this, you need to either modify the source code, or implement a callback to change the weights during training.
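A callback that edits the weights mid-training has to touch backend-specific internals, so here is a minimal sketch of the simpler staged alternative instead: recompile with new loss_weights between calls to model.train(). The names data and net are placeholders for your usual DeepXDE setup, the rebalancing rule is only illustrative, and older DeepXDE versions use epochs= instead of iterations=.

```python
import deepxde as dde

# Assumes `data` and `net` have already been built as usual.
model = dde.Model(data, net)

# Stage 1: equal weights, just to probe the raw magnitude of each loss term.
model.compile("adam", lr=1e-3, loss_weights=[1, 1, 1])
losshistory, train_state = model.train(iterations=5000)

# Last recorded training losses (one value per term, e.g. PDE, BC, IC).
last = [float(l) for l in losshistory.loss_train[-1]]

# Illustrative rebalancing: scale each term toward the mean magnitude.
# The 1e-8 guards against division by zero.
mean_loss = sum(last) / len(last)
new_weights = [mean_loss / (l + 1e-8) for l in last]

# Stage 2: continue training with the rebalanced weights
# (the network parameters carry over across compile calls).
model.compile("adam", lr=1e-3, loss_weights=new_weights)
model.train(iterations=10000)
```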
Based on my experience, fixed and adaptive weights have similar effects. As you can see in the papers you mentioned, the adaptive weights quickly converge to a fixed value, so fixed weights are basically sufficient. Also, it is recommended to use hard constraints for the BC/IC; see the FAQ.
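For the hard-constraint route, the usual pattern is an output transform that builds the BC/IC into the network, so the corresponding loss term (and its weight) disappears entirely. A minimal sketch for a hypothetical 1D Dirichlet problem with u(0) = u(1) = 0 on x in [0, 1] (layer sizes are arbitrary):

```python
# Network whose output automatically vanishes at x = 0 and x = 1.
net = dde.nn.FNN([1] + [50] * 3 + [1], "tanh", "Glorot normal")
net.apply_output_transform(lambda x, y: x * (1 - x) * y)
```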