Adaptive weights #215

Closed · mariusmerkle opened this issue Feb 6, 2021 · 1 comment
@mariusmerkle (Contributor)

Hi @lululxvi,

Say we have several terms in the loss function, e.g. the PDE loss, the boundary loss, and the initial loss. We can then impose fixed weights on these terms with the `loss_weights` argument:

```python
model.compile("adam", lr=1e-3, loss_weights=[1, 10, 10])
```

Is there a way to make these weights dynamic, so that they counteract the gradient imbalance that arises when the gradients of different loss terms have very different magnitudes? As far as I know, such adaptive weighting schemes have been proposed in the literature, which leads to two questions:

  1. Is it possible to implement this in DeepXDE, or does a way to use adaptive weights already exist?
  2. What is your experience with fixed versus adaptive weights in the loss function? Do adaptive weights accelerate convergence?
@lululxvi (Owner) commented Feb 9, 2021

  1. To do this, you need to either modify the source code or implement a callback that changes the weights during training; a rough sketch is given after this list.
  2. Based on my experience, fixed and adaptive weights have similar effects. As you can see in the papers you mentioned, the adaptive weights quickly converge to fixed values, so fixed weights are basically sufficient. Also, it is recommended to use hard constraints for the BCs/ICs instead of BC/IC loss terms; see the FAQ and the second sketch below.
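For concreteness, here is a minimal sketch of the callback-style option, written as a user-level loop rather than a true `dde.callbacks.Callback`: train in short stages and re-compile with rebalanced weights between stages. The rebalancing rule used here (push the weighted loss terms toward a common magnitude) is just one heuristic, and the names `train_with_adaptive_weights`, `n_stages`, and `stage_iters` are illustrative, not part of the DeepXDE API. It also assumes a recent DeepXDE version in which `model.train` takes `iterations` and `model.train_state.loss_train` holds the latest per-term (already weighted) training losses.

```python
import numpy as np

def train_with_adaptive_weights(model, weights, n_stages=10, stage_iters=1000):
    for _ in range(n_stages):
        # Re-compiling resets the Adam optimizer state; that is the
        # price of updating loss_weights without modifying the source.
        model.compile("adam", lr=1e-3, loss_weights=weights)
        model.train(iterations=stage_iters)
        # Latest per-term training losses (assumed to include the weights).
        losses = np.asarray(model.train_state.loss_train, dtype=float)
        # Rescale each weight so that every weighted term would sit near
        # the current mean magnitude in the next stage.
        w = np.asarray(weights, dtype=float)
        weights = list(w * losses.mean() / np.maximum(losses, 1e-8))
    return model
```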

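And as an illustration of the hard-constraint recommendation, here is a minimal sketch for a 1D time-dependent problem on x in [0, 1] with homogeneous Dirichlet BCs u(0, t) = u(1, t) = 0 and IC u(x, 0) = sin(pi x), enforced through an output transform so that no BC/IC loss terms (and hence no weights for them) are needed. It assumes the TensorFlow backend; the particular ansatz is one standard construction, not the only one.

```python
import numpy as np
import deepxde as dde
from deepxde.backend import tf

net = dde.nn.FNN([2] + [50] * 3 + [1], "tanh", "Glorot normal")

def hard_constraint(x, y):
    x_c, t = x[:, 0:1], x[:, 1:2]  # space and time inputs
    # At t = 0 the expression reduces to sin(pi x), enforcing the IC;
    # the factor x (1 - x) zeroes it at x = 0 and x = 1, enforcing the BCs.
    return tf.sin(np.pi * x_c) + t * x_c * (1 - x_c) * y

net.apply_output_transform(hard_constraint)
```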