Implementing Adaptive Loss Weights via Callback - tf.compat.v1 #1085
```python
...
self.model.compile("adam", lr=1e-3, decay=None,
                   loss_weights=[1, 1, 1, 1])
self.model.sess.run(tf.global_variables_initializer())
```

Sorry for posting this a little early (in hindsight), but maybe this solution is helpful for others as well....
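For context, a hedged reconstruction of the kind of callback described above (the class name `RecompileLossWeights`, the `weight_fn` argument, and the `period` are illustrative assumptions, not code from this thread):

```python
import deepxde as dde
from deepxde.backend import tf


class RecompileLossWeights(dde.callbacks.Callback):
    """Hypothetical sketch of the recompile-based approach above
    (tf.compat.v1 backend). Caveat: tf.global_variables_initializer()
    resets *all* variables, including the trained network weights, so
    in practice they would have to be saved and restored around it."""

    def __init__(self, weight_fn, period=1000):
        super().__init__()
        self.weight_fn = weight_fn  # hypothetical callable: epoch -> weights
        self.period = period
        self.epoch = 0

    def on_epoch_end(self):
        self.epoch += 1
        if self.epoch % self.period != 0:
            return
        # Recompile with updated weights, as in the snippet above
        self.model.compile("adam", lr=1e-3,
                           loss_weights=self.weight_fn(self.epoch))
        self.model.sess.run(tf.global_variables_initializer())
```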
Hello Philipp (@PhilippBrendel) and @lululxvi, can you provide an implementation of the adaptive loss weights class? I am also trying to apply the same thing to the 2D wave equation. Since both FNN and MsFFN seem to fail, I would like to try adaptive weights. Thanks in advance!
Hi, I'll use this issue to centralize information regarding adaptive loss weighting:
Other references of interest:
Note: "lambda slighly improves the accuracy" (Fig 13). According to Fig. 13 it does not seem to be so efficient. As stated by @lululxvi in #215, "based on my experience, fixed and adaptive weights have similar effects. As you can see in the papers you mentioned, the adaptive weights quickly converge to a fixed number, and thus fixed weights are basically sufficient. Also, it is recommended to use hard constraints for BC/IC, see FAQ". I would definitely define adaptive weighting as callbacks. I'll try to figure out a structure for implementing a simple weighting scheme. I think that adaptive weighting can be useful for more involved loss terms (with e.g. 4-5 terms). |
Hi! I have been exploring this issue. For all these adaptive weighting techniques, we want to be able to update `loss_weights` during training, so the callback would adapt the weights as training proceeds. For example, see lines 181 to 182 in 683682c.

So, to begin with, we would comment out these two lines and put them somewhere else, at the beginning of the `train` function. Then, with a few changes, we could define and use a callback that updates the weights; a standalone sketch of the idea follows. Do you agree, @lululxvi? Also, do you prefer that I move this discussion to a new issue? Or could you please re-open this one?
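A minimal standalone sketch of that idea, assuming the weights live in a non-trainable tf.compat.v1 variable (the names `losses_list` and `loss_weights_var` are illustrative, not DeepXDE internals):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Stand-ins for the individual loss terms (PDE residual, BCs, ICs, ...)
losses_list = [tf.constant(1.0), tf.constant(2.0), tf.constant(3.0)]

# Non-trainable variable holding the weights, updatable without recompiling
loss_weights_var = tf.Variable([1.0, 1.0, 1.0], trainable=False,
                               dtype=tf.float32)
total_loss = tf.reduce_sum(loss_weights_var * tf.stack(losses_list))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(total_loss))  # 6.0
    # A callback could assign new weights at any point during training:
    sess.run(loss_weights_var.assign([2.0, 1.0, 0.5]))
    print(sess.run(total_loss))  # 5.5
```

Storing the weights in a graph variable means a callback can rebalance them at run time without recompiling the model or re-initializing any variables.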
@pescap Yes, that sounds good. |
As a first step, I am trying to facilitate the update of `loss_weights`.
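For illustration, a hedged sketch of such a callback built on the variable idea above (`LossWeightUpdater`, the inverse-magnitude heuristic, and `loss_weights_var` are all hypothetical, not DeepXDE API):

```python
import numpy as np
import deepxde as dde


class LossWeightUpdater(dde.callbacks.Callback):
    """Hypothetical callback: every `period` epochs, rebalance the loss
    weights inversely to the current magnitude of each loss term and
    assign them to the graph variable, avoiding any recompilation."""

    def __init__(self, loss_weights_var, period=1000):
        super().__init__()
        self.loss_weights_var = loss_weights_var
        self.period = period
        self.epoch = 0

    def on_epoch_end(self):
        self.epoch += 1
        if self.epoch % self.period != 0:
            return
        losses = np.array(self.model.train_state.loss_train)
        new_weights = losses.sum() / np.maximum(losses, 1e-12)
        self.model.sess.run(self.loss_weights_var.assign(new_weights))
```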
Really looking forward to this implementation! |
Working on this in #1586 |
|
Hi! Thank you for proposing! We could start with the …
Yes, could you give me the Slack ID or URL? |
Can you please send an email to @lululxvi asking him to add you? |
Hi everyone,
I've read in other issues (e.g. #215 and #908) that adaptive loss weights are not high priority for DeepXDE, but I still want to test some approaches, as I see quite some potential for my current use case.
However, implementing this via a callback like the following does not really work for me so far (cf. error message below).

[Callback code and error message omitted]
I'm not an expert on core TensorFlow (especially not TF1), so if anyone could give me advice on what I'm doing wrong or how I can fix it, I'd really appreciate it!
Cheers,
Philipp
PS: There are some more comments in #331, but I don't think they apply to my problem, as I'm not interested in using gradients for the weights initially.