Adaptive lambdas #32
Conversation
Currently adaptive lambdas are not working due to pytorch features.
Now adaptive lambdas are working
- Fix for output for Fourier features
- Adaptive lambdas now work for systems
- 1D Navier-Stokes example
The clarity of the code needs work first. I haven't run all the examples yet.
tedeous/metrics.py (outdated)

          return loss

-    def casual_loss(self, lambda_bound: Union[int, float] = 10, tol: float = 0) -> torch.Tensor:
+    def casual_loss(self, tol: float = 0) -> torch.Tensor:
Can you explain why it needs to be computed at all?
tedeous/metrics.py (outdated)

          return loss

-    def loss_evaluation(self, lambda_bound: Union[int, float] = 10, weak_form: Union[None, list] = None, tol=0) -> Union[default_loss, weak_loss, casual_loss]:
+    def compute(self, tol=0) -> \
tol affects the penalty; could you likewise explain everywhere how exactly it affects it?
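For readers following along, here is a minimal sketch of how a tolerance parameter typically enters a loss of this kind, in the style of causal training for PINNs (Wang et al.). Whether `casual_loss` in tedeous uses exactly this weighting is an assumption, and `residual_per_slice` is a hypothetical name:

```python
import torch

def causal_loss(residual_per_slice: torch.Tensor, tol: float = 0.0) -> torch.Tensor:
    """Weight each time slice by the residual accumulated before it, so later
    slices are penalized only once earlier ones are resolved. A larger tol
    suppresses later slices more aggressively; tol = 0 reduces to a plain mean."""
    cum = torch.cumsum(residual_per_slice, dim=0)
    # Slice i is weighted by the loss accumulated strictly before it.
    cum = torch.cat([torch.zeros(1, device=cum.device), cum[:-1]])
    weights = torch.exp(-tol * cum).detach()  # detached: weights are not trained
    return (weights * residual_per_slice).mean()
```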
tedeous/metrics.py (outdated)

          self.boundary = Bounds(self.grid, self.prepared_bconds, self.model,
                                 self.mode, weak_form)

+    def evaluate(self, iter: int, update_every_lambdas: Union[None, int], tol: float) -> torch.Tensor:
If this is exposed to the user, maybe it would be better to use kwargs? `.evaluate(-1, None, 0)` is not very informative.
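A minimal sketch of what this suggestion amounts to: making the parameters keyword-only with defaults forces self-documenting call sites. The class name and default values here are illustrative assumptions, not the actual tedeous API:

```python
from typing import Optional
import torch

class Solution:
    # The bare * makes all following parameters keyword-only.
    def evaluate(self, *,
                 iteration: int = -1,
                 update_every_lambdas: Optional[int] = None,
                 tol: float = 0.0) -> torch.Tensor:
        ...  # loss assembly elided

# The call site now reads as documentation:
#   sln.evaluate(iteration=-1, update_every_lambdas=None, tol=0.0)
# instead of the opaque sln.evaluate(-1, None, 0)
```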
Small fixes for examples and docstrings
* Fix memory overflow and expand model validation
* Version update for actions

Co-authored-by: Titov Roman <r.titov@itmo.ru>
Co-authored-by: SuperSashka <heretik.unlimited@gmail.com>
NTK -> sensitivity analysis
Docstrings update
Updated project structure: new modules ('eval' for evaluating the operator and boundaries, 'losses' for all losses). Minor improvements to go along with the new structure.
Added normalized loss as stop criterion
What we have done so far: we fully revamped the adaptive lambdas routine so that it is computed directly from the dispersion (variance) part using Sobol indices (one may refer to https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_ODE.py and https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_wave_eq.py for examples with my experiments), rather than the neural tangent kernel eigenvalue analogue. This was done because NTK does not work for anything except a single PDE - in the NTK case we would have left ODEs and systems out. Secondly, we reworked the loss - it is now computed in two forms: one with lambdas for gradient descent and one normalized for the stopping criterion. Even though this decouples things a bit - namely, the training process is no longer directly tied to the stop criterion - it benefits parameter unification. Additionally, we split the Dirichlet and initial conditions in terms of lambdas (and went a step further, splitting Dirichlet, operator, and periodic conditions), as the big projects do. Adaptive lambdas are split accordingly. I hope we are making the last-second fixes and will pull the new shiny features ASAP.
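For illustration, here is a self-contained sketch of the first-order Sobol estimator that this kind of dispersion-based rebalancing relies on (Saltelli 2010 form). This is not the repository's actual implementation; the function names and the way lambdas are normalized are assumptions:

```python
import torch

def sobol_first_order(f, n_samples: int, dim: int) -> torch.Tensor:
    """Estimate first-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y)
    of f over the unit hypercube, using the Saltelli (2010) estimator."""
    a = torch.rand(n_samples, dim)
    b = torch.rand(n_samples, dim)
    f_a, f_b = f(a), f(b)
    var_y = torch.cat([f_a, f_b]).var()
    s = torch.empty(dim)
    for i in range(dim):
        ab = a.clone()
        ab[:, i] = b[:, i]                           # A with i-th column from B
        s[i] = (f_b * (f(ab) - f_a)).mean() / var_y  # Var(E[Y|X_i]) / Var(Y)
    return s.clamp(min=0.0)

# Hypothetical use: if f measures one loss term's response to perturbations,
# the indices can be renormalized into lambda weights for that term.
def lambdas_from_indices(indices: torch.Tensor) -> torch.Tensor:
    return indices / indices.sum() * indices.numel()
```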
It seems that everything works as planned for the time being: the old functionality is preserved, and the new one - adaptive lambdas - works as intended.
It should be noted that the behavior of the stop criterion has changed, so existing code may need to adjust how it works with the eps, patience, and abs_loss parameters. A tutorial on solver tuning is on its way.
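To make the parameter change concrete, here is a hedged sketch of how eps, patience, and abs_loss typically interact in a stop check running on the normalized loss; the exact semantics inside tedeous may differ:

```python
from typing import Optional

class StopCriterion:
    def __init__(self, eps: float = 1e-5, patience: int = 5,
                 abs_loss: Optional[float] = None):
        self.eps, self.patience, self.abs_loss = eps, patience, abs_loss
        self.best = float('inf')
        self.stale = 0

    def should_stop(self, normalized_loss: float) -> bool:
        if self.abs_loss is not None and normalized_loss < self.abs_loss:
            return True                                  # absolute target hit
        if self.best - normalized_loss > self.eps:
            self.best, self.stale = normalized_loss, 0   # real progress made
        else:
            self.stale += 1                              # no progress this check
        return self.stale >= self.patience               # patience exhausted
```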
Implementation of adaptive lambdas and a dynamic loss function for boundary conditions.