Adaptive lambdas #32

Merged (35 commits, Jul 26, 2023)
Conversation

nikiniki1 (Collaborator)
Implementation of adaptive lambdas and a dynamic loss function for boundary conditions.

Currently adaptive lambdas are not working due to PyTorch features.
Now adaptive lambdas are working
@SuperSashka requested a review from aminevdam June 21, 2023 13:01
- Fix for output for Fourier features
- Adaptive lambdas now work for systems
- 1D Navier-Stokes example
@SuperSashka left a comment (Member)

We need to work on the clarity of the code first. I haven't run all the examples yet.

return loss

-def casual_loss(self, lambda_bound: Union[int, float] = 10, tol: float = 0) -> torch.Tensor:
+def casual_loss(self, tol: float = 0) -> torch.Tensor:
@SuperSashka (Member):

Can you explain why it needs to be computed at all?


return loss

-def loss_evaluation(self, lambda_bound: Union[int, float] = 10, weak_form: Union[None, list] = None, tol=0) -> Union[default_loss, weak_loss, casual_loss]:
+def compute(self, tol=0) -> \
@SuperSashka (Member):

tol affects the penalty; could you also explain everywhere how exactly it affects it?
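
For context, a minimal sketch of how a causality parameter like tol typically enters such a loss (the function and variable names are illustrative, not the repository's actual internals): each time slice's residual is down-weighted by the accumulated residual of the earlier slices, and tol controls how sharp that penalty is.

```python
import torch

def causal_weighted_loss(residuals: torch.Tensor, tol: float = 0.0) -> torch.Tensor:
    """Illustrative causal weighting. `residuals` holds per-time-slice
    mean squared residuals, ordered by time. Each slice is weighted by
    exp(-tol * sum of the losses of all earlier slices), so later
    slices barely contribute until the earlier ones are resolved.
    tol = 0 recovers uniform weights (the plain mean); larger tol
    sharpens the causal penalty.
    """
    earlier = torch.cumsum(residuals, dim=0) - residuals  # exclusive prefix sum
    weights = torch.exp(-tol * earlier).detach()          # weights kept out of autograd
    return torch.mean(weights * residuals)
```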

self.boundary = Bounds(self.grid, self.prepared_bconds, self.model,
self.mode, weak_form)

def evaluate(self, iter: int, update_every_lambdas: Union[None, int], tol: float) -> torch.Tensor:
@SuperSashka (Member):

If this is exposed to the user, maybe it's better to use kwargs? .evaluate(-1, None, 0) is not very informative.
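
To illustrate the suggestion (the class below is a stub for the sake of a runnable example; only the signature is taken from the diff above):

```python
from typing import Union
import torch

class Solution:  # stub standing in for the real class, for illustration only
    def evaluate(self, iter: int, update_every_lambdas: Union[None, int],
                 tol: float) -> torch.Tensor:
        return torch.tensor(0.0)  # real loss computation omitted

solution = Solution()

# Positional call: hard to tell what -1, None and 0 mean.
loss = solution.evaluate(-1, None, 0)

# Keyword call: every value's role is explicit at the call site.
loss = solution.evaluate(iter=-1, update_every_lambdas=None, tol=0)
```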

nikiniki1 and others added 5 commits June 27, 2023 17:03
Small fixes for examples and docstrings
* Fix memory overflow and expand model validation

* Version update for actions

---------

Co-authored-by: Titov Roman <r.titov@itmo.ru>
Co-authored-by: SuperSashka <heretik.unlimited@gmail.com>
@SuperSashka removed the request for review from aminevdam July 12, 2023 11:22
@SuperSashka linked an issue Jul 12, 2023 that may be closed by this pull request
nikiniki1 and others added 17 commits July 13, 2023 12:44
NTK -> sensitivity analysis
Docstrings update
Updated project structure.
- new modules ('eval' for evaluating operator and boundaries, 'losses' for all losses).
Minor improvements to match the new structure.
@SuperSashka commented Jul 24, 2023 (Member)

What we have done so far: we fully revamped the adaptive lambdas routine so that lambdas are computed directly from the dispersion (variance) part using Sobol indices (one may refer to https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_ODE.py and https://github.com/ITMO-NSS-team/torch_DE_solver/blob/adaptive_lambdas_sobol/examples/adaptive_disp_wave_eq.py for examples with my experiments), rather than from the neural tangent kernel eigenvalue analogue. This was done because NTK does not work for anything except a single PDE; in the NTK case we would have left ODEs and systems out.
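
Roughly, the idea can be sketched as follows (a simplified illustration of a dispersion-based weighting, not the repository's actual code; the window-of-losses input, the boost direction, and the normalization are all assumptions):

```python
import torch

def dispersion_lambdas(loss_window: torch.Tensor) -> torch.Tensor:
    """Illustrative dispersion-based lambda update.

    loss_window: shape (window, n_terms), the recent values of each
    loss term (operator, boundary, initial, ...). Each term's share
    of the total dispersion plays the role of a first-order Sobol
    index, S_i = Var_i / sum(Var); here, terms contributing little
    variance are boosted so the optimizer does not ignore them.
    """
    variances = loss_window.var(dim=0)
    shares = variances / variances.sum()   # Sobol-like variance shares
    lambdas = 1.0 / (shares + 1e-8)        # assumed convention: boost quiet terms
    return lambdas / lambdas.mean()        # normalize to an average weight of 1
```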

Secondly, we reworked the loss: it is now computed in two forms, one weighted with lambdas for gradient descent and one normalized for the stopping criterion. Even though this pulls things apart a bit (namely, the training process is no longer directly tied to the stop criterion), it is a benefit for parameter unification.
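
A minimal sketch of the two-form computation (term names and the dict layout are illustrative): the weighted sum drives gradient descent, while its unweighted counterpart feeds the stop criterion, so the criterion's scale does not drift as lambdas adapt.

```python
import torch
from typing import Dict, Tuple

def two_face_loss(op_loss: torch.Tensor,
                  bnd_losses: Dict[str, torch.Tensor],
                  lambdas: Dict[str, float]) -> Tuple[torch.Tensor, torch.Tensor]:
    """Returns (training_loss, normalized_loss).

    training_loss: lambda-weighted sum, used for backpropagation.
    normalized_loss: the same terms with all weights set to 1,
    detached from the graph and used only for the stopping criterion,
    so it stays comparable across iterations and lambda updates.
    """
    training_loss = op_loss + sum(lambdas[k] * v for k, v in bnd_losses.items())
    normalized_loss = (op_loss + sum(bnd_losses.values())).detach()
    return training_loss, normalized_loss
```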

Additionally, we split Dirichlet and initial conditions (and went a step further, splitting Dirichlet, operator, and periodic conditions) in terms of lambdas, like the big frameworks do. Adaptive lambdas are split accordingly.
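
Conceptually, after the split each condition type carries its own weight; the keys and values below are made up, just to show the shape of the idea:

```python
# Hypothetical per-type lambdas after the split (keys and values are illustrative):
lambdas = {
    'operator': 1.0,
    'dirichlet': 100.0,
    'initial': 100.0,
    'periodic': 10.0,
}
```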

I hope we are making the last-second fixes and will pull the shiny new features in ASAP.

@SuperSashka left a comment (Member)

It seems that everything works as planned for the time being: old functionality is preserved, and the new one, adaptive lambdas, works as intended.

It should be noted that the stop criterion behavior has changed, so existing code may need to adjust how it works with the eps, patience and abs_loss parameters. A tutorial on solver tuning is on its way.
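
For readers adapting existing code, a generic sketch of how parameters like eps, patience and abs_loss usually interact in an early-stopping check; it mirrors the spirit of the change, not the solver's exact logic:

```python
from typing import List, Optional

def should_stop(loss_history: List[float], eps: float = 1e-5,
                patience: int = 5, abs_loss: Optional[float] = None) -> bool:
    """Stop when the loss has improved by less than eps for `patience`
    consecutive checks, or has dropped below the absolute threshold abs_loss."""
    if abs_loss is not None and loss_history and loss_history[-1] < abs_loss:
        return True
    if len(loss_history) <= patience:
        return False
    recent = loss_history[-(patience + 1):]
    improvements = [prev - cur for prev, cur in zip(recent, recent[1:])]
    return all(imp < eps for imp in improvements)
```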

@nikiniki1 merged commit a19e359 into main Jul 26, 2023
Successfully merging this pull request may close these issues: New layers and adaptive lambdas