[Fea] Add ReloBRalo #694

Merged 5 commits on Dec 13, 2023
3 changes: 2 additions & 1 deletion docs/index.md
@@ -26,8 +26,9 @@
| 微分方程 | [洛伦兹方程](./zh/examples/lorenz.md) | 数据驱动 | Transformer-Physx | 监督学习 | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
| 微分方程 | [若斯叻方程](./zh/examples/rossler.md) | 数据驱动 | Transformer-Physx | 监督学习 | [Data](https://github.com/zabaras/transformer-physx) | [Paper](https://arxiv.org/abs/2010.03957) |
| 算子学习 | [DeepONet](./zh/examples/deeponet.md) | 数据驱动 | MLP | 监督学习 | [Data](https://deepxde.readthedocs.io/en/latest/demos/operator/antiderivative_unaligned.html) | [Paper](https://export.arxiv.org/pdf/1910.03193.pdf) |
-| 微分方程 | 梯度增强的物理知识融合PDE求解<sup>coming soon</sup> | 机理驱动 | gPINN | 半监督学习 | - | [Paper](https://www.sciencedirect.com/science/article/abs/pii/S0045782522001438?via%3Dihub) |
+| 微分方程 | [梯度增强的物理知识融合 PDE 求解](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/gpinn/poisson_1d.py) | 机理驱动 | gPINN | 无监督学习 | - | [Paper](https://doi.org/10.1016/j.cma.2022.114823) |
| 积分方程 | [沃尔泰拉积分方程](./zh/examples/volterra_ide.md) | 机理驱动 | MLP | 无监督学习 | - | [Project](https://github.com/lululxvi/deepxde/blob/master/examples/pinn_forward/Volterra_IDE.py) |
| 微分方程 | [分数阶微分方程](https://github.com/PaddlePaddle/PaddleScience/blob/develop/examples/fpde/fractional_poisson_2d.py) | 机理驱动 | MLP | 无监督学习 | - | - |

<br>
<p align="center"><b>技术科学(AI for Technology)</b></p>
2 changes: 1 addition & 1 deletion docs/stylesheets/extra.css
@@ -7,7 +7,7 @@

.md-grid {
/* readable page width */
-  max-width: 70%;
+  max-width: 1440px;
}

.md-header__topic > .md-ellipsis {
3 changes: 2 additions & 1 deletion docs/zh/api/loss.md
@@ -23,8 +23,9 @@
handler: python
options:
members:
-- AGDA
- LossAggregator
- PCGrad
+- AGDA
+- Relobralo
show_root_heading: true
heading_level: 3
4 changes: 3 additions & 1 deletion ppsci/loss/mtl/__init__.py
@@ -17,11 +17,13 @@
from ppsci.loss.mtl.agda import AGDA
from ppsci.loss.mtl.base import LossAggregator
from ppsci.loss.mtl.pcgrad import PCGrad
+from ppsci.loss.mtl.relobralo import Relobralo

__all__ = [
-    "AGDA",
    "LossAggregator",
    "PCGrad",
+    "AGDA",
+    "Relobralo",
]


117 changes: 117 additions & 0 deletions ppsci/loss/mtl/relobralo.py
@@ -0,0 +1,117 @@
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

# http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations

from typing import List

import paddle
from paddle import nn


class Relobralo(nn.Layer):
r"""
**Re**lative **Lo**ss **B**alancing with **Ra**ndom **Lo**okback

[Multi-Objective Loss Balancing for Physics-Informed Deep Learning](https://arxiv.org/abs/2110.09813)

Args:
num_losses (int): Number of losses.
alpha (float, optional): Decay rate controlling how strongly past weights are remembered (the exponential-decay parameter in the paper). Defaults to 0.95.
beta (float, optional): Expected value of the Bernoulli "random lookback" variable $\rho$; $E[\rho](=\beta)$ should be close to 1. Defaults to 0.99.
tau (float, optional): Temperature of the softmax over loss ratios; $\tau=1.0$ gives a standard softmax, and the weights approach argmax as $\tau \to 0$. Defaults to 1.0.
eps (float, optional): $\epsilon$ added to denominators to avoid division by zero. Defaults to 1e-8.

Examples:
>>> import paddle
>>> from ppsci.loss import mtl
>>> model = paddle.nn.Linear(3, 4)
>>> loss_aggregator = mtl.Relobralo(2)
>>> for i in range(5):
... x1 = paddle.randn([8, 3])
... x2 = paddle.randn([8, 3])
... y1 = model(x1)
... y2 = model(x2)
... loss1 = paddle.sum(y1)
... loss2 = paddle.sum((y2 - 2) ** 2)
... loss_aggregator([loss1, loss2]).backward()
"""

def __init__(
self,
num_losses: int,
alpha: float = 0.95,
beta: float = 0.99,
tau: float = 1.0,
eps: float = 1e-8,
) -> None:
super().__init__()
self.step = 0
self.num_losses: int = num_losses
self.alpha: float = alpha
self.beta: float = beta
self.tau: float = tau
self.eps: float = eps
self.register_buffer("losses_init", paddle.zeros([self.num_losses]))
self.register_buffer("losses_prev", paddle.zeros([self.num_losses]))
self.register_buffer("lmbda", paddle.ones([self.num_losses]))

def _softmax(self, vec: paddle.Tensor) -> paddle.Tensor:
# numerically stable softmax: shift by the max before exponentiating
exp_shifted = paddle.exp(vec - vec.max())
return exp_shifted / exp_shifted.sum()

def _compute_bal(
self, losses_vec1: paddle.Tensor, losses_vec2: paddle.Tensor
) -> paddle.Tensor:
return self.num_losses * (
self._softmax(losses_vec1 / (self.tau * losses_vec2 + self.eps))
)

def __call__(self, losses: List[paddle.Tensor], step: int = 0) -> "Relobralo":
self.step = step
losses_stacked = paddle.stack(losses) # [num_losses, ]

if self.step == 0:
self.loss = losses_stacked.sum()
with paddle.no_grad():
paddle.assign(losses_stacked.detach(), self.losses_init)
else:
with paddle.no_grad():
# 1. update lambda_hist
rho = paddle.bernoulli(paddle.to_tensor(self.beta))
lmbda_hist = rho * self.lmbda + (1 - rho) * self._compute_bal(
losses_stacked, self.losses_init
)

# 2. update lambda
paddle.assign(
self.alpha * lmbda_hist
+ (1 - self.alpha)
* self._compute_bal(losses_stacked, self.losses_prev),
self.lmbda,
)

# 3. compute reweighted total loss with lambda
self.loss = (losses_stacked * self.lmbda).sum()

# update losses_prev at the end of each step
with paddle.no_grad():
paddle.assign(losses_stacked.detach(), self.losses_prev)
return self

def backward(self) -> None:
self.loss.backward()
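For readers without Paddle at hand, the weight update that `__call__` performs after the first step can be sketched in plain Python. The helper names `softmax` and `relobralo_weights` are hypothetical (the real class keeps `lmbda`, `losses_prev`, and `losses_init` as Paddle buffers and handles the `step == 0` initialization separately); this is a minimal sketch of the same arithmetic, not the library API.

```python
import math
import random

def softmax(xs):
    # numerically stable softmax: shift by the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def relobralo_weights(lmbda, losses, losses_prev, losses_init,
                      alpha=0.95, beta=0.99, tau=1.0, eps=1e-8):
    """One ReloBRalo update of the per-loss weights (hypothetical helper).

    Mirrors Relobralo.__call__ above: with probability beta the historical
    weights are kept ("random lookback"), otherwise they are re-derived
    from the initial losses; the result is then blended with weights
    derived from the previous step's losses.
    """
    n = len(losses)

    def bal(cur, ref):
        # n * softmax of relative losses, as in _compute_bal
        return [n * w for w in softmax([c / (tau * r + eps)
                                        for c, r in zip(cur, ref)])]

    rho = 1.0 if random.random() < beta else 0.0  # Bernoulli lookback draw
    lmbda_hist = [rho * l + (1 - rho) * b
                  for l, b in zip(lmbda, bal(losses, losses_init))]
    return [alpha * h + (1 - alpha) * b
            for h, b in zip(lmbda_hist, bal(losses, losses_prev))]
```

Because each `bal` term sums to `n` and the updates are convex combinations, the weights always sum to the number of losses, so the overall scale of the reweighted total loss is preserved while relatively larger losses receive larger weights.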