Using the LBFGS optimizer in PyTorch Lightning, the model does not converge as it does with native PyTorch + LBFGS #4083
Comments
do you have the code for native PyTorch + LBFGS for the same? |
this is the code, including MNIST and LBFGS, that works fine with native PyTorch:
|
You don't need to override optimizer_step... you're only doing it to pass in the second_order closure, but that's exactly what the default implementation does
|
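For reference, an override that only forwards the closure, mirroring what the default is described as doing above, would look roughly like this (a sketch only; the exact optimizer_step hook signature varies between Lightning versions, so the argument list here is an assumption):

def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                   optimizer_closure, on_tpu=False, using_native_amp=False,
                   using_lbfgs=False):
    # hypothetical override that adds nothing over the described default:
    # hand the closure to the optimizer so LBFGS can re-evaluate the loss
    # as many times as it needs within a single step
    optimizer.step(closure=optimizer_closure)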
@williamFalcon still should converge right?? even if the overridden method is doing the same update. Maybe a bug here if it's not converging in pl. will check this. |
@williamFalcon we modified the code by removing optimizer_step, however it does not help solve the issue. |
ok found something. Not sure if it's correct or not since I haven't used LBFGS before. I checked that optim.LBFGS calls the closure 20 times for each step, and in this example the native loop doesn't make any call outside of the closure. But PL calls an explicit training_step in addition to the closure, which obviously means it will be called 21 times plus that explicit call. These are my observations. Anyone with prior experience with the LBFGS optimizer can confirm the right way to do this. |
how many times does it get called with pytorch? LBFGS is a quasi-Newton method, which means it does not compute the Hessian directly but instead approximates it. I assume pytorch calls step multiple times to do this approximation? |
the given example calls it 20 times. |
the default value for the number of iterations is 20, based on the PyTorch docs: torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None) |
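A quick way to see this call count directly (a standalone sketch, not code from this issue; the toy problem and variable names are made up):

import torch
import torch.nn.functional as F

w = torch.nn.Parameter(torch.randn(10))
target = torch.randn(10)
optimizer = torch.optim.LBFGS([w], lr=0.01, max_iter=20)
calls = [0]  # mutable counter shared with the closure

def closure():
    calls[0] += 1
    optimizer.zero_grad()
    loss = F.mse_loss(w, target)
    loss.backward()
    return loss

optimizer.step(closure=closure)
# up to max_iter (20) evaluations, unless a tolerance stops the inner loop earlier
print("closure evaluations in one step:", calls[0])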
@williamFalcon We are in the process of developing code that requires the LBFGS optimizer, and I'd like to use the PyTorch Lightning platform for it. Do you think the LBFGS issue can be resolved any time soon in a later version? |
ok will check this if I get some time :) |
@williamFalcon it seems that the LBFGS optimizer in the latest version of PyTorch Lightning carries the same issue as the previous versions. Is there a way to work around this temporarily until the bug gets fixed? |
@Borda, @edenlightning, the LBFGS issue does not seem to be fixed in the latest version of PyTorch Lightning. Can we hope that this issue will be fixed in the near future? We started a project using PyTorch Lightning and got stuck because we are not able to use the LBFGS optimizer. If it is not fixed yet, would it be possible to expedite resolving this issue? |
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team! |
@williamFalcon @Borda @edenlightning Since this thread will be closed automatically within the next 48 hours, I decided to mention you guys with the hope that the bug gets fixed in a meaningful period. I also appreciate @justusschock for his efforts to fix the issue. Ignoring a bug will not fix it, and it dramatically stops the research activities of people who trusted lightning. Please help us with fixing the bug. |
@carmocca I am very thankful if you take a look at the discussion made here to see whether you can help us fix the issue. The LBFGS bug in lightning has dramatically impacted an important project that I am working on. |
Apologies for the delay! We try our best to take a look at every issue with the resources that we have. We bumped the priority for this one and will try to prioritize it in the next sprints! |
@edenlightning I greatly appreciate your help on this subject. |
As @justusschock added the tests in #4190, I confirmed this locally with the scripts below.

PL code example (originally from @peymanpoozesh):

import os
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
import pytorch_lightning as pl
warnings.filterwarnings("ignore")
pl.seed_everything(42)
class LightningMNISTClassifier(pl.LightningModule):
def __init__(self):
super(LightningMNISTClassifier, self).__init__()
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = torch.relu(x)
x = self.layer_2(x)
x = torch.relu(x)
x = self.layer_3(x)
x = torch.log_softmax(x, dim=1)
return x
def prepare_data(self):
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
self.mnist_train, self.mnist_val = random_split(
mnist_train, [55000, 5000], generator=torch.Generator().manual_seed(42)
)
def train_dataloader(self):
dl = DataLoader(self.mnist_train, batch_size=1024, num_workers=0)
return dl
def configure_optimizers(self):
# optimizer = optim.Adam(self.parameters(), lr=1e-3)
optimizer = optim.LBFGS(self.parameters(), lr=0.01, max_iter=20)
return optimizer
def training_step(self, train_batch, batch_idx):
x, y = train_batch
logits = self.forward(x)
loss = F.nll_loss(logits, y)
return {"loss": loss}
def training_step_end(self, outputs):
print("closure_loss:", outputs["loss"].item())
return outputs
def main():
model = LightningMNISTClassifier()
trainer = pl.Trainer(
max_epochs=30,
progress_bar_refresh_rate=0,
weights_summary=None,
# fast_dev_run=20,
)
trainer.fit(model)
if __name__ == "__main__":
    main()

native PyTorch code example (originally from @peymanpoozesh):

import os
import warnings
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
from pytorch_lightning import seed_everything
warnings.filterwarnings("ignore")
seed_everything(42)
class PytorchMNISTClassifier(nn.Module):
def __init__(self):
super(PytorchMNISTClassifier, self).__init__()
self.layer_1 = nn.Linear(28 * 28, 128)
self.layer_2 = nn.Linear(128, 256)
self.layer_3 = nn.Linear(256, 10)
def forward(self, x):
batch_size, channels, width, height = x.size()
x = x.view(batch_size, -1)
x = self.layer_1(x)
x = torch.relu(x)
x = self.layer_2(x)
x = torch.relu(x)
x = self.layer_3(x)
x = torch.log_softmax(x, dim=1)
return x
def main():
device = torch.device("cpu")
model = PytorchMNISTClassifier().to(device)
# optimizer=optim.Adam(model.parameters(),lr=1e-3)
optimizer = optim.LBFGS(model.parameters(), lr=0.01, max_iter=20)
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
mnist_train = MNIST(os.getcwd(), train=True, download=True, transform=transform)
mnist_train, mnist_val = random_split(
mnist_train, [55000, 5000], generator=torch.Generator().manual_seed(42)
)
dl = DataLoader(mnist_train, batch_size=1024, num_workers=0)
for epoch in range(30):
for i, (x, y) in enumerate(dl):
x = x.to(device)
y = y.to(device)
def closure():
logits = model(x)
optimizer.zero_grad()
loss = F.nll_loss(logits, y)
loss.backward(retain_graph=True)
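# note: retain_graph=True is not strictly needed here, since each closure call
# rebuilds the graph via model(x); a plain backward() would also work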
print("closure_loss:", loss.item())
return loss
loss_out = optimizer.step(closure=closure)
if __name__ == "__main__":
    main()

my env:

$ wget https://raw.githubusercontent.com/PyTorchLightning/pytorch-lightning/master/tests/collect_env_details.py
$ python collect_env_details.py
* CUDA:
- GPU:
- available: False
- version: None
* Packages:
- numpy: 1.19.5
- pyTorch_debug: False
- pyTorch_version: 1.7.1+cpu
- pytorch-lightning: 1.1.4
- tqdm: 4.56.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.8.5
- version: #1 SMP Debian 4.19.160-2 (2020-11-28)

I have no idea how I could investigate this further. @carmocca @rohitgr7 Could you help here if you have time...?

EDIT (Jan 28, 2021): Not sure how this helps us debug, but I realised that if we change the value of |
@peymanpoozesh @Bajo1994 Sorry for the delay. I haven't figured out why LBFGS behaves differently between Lightning and native PyTorch, but I found an easy workaround, so let me share it here. The workaround is to use manual optimization instead of the default automatic optimization (see my notebook linked below for the complete code using BoringModel):

class Model(pl.LightningModule):
    def __init__(self, ...):
        super().__init__()
        # take control of the optimization loop (and the closure) ourselves
        self.automatic_optimization = False
        ...

    def training_step(self, batch, batch_idx):
        optimizer = self.optimizers()

        def closure():
            output = self.layer(batch)
            loss = self.loss(batch, output)
            optimizer.zero_grad()
            self.manual_backward(loss)
            return loss

        optimizer.step(closure=closure)

See also: |
Here are the minimal code examples using BoringModel.

Lightning code:

import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader, Dataset
pl.seed_everything(42)
class RandomDataset(Dataset):
def __init__(self, size, num_samples):
self.len = num_samples
self.data = torch.randn(num_samples, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class BoringModel(pl.LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def loss(self, batch, prediction):
return torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
def training_step(self, batch, batch_idx):
output = self.layer(batch)
loss = self.loss(batch, output)
return {"loss": loss}
def training_step_end(self, training_step_outputs):
loss = training_step_outputs["loss"]
print("loss:", loss.item())
return training_step_outputs
def configure_optimizers(self):
# optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
optimizer = torch.optim.LBFGS(self.parameters(), lr=0.01, max_iter=20)
return optimizer
def main():
ds = RandomDataset(32, 100000)
dl = DataLoader(ds, batch_size=1024)
model = BoringModel()
trainer = pl.Trainer(
progress_bar_refresh_rate=0,
fast_dev_run=1,
)
trainer.fit(model, dl)
if __name__ == "__main__":
    main()

Pure PyTorch code:

import torch
import torch.nn as nn
from pytorch_lightning import seed_everything
from torch.utils.data import DataLoader, Dataset
seed_everything(42)
class RandomDataset(Dataset):
def __init__(self, size, num_samples):
self.len = num_samples
self.data = torch.randn(num_samples, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def main():
ds = RandomDataset(32, 100000)
dl = DataLoader(ds, batch_size=1024)
model = Model()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer = torch.optim.LBFGS(model.parameters(), lr=0.01, max_iter=20)
for epoch in range(3):
for i, x in enumerate(dl):
def closure():
prediction = model(x)
loss = torch.nn.functional.mse_loss(prediction, torch.ones_like(prediction))
optimizer.zero_grad() # removing this line causes the same bug as in Lightning script
loss.backward()
print("loss:", loss.item())
return loss
loss_out = optimizer.step(closure=closure)
if __name__ == '__main__':
main() |
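The effect of the zero_grad() placement noted in the code comment above is easy to verify in isolation; a minimal standalone sketch (not code from this thread):

import torch

# Without resetting gradients inside the closure, the gradients from LBFGS's
# repeated closure evaluations accumulate instead of being recomputed.
w = torch.nn.Parameter(torch.tensor([1.0]))

def closure(zero_first):
    if zero_first:
        w.grad = None
    loss = (2 * w).sum()  # d(loss)/dw = 2 on every evaluation
    loss.backward()
    return loss

for _ in range(3):
    closure(zero_first=False)
print("accumulated grad:", w.grad.item())  # 6.0 (grows with each call)

w.grad = None
for _ in range(3):
    closure(zero_first=True)
print("reset grad:", w.grad.item())  # 2.0 (constant per evaluation)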
@akihironitta Why doesn't |
@justusschock Fixed! (It was just for print debugging from another script because LightningOptimizer doesn't return the output of |
@akihironitta @carmocca I am very thankful for your great effort on this bug. I am looking forward to resuming my project as soon as you update the pl package. In my code, I'd like to switch between the LBFGS and Adam optimizers: use LBFGS when the loss is large and then switch to Adam. I hope switching between these two optimizers will be smooth in pl (I had difficulties switching between these two optimizers in native PyTorch). I will keep you posted if there is any problem. |
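Not an official recipe, but one possible way to do that switching is to stay in manual optimization and pick the optimizer per step based on the current loss. A rough sketch (the threshold, the toy model, and the exact manual-optimization calls are assumptions and may need adjusting for your Lightning version):

import pytorch_lightning as pl
import torch
import torch.nn.functional as F

class SwitchingModel(pl.LightningModule):
    def __init__(self, switch_threshold=0.5):  # hypothetical switch point
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.automatic_optimization = False
        self.switch_threshold = switch_threshold

    def configure_optimizers(self):
        lbfgs = torch.optim.LBFGS(self.parameters(), lr=0.01, max_iter=20)
        adam = torch.optim.Adam(self.parameters(), lr=1e-3)
        return [lbfgs, adam]

    def training_step(self, batch, batch_idx):
        lbfgs, adam = self.optimizers()

        def compute_loss():
            output = self.layer(batch)
            return F.mse_loss(output, torch.ones_like(output))

        # probe the current loss without gradients to decide which optimizer to use
        with torch.no_grad():
            current_loss = compute_loss()

        if current_loss > self.switch_threshold:
            # large loss: take an LBFGS step, letting it re-evaluate the closure
            def closure():
                lbfgs.zero_grad()
                loss = compute_loss()
                self.manual_backward(loss)
                return loss
            lbfgs.step(closure=closure)
        else:
            # small loss: take a plain Adam step
            adam.zero_grad()
            loss = compute_loss()
            self.manual_backward(loss)
            adam.step()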
🐛 Bug
Comparing the results of LBFGS + PyTorch Lightning to native PyTorch + LBFGS, PyTorch Lightning is not able to update the weights and the model is not converging. There are some issues to point out:
LBFGS + PyTorch Lightning has problems converging and the weights are not updating, as compared to Adam + PyTorch Lightning.
Code sample
Expected behavior
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:
Environment:
- Colab and PyCharm
- PyTorch version: 1.6.0 (CPU and GPU)
- pytorch-lightning==1.0.0rc3