No params passed to click #111
Here's a simple piece of code that should reproduce the error (at least, it did for me on my machine):

```python
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torch.utils.data import random_split
from torchvision.datasets import MNIST
from torchvision import transforms
import pytorch_lightning as pl
from clearml import Task
import click


class LitAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28 * 28, 64),
            nn.ReLU(),
            nn.Linear(64, 3))
        self.decoder = nn.Sequential(
            nn.Linear(3, 64),
            nn.ReLU(),
            nn.Linear(64, 28 * 28))

    def forward(self, x):
        embedding = self.encoder(x)
        return embedding

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return optimizer

    def training_step(self, train_batch, batch_idx):
        x, y = train_batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log('train_loss', loss)
        return loss

    def validation_step(self, val_batch, batch_idx):
        x, y = val_batch
        x = x.view(x.size(0), -1)
        z = self.encoder(x)
        x_hat = self.decoder(z)
        loss = F.mse_loss(x_hat, x)
        self.log('val_loss', loss)


@click.command()
def train():
    task = Task.init(project_name='examples', task_name='hello world')
    task.execute_remotely(queue_name='default')

    # data
    dataset = MNIST('', train=True, download=True, transform=transforms.ToTensor())
    mnist_train, mnist_val = random_split(dataset, [55000, 5000])
    train_loader = DataLoader(mnist_train, batch_size=32, num_workers=12)
    val_loader = DataLoader(mnist_val, batch_size=32, num_workers=12)

    # model
    model = LitAutoEncoder()

    # training
    trainer = pl.Trainer(gpus=1, precision=16)
    trainer.fit(model, train_loader, val_loader)


if __name__ == '__main__':
    train()
```

This is the simplest example you can do with
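For what it's worth, the reproduction above declares no options on the command; presumably the "no params passed" failure concerns commands that do declare some. A minimal sketch of such a command, with a hypothetical `--epochs` option that is not part of the original report:

```python
# Hypothetical illustration: a click command with one declared option.
# The option name and default are made up for the sketch.
import click


@click.command()
@click.option('--epochs', default=3, type=int, help='number of training epochs')
def train(epochs):
    # Locally, click parses the command line and fills in `epochs`;
    # the report above suggests these values arrive empty when the
    # task is re-executed through the agent.
    print(f'epochs = {epochs}')


if __name__ == '__main__':
    train()
```

Run locally, `python script.py --epochs 5` prints the parsed value as expected; the issue is about that value going missing on the remote side.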
By the way, I should be up to date with:
Hi @aurelien-m,
We'll keep you updated regarding the fix :)
Hey, thanks for getting back to me. Unfortunately our code base is pretty big and strongly relies on
Closing this as it was already fixed in released versions. Please reopen if required.
I'm getting the following error in my `clearml-agent`:

I tried to look into it and it seems that everything is empty:

Any ideas as to what could go wrong?