WandbLogger does not mark uploaded model as 'artifact' #4903
Comments
@borisdayma mind having a look?
This is a great idea! We could upload artifacts at the end of the run or as the model trains. Finally, I think each artifact name should be related to the run id, so if we want to supersede a specific artifact we will need to use the same run id (knowing that W&B still lets you use any number of aliases). Let me know if you have any comments before I try to implement it.
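A rough sketch of that naming scheme (the project name, checkpoint path, and aliases are placeholders, not the eventual implementation):

```python
import wandb

# Sketch: name the artifact after the run id so re-logging it from the same run
# creates a new version of the same artifact instead of a new artifact.
run = wandb.init(project="my-project")  # placeholder project

artifact = wandb.Artifact(f"model-{run.id}", type="model")
artifact.add_file("checkpoints/last.ckpt")  # placeholder checkpoint path

# Aliases are free-form labels; "latest" and "best" are just examples.
run.log_artifact(artifact, aliases=["latest", "best"])
run.finish()
```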
Thanks for the response. I think following the checkpoint settings (e.g. preserving the best top-k) would be nice, as you mentioned in your comment. Then we can try the models in another script using the wandb API (e.g. automatic download).
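For example, the automatic download from another script could look roughly like this (entity, project, and run id are placeholders):

```python
import wandb

# Sketch: fetch the "best" version of a run's model artifact via the public API.
api = wandb.Api()
run_id = "1a2b3c4d"  # placeholder run id
artifact = api.artifact(f"my-entity/my-project/model-{run_id}:best")
checkpoint_dir = artifact.download()  # downloads the files and returns the local directory

# The checkpoint inside can then be loaded as usual, e.g.:
# MyLightningModule.load_from_checkpoint(f"{checkpoint_dir}/model.ckpt")
```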
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
Keep issue active
Hey @borisdayma, did you start working on it? Best,
I haven't had the time yet, but it's definitely on my TODO list.
I am now working on it, and I just realized that #5537 introduced a change in behavior at this line: models were supposed to be saved in the W&B run folder only when `log_model=True`. With artifacts, we can let PL save the files directly where it wants and upload models as artifacts separately.
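A minimal sketch of that separation (all names are placeholders, not the merged implementation): `ModelCheckpoint` saves wherever it is configured to, and the checkpoint is uploaded as an artifact in a separate step.

```python
import wandb
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my-project")  # placeholder project
checkpoint_callback = ModelCheckpoint(dirpath="checkpoints/", monitor="val_loss")

# ... Trainer(logger=wandb_logger, callbacks=[checkpoint_callback]).fit(...) ...

# After training, upload the best checkpoint as an artifact tied to the run id.
run = wandb_logger.experiment                           # underlying wandb run
artifact = wandb.Artifact(f"model-{run.id}", type="model")
artifact.add_file(checkpoint_callback.best_model_path)  # populated by ModelCheckpoint
run.log_artifact(artifact)
```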
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
The PR has now been merged, so this issue should be closed!
🐛 Bug
I'm using `WandbLogger` with the latest `pytorch-lightning==1.0.8`. It seems like the trained checkpoint is treated as a mere `file`, not a `model artifact`, even though I turned on `log_model=True`. It is much more convenient to use a `model artifact` from another script, so I hope that is done by `pytorch-lightning` automatically.
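A minimal sketch of the setup (the project name, monitored metric, and the commented-out fit call are placeholders):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my-project", log_model=True)  # placeholder project
checkpoint_callback = ModelCheckpoint(monitor="val_loss", save_top_k=1)

trainer = pl.Trainer(logger=wandb_logger, callbacks=[checkpoint_callback], max_epochs=10)
# trainer.fit(model, datamodule=dm)  # checkpoint shows up in W&B as a plain file, not a model artifact
```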
Environment