
Ensure optimizer state references are cleared #100282

Closed · wants to merge 2 commits

Conversation

@mlazos (Contributor) commented Apr 28, 2023
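As context for the fix, here is a minimal sketch of the kind of change the title describes: holding an optimizer (and hence its CUDA state tensors) in a long-lived cache via weak references instead of strong ones, so that dropping the optimizer actually frees its state. All names below are hypothetical illustrations, not the actual torch._dynamo code touched by this PR:

```python
from __future__ import annotations

import weakref

import torch


class CompiledOptimizerCache:
    """Hypothetical cache keyed by optimizer identity.

    Holding the optimizer via weakref.ref means the cache never pins
    the optimizer's state tensors (e.g. Adam's exp_avg / exp_avg_sq)
    in CUDA memory after the user drops the optimizer.
    """

    def __init__(self) -> None:
        self._entries: dict[int, weakref.ref] = {}

    def register(self, opt: torch.optim.Optimizer) -> None:
        key = id(opt)
        # The weakref callback fires when the optimizer is garbage
        # collected and clears our entry, so no stale reference lingers.
        self._entries[key] = weakref.ref(
            opt, lambda _ref, key=key: self._entries.pop(key, None)
        )

    def get(self, key: int) -> torch.optim.Optimizer | None:
        ref = self._entries.get(key)
        return ref() if ref is not None else None
```

Because the cache stores `weakref.ref(opt)` rather than `opt` itself, the last user-held reference going away lets Python collect the optimizer, at which point the CUDA caching allocator can reuse the memory backing its state tensors.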

@mlazos requested a review from @janeyx99 on April 28, 2023 at 21:27
@pytorch-bot (bot) commented Apr 28, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/100282

Note: Links to docs will display an error until the docs builds have been completed.

❗ 2 Active SEVs

There are 2 currently active SEVs. If your PR is affected, please view them below:

❌ 3 New Failures, 1 Unrelated Failure

As of commit 602ae0c:

NEW FAILURES - The following jobs have failed:

BROKEN TRUNK - The following job failed but was already present on the merge base d4bf76c:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@janeyx99 (Contributor) left a comment

Thanks for the quick turnaround here! This looks good from the optim side, but it would be good to make sure this is okay with the dynamo folks.

@mlazos (Contributor, Author) commented May 1, 2023

@pytorchbot merge -f "inductor torchbench failing on master"

@pytorchmergebot (Collaborator) commented

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.


Successfully merging this pull request may close these issues.

torch.compile holds onto CUDA memory unexpectedly with optimizers
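A rough reproduction sketch of the linked issue, assuming a CUDA device is available; the before/after behavior noted in the comments is inferred from the issue title, not measured output:

```python
import gc

import torch

model = torch.nn.Linear(2048, 2048, device="cuda")
opt = torch.optim.Adam(model.parameters())

@torch.compile
def step():
    opt.step()

# One forward/backward so Adam lazily allocates its per-parameter
# state (exp_avg, exp_avg_sq) on the GPU during the first step().
model(torch.randn(64, 2048, device="cuda")).sum().backward()
step()

before = torch.cuda.memory_allocated()
del opt          # drop the only user-held reference to the state
gc.collect()
after = torch.cuda.memory_allocated()

# Per the issue title, `after` stayed close to `before` prior to this
# PR because compiled artifacts still referenced the state tensors;
# with the references cleared, the state memory can be reclaimed.
print(f"allocated before del: {before}, after del: {after}")
```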