Eager pip install #13198
Conversation
Seems complete to me.
21cc7b4 to c2435ee (compare)
.github/workflows/ci_test-full.yml (outdated)
- pip install -r requirements/examples.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
- pip install -r requirements/test.txt --upgrade
+ pip install -r requirements/examples.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html -U --upgrade-strategy eager
+ pip install -r requirements/test.txt -U --upgrade-strategy eager
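For context on what the new flags change, here is a minimal sketch (using the same requirement files as this hunk): with pip's default `only-if-needed` strategy, dependencies that already satisfy the requirement are left alone, while `--upgrade-strategy eager` also bumps every transitive dependency to the newest version the resolver allows.

```bash
# Default strategy ("only-if-needed"): dependencies that already satisfy the
# requirement are left at their installed version.
pip install -r requirements/test.txt --upgrade

# Eager strategy: every dependency of the listed requirements is also upgraded
# to the newest version the resolver allows.
pip install -r requirements/test.txt -U --upgrade-strategy eager
```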
CI is failing because a test.txt dependency requires protobuf, and since we are marking this step eager, protobuf gets upgraded to the latest version, ignoring the pin in requirements.txt.
Not sure how we can resolve this problem. All requirements would need to be installed together instead of in separate steps so the resolution algorithm can find a working combination, as sketched below.
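A hedged sketch of that "install together" idea: passing all requirement files to a single pip invocation lets the resolver see the protobuf pin at the same time as the test and example extras, so an eager upgrade cannot silently override it (the file paths follow the layout used in this diff):

```bash
# Single resolver pass over all requirement files, so pins such as protobuf's
# are honoured even with an eager upgrade strategy.
pip install \
  -r requirements.txt \
  -r requirements/examples.txt \
  -r requirements/test.txt \
  --find-links https://download.pytorch.org/whl/cpu/torch_stable.html \
  -U --upgrade-strategy eager
```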
Closing, as I don't think we can fix the issue this way reliably. I think the best option now would be to have some sort of lock file, using a tool like Poetry.
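A rough sketch of the lock-file alternative, assuming the project were managed by Poetry with a pyproject.toml (the dependency group names below are hypothetical, not something that exists in the repo):

```bash
# Hypothetical Poetry workflow: resolve everything once and freeze it in
# poetry.lock, so CI installs exactly those versions instead of re-resolving
# per step.
poetry lock
poetry install --with test,examples   # group names are assumed for illustration
```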
@carmocca re-opened and set the eager mode only for the one CI job which shall be running the latest versions... I think that is the best place for us to be testing compatibility 🐰
But it needs to be done for all requirements, not just PyTorch. In fact, the last breakages came from dependabot upgrading DeepSpeed and NumPy. I don't see how it solves the issue described in #13198 (comment).
FYI, this should be passing after #13441.
Yes, this won't address installing requirements together, but it sets the proper upgrade strategy for testing that the bumped versions are correct/feasible, so let's do it step by step...
I would prefer to have a general solution that works for all requirements before starting to make changes. That will make any possible issues pop up.
But be aware that no other CI workflow requires running the latest versions; the others rather use a cache. For example, the Conda jobs have pinned PyTorch versions, and others like the Docker builds are fine with whatever version is available, not necessarily the latest... so I believe we should just update this CI job.
@@ -143,7 +147,7 @@ jobs:
      run: |
        # adjust versions according installed Torch version
        python ./requirements/pytorch/adjust-versions.py requirements/pytorch/examples.txt
-       pip install -r requirements/pytorch/examples.txt --find-links https://download.pytorch.org/whl/cpu/torch_stable.html --upgrade
+       pip install $PIP_FLAG -r requirements/pytorch/examples.txt --find-links "$TORCH_URL" --upgrade
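The `$PIP_FLAG` and `$TORCH_URL` variables are presumably defined elsewhere in the workflow; a hedged sketch of what this step could expand to (the values below are assumptions based on the earlier hunk, not confirmed by this diff):

```bash
# Assumed values; the actual workflow would define these as job-level env vars.
export TORCH_URL="https://download.pytorch.org/whl/cpu/torch_stable.html"
export PIP_FLAG="-U --upgrade-strategy eager"   # empty for the default strategy

python ./requirements/pytorch/adjust-versions.py requirements/pytorch/examples.txt
pip install $PIP_FLAG -r requirements/pytorch/examples.txt --find-links "$TORCH_URL" --upgrade
```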
So what we could do here is reinstall all of the above requirements, to reduce the chance that installing the example requirements slides previously installed versions to something different...
The point of this PR was that all installations would use the newest available version. For "oldest" jobs, there's nothing to do because they are pinned already to the min version:
What this PR tried to do is to always force eager upgrades. But this needs to be done for all dependencies. However, issues arise from installing different dependencies in different steps, since the resolution algorithm does not consider the requirements from a previous step. So I don't see how the changes in the PR now help. Another option would be to do max version pinning just as we do min version pinning, as sketched below.
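A sketch of the max-version-pinning idea under stated assumptions (the package names, bounds, and file name below are illustrative, not the project's actual pins): with both a lower and an upper bound, even an eager upgrade can only move inside a tested window.

```bash
# Illustrative requirements file with both min and max pins; file name is hypothetical.
cat > requirements/test-pinned.txt <<'EOF'
protobuf>=3.19.0, <3.20.0
numpy>=1.20.0, <1.23.0
deepspeed>=0.6.0, <0.7.0
EOF

# Even with eager upgrades, resolution stays inside the pinned window.
pip install -r requirements/test-pinned.txt -U --upgrade-strategy eager
```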
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. If you need further help, see our docs: https://pytorch-lightning.readthedocs.io/en/latest/generated/CONTRIBUTING.html#pull-request or ask the assistance of a core contributor here or on Slack. Thank you for your contributions.
This pull request is going to be closed. Please feel free to reopen it or create a new one from the current master.
What does this PR do?
Fixes the problem described in #13048 (comment)
Does your PR introduce any breaking changes? If yes, please list them.
None
Before submitting
PR review
cc @carmocca @akihironitta @Borda @tchaton @rohitgr7