
New Resolver: Rollout, Feedback Loops and Development Flow #6536

Closed
pradyunsg opened this issue May 25, 2019 · 103 comments
Labels
C: dependency resolution (About choosing which dependencies to install)
type: maintenance (Related to Development and Maintenance Processes)

Comments

@pradyunsg
Member

I've been thinking a bit about #988 (duh!) -- specifically how to roll it out so as to minimize breakage and maximize the opportunity to get useful feedback from users.

Filing this issue now that I finally have both thumbs + time at hand to do so. Obviously, all of what follows is up for discussion. :)


My current plan for rolling out the new resolver is based on exposing it behind a flag. The flow would be to leave it undocumented initially and add big fat warnings on use of the flag. Once it is less experimental and more beta-ish, we can start inviting users to play with the new resolver. This would involve CTAs asking users to try it out and provide feedback. This information might also be printed when running with the flag.

In terms of feedback management, I am thinking of requesting feedback on a different repository's issue tracker. The reasoning behind putting issues on a different issue tracker is to minimize noise here + allow more focused discussions/investigation. I'd bubble up anything that's more than a "bug in the resolution" to the main issue tracker (this one).

In terms of transitioning, I think once there's enough confidence in the new resolution logic, we can look into how we want to handle the transition. Having put this behind a flag, we'll have 2 options -- directly switch over in a release or "stabilize" the new resolver and do a (maybe multi-release?) "transition period". I do think that we can do the transition planning later, when we have a better understanding of the exact trade-offs involved.

In terms of git/GitHub, this is probably the first "experimental" feature implementation within pip. FWIW, I'm planning to do experiments etc. on my fork and regularly merge progress to pip's main repository itself (solely code, into pip._internal.resolution). I don't want to be noisy on the main repository, but I do want to keep master in sync with work on this.


Note that I'm putting #5051 as a blocker for this work because of how painful dealing with build logic was when building the prototype.

@pradyunsg pradyunsg added the C: dependency resolution and type: maintenance labels on May 25, 2019
@cjerdonek
Member

I don't know how you have it planned out, but one comment is that I would encourage you to try to share code as much as possible between the new code and the current code, and refactor the current code as you're working to allow more sharing between the new and current code paths.

One reason is that if you're sharing more code, there will be less chance of breakage when you're toggling the new behavior off and on, because you'll be exercising that shared code in both states and you won't have as many potential differences in behavior to contend with.

@pfmoore
Member

pfmoore commented May 25, 2019

This would involve CTAs to users for asking them to try it out and provide feedback

Our track record on getting advance feedback on new features has been pretty bad. We've tried beta releases, releasing new features with "opt out" flags that people can use if they hit issues, big publicity drives for breaking changes, and none of them seem to have worked.

My personal feeling is that "make it available and ask for feedback" is an interesting variation on what we've previously tried, but ultimately it won't make much difference. Too many people use the latest pip with default options in their automated build pipelines, and don't test before moving to a new pip version (we saw this with PEP 517).

I wonder - could we get a PSF grant to get resources to either do a big "real world" testing exercise for this feature, or (better still) develop a testing infrastructure for us? Such a project could include a call for projects to let us know their workflows and configurations, so that we can set up testing paths that ensure that new pip versions don't break them. Or even just use a grant to get someone experienced in the communications aspect of getting beta testers for new features to help us set up a better user testing programme?

In terms of git/GitHub, this is probably the first "experimental" feature implementation within pip

I'm not 100% sure what you mean by that. We've certainly had new features in the past that have been added while the "old way" was still present. We've not tended to leave them "off by default, enable to try them out", if that's what you mean, but that's mostly because we've never found any good way to get feedback (see above).

@pradyunsg
Member Author

pradyunsg commented May 26, 2019

I spent ~60 minutes (re-re-re-re-re-)writing this one post, so now I will go take a look at places in New York! If you don't see a quick response from me, it's because I'll be in tourist mode.


I would encourage you to try to share code as much as possible between the new code and the current code, and refactor the current code as you're working to allow more sharing between the new and current code paths.

Definitely! This is 80% of why I'm putting #5051 ahead of this -- I intend to pay down a lot of the technical debt we've accumulated in our build logic so that it becomes easier to reuse (all of?) it. A bunch of the code will have to be 🔥 and I agree that the rest should definitely be reused as much as reasonable.

We've not tended to leave them "off by default, enable to try them out", if that's what you mean

Yep, indeed. I'm also hinting at development flow here -- IMO it would be okay to merge empty infrastructure (classes with a bunch of methods that just raise NotImplementedError(), to be fleshed out in subsequent PRs) or implementations that don't cover all the cases (half-baked) into the master branch, as long as that's only used behind a flag that is explicitly noted as "experimental/alpha".
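The development flow described here -- merging stub infrastructure into master, gated behind an explicitly experimental flag -- might look roughly like this (a hypothetical sketch; none of these class or function names are pip's actual internals):

```python
class LegacyResolver:
    """Stands in for the existing, default code path."""

    def resolve(self, requirements):
        return list(requirements)


class NewResolver:
    """Skeleton merged early; methods get fleshed out in follow-up PRs."""

    def resolve(self, requirements):
        raise NotImplementedError("new resolver is experimental and unfinished")


def get_resolver(experimental=False):
    # The default code path is untouched; the half-baked implementation is
    # only reachable behind the explicitly experimental/alpha flag.
    if experimental:
        print("WARNING: using the experimental resolver; expect breakage!")
        return NewResolver()
    return LegacyResolver()
```

The point of the sketch is that the stubs can land on master without affecting anyone who doesn't pass the flag.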

re: feedback

I'm young, dumb and optimistic -- I want to make this rollout an opt-in, to get proactive feedback and act on it. By "proactive", I mean from folks who are willing to take out some extra time to try out alpha/beta functionality and inform us about how it is. I think if we make enough noise and strategically target/reach out to people, we can get good "proactive" feedback from folks who have the time and energy to try out new functionality to help iron out the details/issues.

Looking at our recent "major" changes, I think most of the feedback we received was reactive -- from users discovering issues with their workflows when things broke, and then reaching out to inform us. A lot of them may not have had the time to help iron out the details of the new functionality, which causes a lot of friction. These changes also cost us a lot of our "churn budget" [1], which I don't want to spend more of, since Python Packaging doesn't really have much left anyway [2].

FWIW, I plan to borrow some ideas from the PyPI launch, like making blog posts at fairly visible locations (i.e. not my personal blog), possibly going on podcasts, well-timed actionable emails, etc. I'm also looking for more prior art/avenues to communicate via. One of the (many!) things I learnt at PyCon was that there are channels we don't use that will help spread information, but they won't seek us out to check whether we have any to spread.

To be clear, I'm not criticizing the rollout approach we took for PEP 517; I think it's going well, especially given the fact that we're all volunteers. I'm trying to see what we can learn, and find actionable items to avoid the problems we had. Most of these items do involve more work from the maintainers, and the main reason I'm even spending all this time thinking about this is that I view it as a fun learning exercise in how to do change management.

re: grants

Yep, I think we can definitely use a grant/more experienced person to help us figure out the communication, roll-outs and testing infrastructure. That does, however, need someone to do the grant-writing work and to figure out more concrete plans than I can make right now, since I can't guarantee a stable number of hours per week.

FWIW, PSF has an ongoing contract to help figure out PyPA/Packaging-related communication with Changeset Consulting, so maybe we can leverage that?


I'm intentionally not @-mentioning people since this is fairly early in the planning stage to be adding more people to the conversation.

Footnotes:

  1. A really nice term that @ pganssle used that I'm definitely going to use.
  2. This is why I've put Deprecate pip, pipX, and pipX.Y #3164 on the back burner, despite having an implementation of the "pip-cli" package proposed there and reasonable consensus on what we want the rollout to look like.

@pfmoore
Member

pfmoore commented May 26, 2019

I'm young, dumb and optimistic

:-) And I'm sometimes too old, weary and cynical. Let's go with your philosophy, it sounds much better :-)

@cjerdonek
Member

Definitely! This is 80% of why I'm putting #5051 ahead of this -- I intend to pay down a lot of the technical debt we've accumulated in our build logic so that it becomes easier to reuse (all of?) it.

Great!

@brainwane
Contributor

From IRC just now:

[sumanah] pradyunsg: is there anything we the pip & packaging community can do to help you get more work done faster on the resolver?
....
[pradyunsg] Actually, right now, inputs on #6536 would probably help me figure out how to approach the work / get feedback from people etc.
....
[sumanah] pradyunsg: re: New Resolver: Rollout, Feedback Loops and Development Flow #6536 -- the input you want is something like: is the feature flag approach a good idea? is it a good idea to get feedback via some mechanism other than the pip GitHub issues? is it a good idea to get a grant or similar to get realworld manual testing & robust testing infrastructure built, and/or proactive comms?
...
[pradyunsg] Yep -- whether the ideas I'm suggesting are good. Also any additional ideas/approaches/thoughts that might help the rollout + feedback be smoother would be awesome.

So:

Is the feature flag approach a good idea? Yes.

Is it a good idea to get feedback via some mechanism other than the pip GitHub issues? Yes. We should find automated ways to accept less structured bug reports from less expert users.

Would more robust testing infrastructure help? Yes, a lot, and this is someplace our sponsors might be able to help us out.

Could Changeset (me), under the existing contract with PSF to help with PyPA coordination/communications, help pip with proactive communications to get us more systematic realworld manual testing? Assuming that I have hours remaining in my contract by the time we want to start this rollout, yes.

is it a good idea to get a grant or similar to get more help with user experience, communications/publicity, and testing? Yes. The PSF grants would potentially be of interest, as would NLNet grants (for requests under 30,000 euros), potentially the Chan Zuckerberg essential open source software for science grant, and Mozilla's MOSS. The Packaging WG can be the applicant of record. If @pradyunsg or @pfmoore wants to give a "yeah that sounds interesting" nod, I can start investigating those possibilities with the WG.

@pfmoore
Member

pfmoore commented Jun 13, 2019

If @pradyunsg or @pfmoore wants to give a "yeah that sounds interesting" nod,

It definitely sounds interesting to me :-)

@pradyunsg
Member Author

@pradyunsg or @pfmoore wants to give a "yeah that sounds interesting" nod

nods yeah that sounds interesting

@pradyunsg
Member Author

Would more robust testing infrastructure help? Yes, a lot, and this is someplace our sponsors might be able to help us out.

@brainwane Also relevant here is https://github.com/pypa/integration-test. I think getting this set up is another potential area for funding -- we should add this to https://wiki.python.org/psf/Fundable%20Packaging%20Improvements.

@brainwane
Contributor

OK! I've started talking with the PSF and with the Chan Zuckerberg Initiative folks about applying for a CZI grant via the Packaging Working Group. I've added some details to the Fundable Packaging Improvements page about why the new pip resolver's important, and added the integration-test project to that list. And I've started gathering names of user experience experts who have the capacity to research our complicated all-on-the-command-line package distribution/installation toolchain, talk with users to understand their mental model of what's happening and what ought to happen, and advise maintainers.

If we get money via grants from MOSS, CZI, or NLNET, I think we'd get the money ... October at the earliest, probably. A grant directly from the PSF would be faster probably but "Our current focus is Python workshops, conferences (esp. for financial aid), and Python diversity/inclusivity efforts."

@techalchemy
Member

One consideration is that I know Brett & the folks over on the steering council are talking about investing in project management and looking into having some sort of paid resources for managing these projects (triage, project management, etc) and they are talking with the PSF directly. It may be worth reaching out and finding out what they are doing or thinking, since I heard some talks of long term sustainability and it'd be a good thing to be involved in those.

Feature flags are good, opt-ins are good. One thing you might consider is whether you could randomly prompt users to try out the resolver (like, very very very infrequently and only for an install at a time, i.e. not forcing them to turn it on permanently). Then you could indicate how the resolver was helpful (e.g. what did it do for them? what conflicts did it encounter and resolve?)

People coming from javascript or rust for example will also expect a lockfile of some kind, so that may be something to consider...

Sorry to jump in, glad to see this moving ahead!

@jriddy

jriddy commented Jun 27, 2019

My personal feeling is that "make it available and ask for feedback" is an interesting variation on what we've previously tried, but ultimately it won't make much difference. Too many people use the latest pip with default options in their automated build pipelines, and don't test before moving to a new pip version (we saw this with PEP 517).

As one of the people who got bitten by some PEP 517 issues for this very reason, I'd actually love to see an opt-in way of testing things out. But I only know about this kind of stuff because I subscribed to all the Python packaging news sources I could after the --no-use-pep517 flag issue. What I'm saying is that spreading this kind of news is hard, and that's probably why feedback is hard to get.

I think more people would be interested in this if the information could be disseminated better. Is that what the resources you are seeking would allow for?

@chrish42

chrish42 commented Jul 5, 2019

To continue on what jriddy is saying, I also feel it'll be really hard to get people to test various feature flags if they have to know about them, make changes to their CI setup for each new flag, etc.

What would seem much more doable, however, is if there is only one feature flag to know about, to test "what's coming up next" in terms of changes that need to be tested. Then people and companies could setup their CI to run that also (without failing builds for errors). I'm thinking of something similar to Rust, where these kinds of changes bake in the "beta" channel of the toolchain, and it's easy to setup another CI channel to run things on the beta toolchain, and send errors to someone.

The key thing is, this setup needs to be learned about and done only once, instead of having to continuously learn about new individual feature flags, modify CI setups or test them manually.

@jriddy

jriddy commented Jul 5, 2019

What would seem much more doable, however, is if there is only one feature flag to know about,

In a sense, doesn't this already exist in the form of --pre? Could the beta release channel for pip just be a matter of running pip install --upgrade --pre pip?

@pradyunsg
Member Author

Sorry to jump in, glad to see this moving ahead!

@techalchemy please, of all people, you definitely don't have to be sorry for pitching in this discussion.

@pradyunsg
Member Author

pradyunsg commented Jul 6, 2019

Is that what the resources you are seeking would allow for?

To an extent, yes.

reg: beta releases/"channel" for pip

Thanks for chiming in @jriddy and @chrish42. While I think that's definitely a useful/important conversation to have, I also feel it's slightly OT for this issue. Nonetheless, I'll respond here once; if we want to discuss this more, let's open a new issue.

We've tried that in the past -- most recently with pip 10 -- but it hasn't worked out well. I am slightly skeptical of how well that might work going forward too, but I can also imagine that some changes to our process might result in this working smoothly for us. Maybe we could do a "beta only" set of features or something? I'd imagined -X all as a syntax for that in #5727. Maybe we could pick that up as a part of this rollout plan? Idk. We'll need to invest time and energy to figure this out. :)

@msarahan

As mentioned in pypa/packaging-problems#25 (comment), I think it's important to have a consolidated explanation of how a solver changes the pip experience. Lots of people will be frustrated by the shift to a more rigid system (even though things should be more reliable overall, they'll get blocked in places where they are not currently getting blocked).

Having a central explanation of how things have changed and why it is a good change will make responding to those angry people much simpler. Post a link and see if they have any further questions.

The prerelease is a good idea. In conda, we have a prerelease channel, conda-canary. We encourage people to set up a CI job to run against canary in a way that helps them see if conda changes are going to break them. Ideally they let us know before we release that version. That channel has been a pretty dismal failure. The only time people really seem to use it is when they want to get the newest release to fix some bug that they are struggling with. We do not get many reports from our intended early adopters. I still think the prerelease is a good idea, because when a release goes poorly and people are angry with you for breaking their 700 managed nodes, you can say "well, it was available for a week before we released it. Why aren't you testing these things before you roll them out to 700 nodes?" You are giving people an opportunity to make things work better. Help them realize that passing on that opportunity means more pain for them down the line. It's a worthwhile investment for them, and if they do it as part of their CI, it costs them no time aside from setup.

Regarding the flag: I think it's better to have a config option (perhaps in addition to a flag). I would not want to pass a flag all the time. I'm not sure if pip has this ability - maybe you tell people who want a more permanent switch to use the corresponding env var?

@pradyunsg
Member Author

Regarding the flag:

pip's CLI options automatically get mapped to a configuration file option and an environment variable, with the appropriate names.
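As a rough illustration of that mapping (this sketch encodes pip's documented naming convention, not its internal code): a long option `--foo-bar` corresponds to the environment variable `PIP_FOO_BAR` and a `foo-bar` key in pip's configuration file.

```python
def pip_env_var(option: str) -> str:
    """Map a long CLI option to pip's corresponding environment variable.

    pip reads PIP_<NAME>, where <NAME> is the option name with leading
    dashes dropped, internal dashes turned into underscores, and the
    result upper-cased. (Sketch of the documented convention only.)
    """
    return "PIP_" + option.lstrip("-").replace("-", "_").upper()


# e.g. --no-cache-dir can also be enabled via PIP_NO_CACHE_DIR=1, or via
# a "no-cache-dir = true" entry in pip.conf / pip.ini.
print(pip_env_var("--no-cache-dir"))  # PIP_NO_CACHE_DIR
```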

@pradyunsg
Member Author

@msarahan Thanks for chiming in, much appreciated! :)

@ncoghlan
Member

Regarding the "let me do what I want" option to ignore broken dependencies, I think it would be desirable to structure the feature flag such that it can also serve as the opt out after the resolver gets turned on by default (for example, start with --avoid-conflicts as an opt-in, eventually move to --no-avoid-conflicts as an opt-out, but accept both options from the start)

You'll also want to consider how --ignore-installed interacts with the solver - when it is passed, you should probably ignore all the requirements for already installed packages.

Beyond that, handling things as smaller refactoring patches to make integration of the resolver easier is an excellent way to go (that's the approach that made the new configuration API for CPython possible: a lot of private refactoring that was eventually stable enough to make public)

@pradyunsg
Member Author

@ncoghlan What does "opting out" of the resolver mean? Completely avoiding dependency resolution (and hence the resolver) is --no-deps. I understand that there's a need for an "ignore version conflicts on this package" or something along those lines.

Personally, I don't see any point in keeping the "keep first seen" resolution logic for longer than a transition period to a new resolver.

However, if there are use cases that these two options would not cover, I'd really like to know about them. :)


More broadly, if there are workflows that have issues with a strict resolver's behavior, I'm curious to know what those look like, as early as possible, to be able to figure out whether/how to support them.
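For readers following along, the legacy "keep first seen" behaviour under discussion can be sketched with a toy model (illustrative only -- this is not pip's code, and the tiny package index here is invented):

```python
# Toy index: name -> {version: [(dependency_name, allowed_versions), ...]}.
INDEX = {
    "a": {"1.0": [("shared", {"1.0"})]},
    "b": {"1.0": [("shared", {"2.0"})]},
    "shared": {"1.0": [], "2.0": []},
}


def legacy_resolve(roots):
    """'Keep first seen': the first pin for each name wins; later,
    conflicting requirements for the same name are silently ignored."""
    pinned = {}
    queue = [(name, set(INDEX[name])) for name in roots]
    while queue:
        name, allowed = queue.pop(0)
        if name in pinned:
            continue  # first seen wins, even if `allowed` excludes the pin
        version = max(allowed & set(INDEX[name]))
        pinned[name] = version
        queue.extend(INDEX[name][version])
    return pinned


# "a" is processed first and pins shared==1.0, so "b"'s requirement of
# shared==2.0 is silently dropped -- exactly the kind of conflict a
# strict resolver would instead report (or solve properly).
print(legacy_resolve(["a", "b"]))
```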

@jriddy

jriddy commented Aug 15, 2019

Personally, I don't see any point in keeping the "keep first seen" resolution logic for longer than a transition period to a new resolver.

IDK, I use this "feature" to do some pretty crazy stuff with builds, like...

# install just the packages I've built specifically
pip install --no-index --no-deps --find-links=/path/to/my/local/build/cache -r local-reqs.txt

# ...snip to later in a dockerfile, etc...

# install the deps from public PyPI
pip install -r local-reqs.txt

In this case I'm asking it to resolve my dependencies after I've installed some very pre-determined packages from a local wheelhouse. I suppose I could read my exact versions into that local-reqs file to make a resolver happy, but I've actually found the current behavior of pip quite useful in allowing for these kinds of arbitrary build-injection steps. Could be a case of the spacebar-heating workflow though, I'll admit.

But maybe the "naive resolution" behavior still has a use.

@pfmoore
Member

pfmoore commented Aug 15, 2019

I agree with @pradyunsg. I don't think it's viable to maintain the existing code and a new resolver indefinitely. Certainly as a pip maintainer I have no interest in doing that.

From an end user POV, I accept that there could well be weird scenarios where the new resolver might not do the right thing. And having an emergency "give me back the old behaviour" flag is an important transition mechanism (although it's arguable whether "temporarily roll back to the previous version of pip" isn't just as good - even though things like common use of CI that automatically uses the latest pip make advocating that option problematic). But long term, why would we need to retain the current behaviour? I can imagine the following main situations:

  1. Resolver bug. Obvious possibility, easy fix - correct the bug in the next release of pip.
  2. Cases where the old resolver is wrong (generates results that fail to satisfy the constraints). We don't intend to support that going forward, surely? (At least not via anything less extreme than the user pinning what they want and using --no-deps to switch off the resolver).
  3. Cases where the old and new resolvers give different results, both of which satisfy the given constraints. Users can add constraints to force the old result (if they can't, that puts us back into (2)). We should give them time to do so, but then drop the old resolver, just like any other deprecated functionality.
  4. An edge case that we consider too complex/weird to support. This is like (3), but where we aren't asserting that the new resolver gives the "right" result. Users can still modify constraints to avoid the weird case, or pin and use --no-deps. But ultimately, we're saying "don't do that", and if users ignore that message, then again at some point we remove the old resolver saying "we warned you".

Are there any others that I've missed? In particular any where deprecating and then removing the old resolver isn't possible?

By the way, where's the best place to post "here's an edge case I thought of" scenarios, so that they don't get lost? I think it would be useful to collect as many weird situations as we can in advance, if only so we can get an early start on writing test cases :-)

PS We should probably also, as part of prep work for the new resolver, survey what the "typical" constraint problems are (based on what's on PyPI). For my own part, it's pretty rare that I have anything more complex than "pip install <package>". It would be a shame to get so bogged down in the complex cases that we lose sight of the vast majority of simpler ones.

@rgommers

  5. The resolver is too slow (see conda). If I have to choose between a 20-minute-plus resolve and the current behavior, often I want the current behavior (or at least to try it; in many cases it will happen to give a result that's fine).

  6. The metadata is wrong. Not as much of a problem today, but it's easy to imagine cases that should be solvable but aren't. PyPI metadata is in worse shape than conda/conda-forge metadata, and it's already a problem for conda. If it's wrong and, as a user, I can't get a solution, I'll want some opt-out.

@pradyunsg
Member Author

@rgommers For 6, the "ignore version conflicts on this package" style option could work, right?

ssbarnea added a commit to ssbarnea/molecule-podman that referenced this issue Nov 26, 2020
It seems that our tox -e py36-devel job is also affected by the endless-loop
install bug from the new resolver; we are forced to disable it,
at least for this job.

Example: https://github.com/ansible-community/molecule-podman/pull/23/checks?check_run_id=1458663833
Related: pypa/pip#6536
Related: Textualize/rich#446
ssbarnea added a commit to ansible-community/molecule-podman that referenced this issue Nov 26, 2020
ssbarnea added a commit to ssbarnea/molecule-podman that referenced this issue Nov 26, 2020
@brainwane
Contributor

Here's my rough plan:

* Start a discussion on discuss.python.org asking for support

* Direct folks to a Slack channel that could serve as a communication channel between everyone

* Start a document outlining some FAQ and our responses

* Include a decision tree for new issue -> triaged issue

* Share this with the channel once we have a known release date

* Try and roughly schedule volunteers to be online & triaging in the days following the release

@di I recognize that the constant uncertainty and delays have probably kept you from being able to do the scheduling. The new release date is tomorrow, Monday, 30 November. If you now have a discussion thread and a decision tree to share, please go ahead and share them!

@pradyunsg
Member Author

pradyunsg commented Nov 30, 2020

pip 20.3 has been released, and it has the new resolver by default! Here's the release announcement on the PSF blog: https://blog.python.org/2020/11/pip-20-3-release-new-resolver.html

@brainwane
Contributor

We're now working on pip's next point releases, 20.3.2 and 20.3.3. Per #8936 (comment) I'm also updating some information sources to indicate that Python 2 users will still default to the old resolver.

We're currently planning to remove the legacy resolver in pip 21.0, in January, but we are open to the possibility of delaying that removal till 21.1, given that 20.3 came out on 30 November and given that we had previously planned to give at least a 3-month deprecation window.

@xavfernandez
Member

xavfernandez commented Dec 15, 2020

I'm personally in favor of keeping the old resolver until 21.1, to give our users more time to adapt and the new resolver more time to improve :)

And with #6148, we should hopefully already have sufficient things to appease our cleanup needs in 21.0 ^^

@pradyunsg
Member Author

I'm going to go out on a limb and say: let's defer the resolver removal and not do it in 21.0.

Then, the question becomes: when do we remove it? I think the obvious answer is 21.1. However, if we're strictly interested in minimising disruption at the cost of some additional support tickets, I'll say, let's remove it in 21.2. That way, we definitely give folks the 6 months we promise in our deprecation policy. OTOH, I'm pretty sure no maintainer is opposed to removing the resolver on a slightly expedited 4-5 month cycle, given that it's probably been our most communicated-about change. :)

@pradyunsg
Member Author

pradyunsg commented Dec 25, 2020

Wait, that doesn't make my position clear: I prefer 21.2 to give people more time, but I'm also 100% on board for 21.1 if other maintainers prefer that. :)

PS: I'm calling dibs on the "remove legacy resolver" PR, whenever we decide to get to that.

(and for anyone wondering, pip has an "at least quarterly" release schedule, where we do YY.0 in Jan, YY.1 in Apr, YY.2 in July, YY.3 in Oct; with nuances around pre-releases, additional releases and availability)

@uranusjr
Member

No matter the timeline, ideally I want to see the legacy resolver only go away after we provide migration paths for all “reasonable” legacy resolver usages. Namely:

  1. Tree-pruning in the new resolver (to bring the resolution behaviour to what humans expect)
  2. URL constraints
  3. A way to list available versions of a package

@Tankanow

Tankanow commented Jan 6, 2021

I'm sorry for chiming in very late here. @pradyunsg, I really appreciate all of the proactive work you've done to prepare users for the new resolver. Unfortunately, the truth is no matter how proactive the team is, most users won't find out until their builds break.

For that reason, and more (which I'm happy to go into offline), I would recommend not ever removing the legacy resolver. You can mark the code deprecated and let users know that the legacy resolver is not getting improvements. Not removing the legacy resolver ensures that users are never permanently broken and always have a path forward. IMHO, keeping legacy code in a code base is low cost, especially given the high value of not breaking users.

@pradyunsg
Member Author

pradyunsg commented Jan 6, 2021

IMHO, keeping legacy code in a code base is low cost, especially given the high value of not breaking users.

I strongly disagree that keeping the legacy resolver is low cost though. There's an extremely high maintenance cost of keeping the legacy resolver around, and maintaining two dependency resolvers in pip.

The legacy resolver is, IMO, the largest chunk of technical debt in pip's codebase. Removing it is a straight-up blocker for various significant improvements to the architectural design of pip (eg: build logic, state management for packages, improved dependency resolution mechanisms etc) and, as a consequence of that, for significant user-facing improvements.

I do agree that there's a cost to breaking users, and we'll work to minimise that. However, I don't think there's any version of this where we keep the legacy resolver around forever, and prevent the entire packaging ecosystem from benefiting from significant improvements to pip.

At the end of the day, it's much better long-term for us to do change management for removing the legacy resolver, rather than to maintain it for any amount of time more than absolutely necessary.

@pfmoore
Member

pfmoore commented Jan 6, 2021

At the end of the day, it's much better long-term for us to do change management for removing the legacy resolver, rather than to maintain it for any amount of time more than absolutely necessary.

Also note that the old resolver will never go away. Pip 20.3.3 will be available for download essentially forever. So if people must continue to use the old resolver, they can pin their version of pip. They just have to accept that they are using an unsupported version, and will benefit from no future improvements to pip. Obviously we don't want that to happen (there's a non-zero maintenance cost even for just having to close bug reports as "won't fix, not reproducible in a supported version of pip") but it's an option for the few users who need it.
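That pinning approach, plus the per-invocation escape hatch pip shipped alongside the new resolver, can be sketched as shell commands (a stopgap sketch with a hypothetical requirements file; the mutating commands are shown commented out, and the `--use-deprecated=legacy-resolver` value was only honoured while the legacy resolver was still shipped):

```shell
# Option 1: pin pip itself to the last series whose default is the
# legacy resolver (unsupported going forward, as noted above):
#   python -m pip install "pip<20.3"

# Option 2: on releases that still ship the legacy resolver, opt back
# into it per invocation:
#   python -m pip install --use-deprecated=legacy-resolver -r requirements.txt

# The escape hatch mechanism is discoverable from pip's own help output:
python -m pip install --help | grep -- "--use-deprecated"
```

Either option trades future pip fixes for short-term stability, so it only makes sense while migrating.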

@Tankanow

Tankanow commented Jan 6, 2021

@pfmoore, would that it were that easy! I've already seen build failures because libraries themselves require pip version 20.3+. So even pinning pip to <20.3 does not solve these issues.

@pradyunsg, thank you for the thoughtful response! If keeping the legacy resolver is untenable, is it possible for the new resolver to operate in a mode that doesn't fail the install? That is, something like an '--ignore-incompatibilities' flag. Yes: I know of a bunch of workarounds to install dependencies that are deemed "incompatible", but they aren't as nice as pip install -r requirements.txt.

@brainwane
Contributor

@Tankanow Question that will influence what is possible for pip maintainers going forward (given that, right now, I think we have about 0.2 people's time funded for pip maintenance): are you offering to comaintain this code, or offering funding, or offering to help gather funding, for further work? Thanks!

@pradyunsg
Member Author

given that, right now, I think we have about 0.2 people's time funded for pip maintenance

If this 0.2 is supposed to be my time, that's not happened yet. We're definitely 100% volunteers at the moment.

@Tankanow

Tankanow commented Jan 7, 2021

@brainwane, thanks for pointing this out. I can't believe such an important part of the ecosystem is so underfunded. I applaud you and @pradyunsg and all of the others who've dedicated their time to this project.

I will ask my team about corporate contributions to the project. Re my own time and money, I'm happy to contribute one or both to the project when possible. How can I find out more about what is needed?

@brainwane
Contributor

https://pip.pypa.io/en/latest/user_guide/#deprecation-timeline still says that 21.0 will include the removal of the legacy resolver; @pradyunsg could you please update that to "21.1 or 21.2"? That way I can respond to a tweet and point to that documentation.

Hi @Tankanow - I'm sorry for the delay. (I'm behind on correspondence.) https://github.com/psf/fundable-packaging-improvements/blob/master/FUNDABLES.md is the easiest place to look at what's needed in terms of corporate funding! And if you have some personal time to spend improving Python packaging tools, there are bugs in https://github.com/pypa/warehouse/ and https://github.com/pypa/virtualenv/issues that need fixing and are filed in issues. Thanks!

@nbraud

nbraud commented Feb 21, 2021

If keeping the legacy resolver is untenable, is it possible for the new resolver to operate in a mode that doesn't fail the install? That is, something like an '--ignore-incompatibilities' flag.
Yes: I know of a bunch of workarounds to install dependencies that are deemed "incompatible", but they aren't as nice as pip install -r requirements.txt.

@Tankanow Providing “nice” ways to install broken dependencies is essentially the same as removing the incentive for maintainers to fix their dependency specifications. I don't think that would be a good move for the ecosystem, mid- and long-term.

@Tankanow

@nbraud, this is NOT how dependencies work in the real world. There is no such thing as a broken dependency at the library level. The reason is simple: "compatibility" is a fallacy at the library level. Libraries don't use every line of code in the libraries they depend on; they usually only use a few functions or classes. Only a library consumer knows which parts of the library they use.

For example

  • I use LibraryA and LibraryB.
  • LibraryA happens to have its own dependency on LibraryB.
  • LibraryB releases a new version that has some code I need.
  • I know that the code I use in LibraryA is not broken by the new LibraryB release (even if some other code of LibraryA is broken by the release, I don't care because I don't use that code)
LibraryA does not update its transitive dependencies to the latest LibraryB.
  • According to pip, there are no "compatible" versions of LibraryA and LibraryB ... but that's simply not true in my use case.
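The scenario in that list can be sketched as a set-intersection problem (a toy model with hypothetical version numbers; this is not pip's actual resolver code):

```python
# Acceptable LibraryB major versions, from each requirer's point of view:
my_requirement = {2}         # I need LibraryB 2.x for its new code
library_a_requirement = {1}  # LibraryA still pins LibraryB to 1.x

# The new resolver effectively intersects every declared requirement
# before installing anything; an empty intersection is reported as a
# resolution failure, even if 2.x would work fine for this consumer:
compatible = my_requirement & library_a_requirement
print(compatible)  # set() -> resolution fails with ResolutionImpossible
```

The legacy resolver, by contrast, would happily install LibraryB 2.x and leave LibraryA's declared pin silently unsatisfied, which is exactly the behaviour this thread is debating.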

There is no way for library maintainers to know in advance all of the use cases of their libraries. In the end, the pip dependency resolver is a lot of maintenance for no good reason, because even if I fastidiously manage my dependencies to meet all of the transitive requirements, it's still no guarantee that all of the libraries will actually work together. Only a development team knows if their combination of libraries works in their context (runtime, use case, etc.) ... and they know that only by exercising their code (via tests and running in production).

@pradyunsg
Member Author

That is, something like an '--ignore-incompatibilities' flag. Yes: I know of a bunch of workarounds to install dependencies that are deemed "incompatible", but they aren't as nice as pip install -r requirements.txt.

That's #8076, which is where I'd suggest taking the rest of this discussion.

@brainwane
Contributor

brainwane commented Apr 18, 2021

I'm behind on correspondence, and trying to catch up. In the course of closing some tabs, I came across several GitHub issues and tweets that mention a concern, in a Python-related project, involving pip's new resolver.

Anyone who is interested in helping a little bit: you can go there and comment to help them migrate to the new resolver.

  1. Trouble following packaging libraries tutorial packaging-problems#412
  2. requirements.txt strictness incompatible with pip 20.3 hvac/hvac#652
  3. Black install via pip fails on embedded Python on Windows using latest pip version (20.3) psf/black#1847
  4. Requirements are too strict with pip 20.3 docker/docker-py#2714
  5. Pip 20.3+ and its new dependency resolver heroku/heroku-buildpack-python#1109
  6. python::pip's 'latest' not compatible with latest 'pip' (version 20.3) due to changed output voxpupuli/puppet-python#586
  7. Latest release on pypi is 1.3.5, of which setup.py declares version 1.3.4 jiangwen365/pypyodbc#107
  8. Add support for pip 20.3 (a.k.a. use old dependency resolver) and disallow pip 21 Cog-Creators/Red-DiscordBot#4644
  9. BUG: Pip 20.3 is causing Linux py38 np dev pipeline to fail pandas-dev/pandas#38221
  10. pip or pip3 installation error has different version in metadata: 1.1.4 golismero/openvas_lib#48
  11. Support pip 20 dephell/dephell#472
  12. brew install qmk/qmk/qmk fails to install with pip error qmk/homebrew-qmk#5
  13. is PySocks a dependancy ? httpie/cli#990
  14. Error while installation: ERROR: The tar file (...) has a file (...) trying to install outside target directory (...) kivy/pyobjus#72
  15. Pip alert when running deploy_unicorn 4dn-dcic/tibanna#303
  16. pip-compile doesn't support the new pip resolver jazzband/pip-tools#1190
  17. Upcoming dependency resolver in pip carpentries-incubator/python-packaging-publishing#69
  18. Check dependencies for pip's new dep resolver openzim/python-scraperlib#52
  19. ERROR: numpy-1.18.5-cp38-cp38-macosx_11_0_x86_64.whl is not a supported wheel on this platform. apple/tensorflow_macos#46
  20. https://twitter.com/mwai_william/status/1336707246764548099
  21. https://twitter.com/jtm_tech/status/1339669581753950209
  22. https://twitter.com/gsvaca/status/1340854186724999168
  23. https://twitter.com/tshirtman/status/1336706996876308487

Edit by @uranusjr: Use ordered list for easier reference.

@uranusjr
Member

I only read the first seven issues.

  1. Does not seem related to pip at all? The reporter was trying to run python setup.py bdist_wheel.
  2. Replied. (I think the issue is outdated.)
  3. They seem to have already found a solution but were unable to find someone to cut a release? Black still hasn’t had a release since, which is quite a serious issue on its own, but I don’t think we can help with that.
  4. The issue report makes no sense to me. The requirements.txt is meant to be strict, and you shouldn’t use it to install the library (use pip install docker-py instead). The reporter is the same as 2. and they got a similar reply there, so I think this is just a very confused user.
  5. Replied.
  6. The issue that seems to block them has been resolved, I’ve replied to see if they’re interested in progressing a fix.
  7. It seems like they lost contact with the project maintainer and have forked the project to pypyodbc/pypyodbc, which fixed the issue. The old issue can’t be closed because they need the old maintainer to do that, so there’s nothing more we can do here.

@ichard26
Member

ichard26 commented Apr 20, 2021

Black maintainer chiming in:

Black still hasn’t had a release [...] but I don’t think we can help with that.

To clarify, please don't try to help us with this (releasing), because you can't. There are some frustrating delays among the core team blocking the release. We have some rather inactive (yet important) core team members, unfortunately (which is fine, just annoying that the core responsibilities haven't been managed well). Right now we are waiting for a bugfix from one of them to land.

We have plans to make releasing easier, enough to make it much more frequent, but progress has been painfully slow :/

@pradyunsg
Member Author

Closing this out, since... uhm... we've released the resolver. 😅

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 28, 2022
Labels
C: dependency resolution About choosing which dependencies to install type: maintenance Related to Development and Maintenance Processes