How to pass parameters to be used during preparation of the isolated build venv using --config-setting? #725

Open
JanSobus opened this issue Jan 22, 2024 · 5 comments


JanSobus commented Jan 22, 2024

I've encountered a problem that doesn't seem to be covered very well in the docs.

I'm working in a corporate network with a corporate VPN and nominally use conda to manage my environments (using both private and public PyPI indices). Getting packages installed into such a conda env is easy with pip - either by setting the HTTP_PROXY=<corp proxy> and HTTPS_PROXY=<corp proxy> env variables or by using pip install some_package --proxy=<corp_proxy> explicitly.

However, when I'm trying to build packages with python -m build (setuptools backend), the process hangs indefinitely while fetching dependencies for the isolated build venv that is being created.
I figured out that this happens because that venv is unaware of my environment variables and falls back to a stock pip install.
I can circumvent the problem by installing the build dependencies in my dev conda env and then building without isolation using python -m build --no-isolation.
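
Concretely, the workaround looks roughly like this (the requirement names are just examples - in practice it's whatever the project lists under build-system.requires):

# Pre-install the build requirements into the active conda env, going through the proxy...
$ pip install --proxy=<corp_proxy> setuptools wheel
# ...then build without letting build create an isolated venv
$ python -m build --no-isolation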

However, I realize that this is a rather dirty workaround; I'd much prefer to inject the proxy settings into the venv construction process. Is there a way to do that with --config-setting? I searched the net and read some issues here as well (#517 for example), but they seem to revolve more around build parameters than around the venv itself.

The closest related issue I found was #464, but those flags don't seem to be properly passed in by conda when the venv is created automatically.
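
From what I can tell from the docs, --config-setting values are handed to the build backend's hooks rather than to the pip call that populates the isolated venv, so a purely hypothetical invocation like the one below presumably wouldn't change anything about the download step:

# Hypothetical example - the setting would reach setuptools, not the venv's pip
$ python -m build --config-setting=--proxy=<corp_proxy>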

@James-E-A

If you're lucky enough to have a proxy that pip can co-operate with using --proxy, is there any reason you can't just configure that on your system permanently?

; %AppData%\pip\pip.ini
[global]
proxy = socks5://outbound-proxy.internal.example.com:6969
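
(For anyone on Linux/macOS: the per-user equivalent lives at ~/.config/pip/pip.conf, with the same [global] section, and you can check what pip actually picked up with:)

$ pip config list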


hjoukl commented Feb 6, 2024

Similar situation here:
working in a corporate network, behind a proxy, i.e. PyPI is only accessible with the appropriate proxy settings.
I'm not using conda environments, however - just plain venvs and pip.

I'm also trying a build with python -m build and a setuptools backend.

I can't reproduce the hang, though: it's possible for me to build successfully using an incantation like

# Use an appropriate PROXY_URL.
# REQUESTS_CA_BUNDLE necessary on this system due to requests not picking up all system ca paths -
# your mileage will vary.
#
$ http_proxy=$PROXY_URL https_proxy=$PROXY_URL REQUESTS_CA_BUNDLE=/etc/pki/tls/certs/ca-bundle.crt .venv/python3.11/dev/bin/python -m build

So the setuptools installation that build performs for the isolated venv does properly respect these env var proxy settings.

It's good to see this work, because I really would not like to configure system- or user-wide settings (basically any of the config files mentioned here: https://pip.pypa.io/en/stable/topics/configuration/#configuration-files). IMHO this is undesirable since it basically defeats the purpose of isolation, at another level. Especially in CI - unless you run it in a fully reproducible environment, e.g. a container.

However: the fact that python -m build requires a connection to PyPI at build time is undesirable in itself, in my book. I'd like to be able to build without internet connectivity, be it to avoid temporary outages (what if PyPI isn't reachable?) or to build on a restricted system that isn't connected to the internet by design.

So it'd be nice if the "isolation venv" created by build could somehow bootstrap its pip and setuptools (and wheel) package install from the "caller venv", i.e. from where build itself is installed.

I know that I could

  • probably point to another index URL (e.g. Artifactory or devpi or something similar) or maybe a directory path (which means having to put/download the packages there upfront; rough sketch below the list)
  • create a separate build env myself and use python -m build --no-isolation from there, like @JanSobus already mentioned

Still, more things to get in place before being able to build. ;-)
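
For the directory-path variant, a rough sketch of what I mean (paths and requirement names are illustrative, and it assumes build's isolated pip honours the PIP_* environment variables the same way it honours the proxy ones above):

# Download the build requirements once, while connectivity is available...
$ pip download setuptools wheel -d ./build-deps
# ...then build offline, letting the isolated venv's pip install from that directory
$ PIP_NO_INDEX=1 PIP_FIND_LINKS=./build-deps python -m build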


James-E-A commented Feb 6, 2024

I really would not like to configure system/user wide settings — IMHO this is undesirable since it basically defeats isolation purposes, at another level.

@hjoukl To be fair, in this case, you literally do need pip config settings due to your TLS situation (setting aside the problem of "no offline building", that is). Setting use-feature = truststore or proxy = socks5://proxy.internal.contoso.com isn't going to contaminate the build.
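
For example (values are obviously placeholders, and this assumes a pip recent enough to know the truststore feature flag), the per-user entries can even be written with pip config itself:

# writes to the per-user pip config file (pip.ini / pip.conf)
$ pip config set global.use-feature truststore
$ pip config set global.proxy socks5://proxy.internal.contoso.com:1080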

(Frankly, if we lived in a slightly better world, it would be standard practice for IT to ship out the appropriate system-wide config files to reflect the reality of that system's networking configuration. But at least the per-user pip.ini works!)

Also, if you're not lucky enough to have a corporate proxy that's compatible with pip's proxy directive, you can get truststore working reliably by calling virtualenv, rather than venv, to create your environments; or you can just update to Python ≥3.11.8 or ≥3.12.2, which got 90% of the way to fixing this perennial problem for good. (The last 10% is just waiting for them to enable truststore by default, at which point your configfile worries will also evaporate.)


hjoukl commented Feb 7, 2024

Hi @JamesTheAwesomeDude,

@hjoukl To be fair, in this case, you literally do need pip config settings due to your TLS situation (setting aside the problem of "no offline building", that is). Setting use-feature = truststore or proxy = socks5://proxy.internal.contoso.com isn't going to contaminate the build.

I probably misunderstand, but I don't quite get the need for/advantage of use-feature = truststore in my case (I have to read up on this), since my commands work just fine for me as shown. While maybe inconvenient, having to set REQUESTS_CA_BUNDLE is not a problem - it can easily be done in the CI actions/scripts/playbooks/..., per command.

curl doesn't need it - it obviously picks up this particular TLS cert path, as opposed to requests (or httpx, which would need SSL_CERT_FILE set) - probably the Python libs just use certifi, without taking the system cert paths into account (?).
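
A quick way to see which bundle requests falls back to - it points at certifi's bundled file rather than anything under /etc/pki:

$ python -c "import certifi; print(certifi.where())"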

(Frankly, if we lived in a slightly better world, it would be standard practice for IT to ship out the appropriate system-wide config files to reflect the reality of that system's networking configuration. But at least the per-user pip.ini works!)

There you go. :-) However, we sometimes need to access PyPI via the proxy and an internal package index without the proxy, so user/global settings sometimes get in the way there, too. That can be healed by a proper proxy setup with exceptions for local addresses, I'm sure. And maybe by per-index-URL pip config file settings, too.

But the main problem with relying on system/user config in the CI use case is, IMHO, this: those user ini files or server config files are usually not in version control like the CI config/actions. So they are implicit to the build rather than explicit, and if some well-meaning soul modifies them for their own needs, the change isn't inherently rollback-capable. That well-meaning soul might well be the "central authority" over a technical build user, "fixing" things for other stuff happening on the build server.

Of course you're fine if you have greater isolation, e.g. spinning up the complete build server/env from scratch before each build - the container case, basically, with the full build server "recipe" in version-controlled config.

Also, if you're not lucky enough to have a corporate proxy that's compatible with pip's proxy directive, you can get truststore working reliably by calling virtualenv, rather than venv, to create your environments; or you can just update to Python ≥3.11.8 or ≥3.12.2, which got 90% of the way to fixing this perennial problem for good. (The last 10% is just waiting for them to enable truststore by default, at which point your configfile worries will also evaporate.)

Ah, now I see. Truststore enables using the system certs instead of relying on the bundled certifi. Thanks for the pointer! Of course 3.11.8 isn't available "officially" yet on RHEL 8, though (currently at 3.11.5 in the RHEL repos).

Best regards,
Holger

@James-E-A

While maybe inconvenient, having to set REQUESTS_CA_BUNDLE is not a problem

Ah, if in your case you have reliable access to the CA bundle files, then that would work instead. In my case, the CA bundle rotates somewhat regularly and is distributed into the Windows certificate store by some obscure mechanism; I didn't realize other places use a permanent root CA that can be found once and then statically configured.
