
Torch installation instructions for different platforms #7859

Closed

chitralverma opened this issue Oct 2, 2024 · 5 comments

Labels: duplicate (This issue or pull request already exists), question (Asking for clarification or support)

Comments

chitralverma commented Oct 2, 2024

Hi, coming from Poetry, I am trying out uv because of Poetry's very slow dependency resolution and overall bulk.

Our workflow is as follows:

  • Devs develop a project on their Macs with the dependency sentence-transformers, which pulls in torch as well.
  • Macs do not have a GPU, so by default we'd like to run all development (and even deployment) on CPU only.
  • For the final production env, where GPUs are available, we would like an extras flag in the project, say 'cuda', which pulls in the GPU dependencies.

In short, we'd like to use uv to create a project whose source code stays the same: it is developed on a Mac and pulls in only CPU dependencies by default, but in the target deployment environment where GPUs are available, we should be able to deploy the same project with an extra flag that also pulls in torch's GPU dependencies.

In Poetry, we were following something like this; any suggestions on how to do this with uv/Rye, as we are very new to this?

Also referencing pytorch/pytorch#136275.

Dev Env Details:
uv == latest, 0.4.18
python >= 3.8
platform: mac, sequoia, cpu-only env

Prod Env Details:
uv == latest, 0.4.18
python >= 3.8
platform: linux, gpu env

cc @charliermarsh @mitsuhiko

zanieb (Member) commented Oct 2, 2024

This is a duplicate of #5945 — there's a fair bit of discussion there. We're expanding support for this in #7769.

@zanieb added the labels duplicate (This issue or pull request already exists) and question (Asking for clarification or support) Oct 2, 2024
chitralverma (Author) commented

Closed by #7769.

chitralverma (Author) commented

@charliermarsh even though #7769 is merged, I was wondering how we can achieve the task mentioned in the description of this issue.

You can link an index to a source and a source to a dependency, but Python markers do not support environment variables, and if I create an extra "cuda" I don't know how to specify a source or index in the optional-dependencies section.

Basically, if I install my project with --extra cpu it should pull torch only from the CPU index, and if I pass --extra cuda it should pull torch from the CUDA index.

This also raises the question of what exactly the point is of having sources separate from indexes. Wouldn't it be easier to just have something like this?

[project]
dependencies = [
    "linetimer>=0.1.5",
    "numpy==1.24.1",
]
name = "uv-project"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.8"

[tool.uv]
managed = true
dev-dependencies = ["black>=24.8.0", "pytest>=8.3.3", "ruff>=0.6.8"]

[project.optional-dependencies]
cuda = [
    {name = "torch>=2.1.0", index = "pytorch-cuda", marker = "sys_platform != 'darwin'"},
    {name = "torch>=2.1.0", index = "pytorch-cpu", marker = "sys_platform == 'darwin'"},
    "sentence-transformers"
]

cpu = [
    {name = "torch>=2.1.0", index = "pytorch-cpu", marker = "sys_platform != 'darwin'"},
    {name = "torch>=2.1.0", index = "pytorch-cpu", marker = "sys_platform == 'darwin'"},
    "sentence-transformers"
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cuda"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

[tool.uv.pip]
generate-hashes = true
universal = true

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
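For reference, what uv does support at this point is routing a single dependency to different indexes by environment marker via `tool.uv.sources`, rather than inline tables inside `optional-dependencies`. A minimal sketch of that supported shape (selection is per platform here, not per extra; the `pytorch-cpu`/`pytorch-cuda` index names are the ones defined above):

```toml
[project.optional-dependencies]
cpu = ["torch>=2.1.0", "sentence-transformers"]
cuda = ["torch>=2.1.0", "sentence-transformers"]

[tool.uv.sources]
torch = [
    # Route torch to the CPU wheels on macOS and the CUDA wheels elsewhere.
    { index = "pytorch-cpu", marker = "sys_platform == 'darwin'" },
    { index = "pytorch-cuda", marker = "sys_platform != 'darwin'" },
]
```

Because the markers are disjoint, the resolver picks exactly one index per platform; it just cannot yet key that choice off which extra was requested.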

charliermarsh (Member) commented

@chitralverma -- The specific case of using extras to manage this is not supported yet, because extras aren't mutually exclusive (you could enable both --extra cpu and --extra cuda in the above example, and then you'd get a conflict). We're working on support for conflicting extras, but managing them via extras like that is known not to work yet.
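For later readers: the conflicting-extras support mentioned here did land in subsequent uv releases. A hedged sketch of that newer syntax (the `conflicts` table and the `extra` key on sources are from uv versions after this thread, so treat this as an assumption to verify against current uv docs):

```toml
[tool.uv]
# Declare the extras as mutually exclusive, so the resolver never
# has to satisfy both "cpu" and "cuda" in a single resolution.
conflicts = [
    [
        { extra = "cpu" },
        { extra = "cuda" },
    ],
]

[tool.uv.sources]
torch = [
    # Pick the index based on which extra is enabled.
    { index = "pytorch-cpu", extra = "cpu" },
    { index = "pytorch-cuda", extra = "cuda" },
]
```

With this in place, `uv sync --extra cpu` and `uv sync --extra cuda` resolve torch from the matching index, which is the workflow requested in the issue description.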

charliermarsh (Member) commented

I'm gonna combine this issue with #5945.

@charliermarsh closed this as not planned (won't fix, can't repro, duplicate, stale) Oct 18, 2024

3 participants