multiple versions of the same package can be installed in some cases when using conflicting extras #10985

Closed
BurntSushi opened this issue Jan 27, 2025 · 2 comments · Fixed by #11075
Assignees
Labels
bug Something isn't working

Comments

@BurntSushi
Member

I found the problem by trying to reproduce it in a new repo. The problem is with the torchmetrics package: this case occurs when it is a dependency of a workspace package. I have not yet inspected what might be wrong with that package's configuration.

Minimal example:

Child:

[project]
name = "child-surface"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "torchmetrics>=1.1.0",
]


[project.optional-dependencies]
cpu = [
    "torch==2.4.1", 
    "torchvision==0.19.1"
]
gpu = [
    "torch==2.4.1", 
    "torchvision==0.19.1",
    "nvidia-dali-cuda120==1.30.0",
]
[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "gpu" },
  ],
]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "gpu" },
]
torchvision = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu124", extra = "gpu" },
]
[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true

Parent:

[project]
name = "parent-surface"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []

[tool.uv.sources]
child-surface = [{ workspace = true }]

[tool.uv.workspace]
members = ["child-surface"]


[project.optional-dependencies]
cpu = [
    "child-surface[cpu]>=0.0.1", 
]
gpu = [
    "child-surface[gpu]>=0.0.1", 
]

[tool.uv]
conflicts = [
  [
    { extra = "cpu" },
    { extra = "gpu" },
  ],
]

Full zipped example with lock file:
parent-surface.zip


Edit: Looking at their repo and the build system in setup.py, I see that they define base requirements, which are just `torch >=2.0.0, <2.7.0`.

So the uv resolver installed `torch==2.4.1+cu124` and then also installs `torch==2.4.1`, which is strange: two different versions of the same package.

This behavior changed after introducing this change

Originally posted by @mchalecki in #10590

@BurntSushi
Member Author

After creating the above two pyproject.toml files (pyproject.toml and child-surface/pyproject.toml), just run uv sync --extra cpu and I at least get (on Linux):

 + torch==2.4.1
 + torch==2.4.1+cpu

The same thing happens for uv sync --extra gpu, but you get torch==2.4.1+cu124 instead.

My guess here is that when I relaxed the error checking around unconditional enabling of conflicting extras in #10875, this may have uncovered a new bug in how forks are created and managed for conflicting extras? Not sure. But the relevant lines in the lock file are:

[[package]]
name = "torchmetrics"
version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "lightning-utilities" },
    { name = "numpy" },
    { name = "packaging" },
    { name = "torch", version = "2.4.1", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux' and extra == 'extra-13-child-surface-cpu') or (platform_machine != 'aarch64' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (platform_machine != 'aarch64' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (sys_platform == 'darwin' and extra == 'extra-13-child-surface-cpu') or (sys_platform != 'linux' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (sys_platform != 'linux' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (sys_platform == 'darwin' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (extra != 'extra-13-child-surface-cpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu')" },
    { name = "torch", version = "2.4.1", source = { registry = "https://download.pytorch.org/whl/cu124" }, marker = "(platform_machine == 'aarch64' and sys_platform == 'linux' and extra == 'extra-13-child-surface-gpu') or (platform_machine != 'aarch64' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (platform_machine != 'aarch64' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (sys_platform != 'linux' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (sys_platform != 'linux' and extra != 'extra-13-child-surface-cpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (extra != 'extra-13-child-surface-gpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu')" },
    { name = "torch", version = "2.4.1", source = { registry = "https://pypi.org/simple" }, marker = "(extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (extra != 'extra-13-child-surface-cpu' and extra != 'extra-13-child-surface-gpu')" },
    { name = "torch", version = "2.4.1+cpu", source = { registry = "https://download.pytorch.org/whl/cpu" }, marker = "(platform_machine != 'aarch64' and sys_platform == 'linux' and extra == 'extra-13-child-surface-cpu') or (sys_platform != 'darwin' and sys_platform != 'linux' and extra == 'extra-13-child-surface-cpu') or (sys_platform == 'darwin' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (sys_platform == 'linux' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (sys_platform == 'darwin' and extra != 'extra-13-child-surface-gpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (sys_platform == 'linux' and extra == 'extra-13-child-surface-cpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu') or (extra != 'extra-13-child-surface-cpu' and extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu')" },
    { name = "torch", version = "2.4.1+cu124", source = { registry = "https://download.pytorch.org/whl/cu124" }, marker = "(platform_machine != 'aarch64' and extra == 'extra-13-child-surface-gpu') or (sys_platform != 'linux' and extra == 'extra-13-child-surface-gpu') or (extra == 'extra-13-child-surface-cpu' and extra == 'extra-13-child-surface-gpu') or (extra == 'extra-14-parent-surface-cpu' and extra == 'extra-14-parent-surface-gpu')" },
]
sdist = { url = "https://files.pythonhosted.org/packages/14/c5/8d916585d4d6eb158105c21b28cd4b0ed296d74e499bf8f104368de16619/torchmetrics-1.6.1.tar.gz", hash = "sha256:a5dc236694b392180949fdd0a0fcf2b57135c8b600e557c725e077eb41e53e64", size = 540022 }
wheels = [
    { url = "https://files.pythonhosted.org/packages/9d/e1/84066ff60a20dfa63f4d9d8ddc280d5ed323b7f06504dbb51c523b690116/torchmetrics-1.6.1-py3-none-any.whl", hash = "sha256:c3090aa2341129e994c0a659abb6d4140ae75169a6ebf45bffc16c5cb553b38e", size = 927305 },
]

The conflict markers are way too complicated for me to digest at a glance here, so I think this one requires a bit more investigation.
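To make the lock-file entries above a bit easier to digest, here is a toy evaluator (a sketch for illustration only, not uv's actual implementation) for the `extra == '...'` / `extra != '...'` conjunctions, applied to two simplified markers from the `torchmetrics` dependency list. With the child's `cpu` extra activated, the base `torch` entry should be excluded and the `+cpu` entry selected:

```python
# Toy evaluator for lock-file conflict markers: an OR of AND-groups of
# `extra == 'name'` / `extra != 'name'` clauses. Illustrative only.

def eval_clause(clause: str, activated: set[str]) -> bool:
    """Evaluates one `extra == 'x'` or `extra != 'x'` term."""
    _, op, rhs = clause.split(maxsplit=2)
    name = rhs.strip("'")
    return (name in activated) if op == "==" else (name not in activated)

def eval_marker(marker: str, activated: set[str]) -> bool:
    """A marker is a disjunction of conjunctions, as in the entries above."""
    return any(
        all(eval_clause(c.strip().strip("()"), activated)
            for c in group.split(" and "))
        for group in marker.split(" or ")
    )

# Simplified versions of two of the markers above:
base = "(extra != 'extra-13-child-surface-cpu' and extra != 'extra-13-child-surface-gpu')"
cpu = "(extra == 'extra-13-child-surface-cpu')"

activated = {"extra-13-child-surface-cpu"}
print(eval_marker(base, activated))  # False: base torch excluded
print(eval_marker(cpu, activated))   # True: +cpu fork selected
```

The bug described below amounts to `activated` being incomplete at evaluation time, so both markers could evaluate to true at once.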

@BurntSushi
Copy link
Member Author

And note that the conflicting extras defined in the parent package here are superfluous. If I remove them, the bug remains.

@BurntSushi BurntSushi changed the title multiple versions of the same package can be installed in some cases when using conflicting dependencies multiple versions of the same package can be installed in some cases when using conflicting extras Jan 27, 2025
@BurntSushi BurntSushi self-assigned this Jan 27, 2025
BurntSushi added a commit that referenced this issue Jan 29, 2025
In #10875, I relaxed the error checking during resolution to permit
dependencies like `foo[x1]`, where `x1` was defined to be conflicting.
In exchange, the error was, roughly speaking, moved to installation
time. This was achieved by looking at the full set of enabled extras
and checking whether any conflicts occurred. If so, an error was
reported. This ends up being more expressive and permits more valid
configurations.

However, in so doing, there was a bug in how the accumulated extras
were being passed to conflict marker evaluation. Namely, we weren't
accounting for the fact that if `foo[x1]` was enabled, then that fact
should be carried through to all conflict marker evaluations. This is
because some of those will use things like `extra != 'x1'` to indicate
that it should only be included if an extra *isn't* enabled.
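A minimal sketch of that failure mode (the function name is illustrative, not uv's API): a marker of the form `extra != 'x1'` flips its result depending on whether the accumulated set of activated extras actually reaches the evaluation.

```python
# Toy model of the bug: conflict markers are evaluated against the set of
# activated extras; if that set is not carried through, `extra != '...'`
# markers wrongly evaluate to true.

def marker_excludes_extra(extra_name: str, activated: set[str]) -> bool:
    """Evaluates a marker of the form `extra != '<name>'`."""
    return extra_name not in activated

# Correct behavior: `foo[cpu]` was enabled somewhere in the graph, so the
# accumulated set contains 'cpu' and the base wheel is excluded.
print(marker_excludes_extra("cpu", {"cpu"}))  # False

# The bug: the activated extras were not propagated to every evaluation,
# so the marker saw an empty set and the base wheel was kept, alongside
# the +cpu wheel selected by the corresponding `extra == 'cpu'` fork.
print(marker_excludes_extra("cpu", set()))  # True
```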

In #10985, this manifested with PyTorch where `torch==2.4.1` and
`torch==2.4.1+cpu` were being installed simultaneously. Namely, the
choice to install `torch==2.4.1` was not taking into account that
the `cpu` extra had been enabled. If it had, then its conflict
marker would evaluate to `false`. Since it didn't, and since
`torch==2.4.1+cpu` was also being included, we ended up installing both
versions.

The approach I took in this PR was to add a second breadth-first
traversal (which comes first) over the dependency tree to accumulate all
of the activated extras. Then, only in the second traversal do we
actually build up the resolution graph.
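The first pass can be sketched roughly like this (hypothetical names and a toy graph shape, not uv's internals): a BFS over the dependency tree that only unions activated extras, so the full set is available before any conflict marker is evaluated during graph construction.

```python
# Pass 1 sketch: BFS over a toy dependency graph, accumulating every
# extra activated anywhere (e.g. a `foo[x1]` edge activates 'x1').
from collections import deque

def collect_activated_extras(root, edges):
    """Returns the union of extras activated along all reachable edges."""
    activated, queue, seen = set(), deque([root]), {root}
    while queue:
        node = queue.popleft()
        for dep, extras in edges.get(node, []):
            activated.update(extras)
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return activated

# Toy graph mirroring the repro: parent -> child-surface[cpu] -> torchmetrics
edges = {
    "parent": [("child-surface", {"cpu"})],
    "child-surface": [("torchmetrics", set())],
}
print(collect_activated_extras("parent", edges))  # {'cpu'}
```

With `{'cpu'}` in hand, the second traversal can evaluate every marker, including `extra != 'cpu'`, against the complete set.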

Unfortunately, I have no automatic regression test to include here. The
regression test we _ought_ to include involves `torch`. And while we are
generally fine to use those in tests that only generate a lock file, the
regression test here actually requires running installation. And
downloading and installing `torch` in tests is bad juju. So adding a
regression test for this is blocked on better infrastructure for PyTorch
tests. With that said, I did manually verify that the test case in #10985
no longer installs multiple versions of `torch`.

Fixes #10985
BurntSushi added a commit that referenced this issue Jan 29, 2025
(same commit message as above)