How to test newer model additions since last release (May 17)? #304

Disonantemus opened this issue Aug 26, 2024 · 7 comments

@Disonantemus

How to test newer model additions since last release (May 17)?

If I use pipx like this: pipx install spandrel, I only get the last release, v0.3.4 (from May 17).
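
One way I could maybe test the unreleased architectures is installing spandrel straight from git with pipx; this is just an untested sketch, and I'm assuming the upstream repository is chaiNNer-org/spandrel:

# untested sketch: install spandrel from the main branch instead of the PyPI release
# (repository URL is an assumption)
pipx install git+https://github.com/chaiNNer-org/spandrel.git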


New additions:

Low Light Models (only ones supported by Spandrel):

  • RetinexFormer
  • HVI-CIDNet

Newer upscaler:

  • SeemoRe

distro: Arch Linux x86_64
kernel: 6.6.22-1-lts
shell: bash 5.2.32
term: tmux
cpu: Intel i7-4790 (8) @ 3.600GHz
gpu: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
@joeyballentine
Member

Just released a new version. Sorry that took so long

@Disonantemus
Author

I was able to install the latest spandrel with:

pipx install spandrel --include-deps
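
To double-check which spandrel version actually landed in the pipx venv (just a sketch; the exact output format of pipx list may vary between pipx versions):

# list installed pipx packages and filter for spandrel (output format may vary)
pipx list | grep -i spandrel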

But chaiNNer still can't open those models. What am I doing wrong?

@joeyballentine
Member

chaiNNer needs to be properly updated to correctly use the next spandrel version. I do not have an ETA on that, sorry.

@Disonantemus
Author

Disonantemus commented Sep 23, 2024

I had success using ComfyUI just to do upscales and test newer models from spandrel. It has nodes similar to chaiNNer, but it's a lot more complex; it didn't work with onnx/ncnn models, and only in CPU mode (because of my old AMD GPU).


Of the other projects that use spandrel, which one is easiest (to do just upscales)?

@Teriks

Teriks commented Sep 24, 2024

I have not fully tested the low light models with dgenerate, but a quick test of RetinexFormer seems to "work":

dgenerate --sub-command image-process input.png --output output.png --align 1 --processors "upscaler;model=SDSD_indoor.pth" --device cpu

Unfortunately I do not have access to any AMD hardware to build and test around.

So I do not have my package set up to handle installing the ROCm torch backend, and you might not be able to install it due to dependency issues.

I would like to figure that out by the next release, since it is mostly a packaging/dependency issue which might require some janky trickery in setup.py.

@Disonantemus
Author

I did install dgenerate with this:

pipx install dgenerate[ncnn] --pip-args "--extra-index-url https://download.pytorch.org/whl/cu121/"

It takes a lot of storage space (like 6 GB). If you install this alongside ComfyUI and chaiNNer, all of them install CUDA+torch in their own venv (like 20 GB total!). Is there a way to just share the same venv for all of them? I don't know how to do that with pipx.
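
Each pipx app keeps its own isolated venv, so a rough way to see where the space goes (sketch; the path below is the pipx default on recent Linux installs, older pipx versions use ~/.local/pipx/venvs instead):

# approximate per-venv disk usage (venv path is an assumption based on pipx defaults)
du -sh ~/.local/share/pipx/venvs/*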


I was able to use it with CPU only, with this shell script:

#!/bin/bash

dgenerate --sub-command image-process --device cpu in.png \
--output output/upscaled.png \
--processors "upscaler;model=/home/user123/models/2xNomosUni_span_multijpg.pth"

But it didn't work with ncnn models on the GPU (AMD): it only upscaled about 1/8 of the image and finished with an incomplete result. I used this shell script:

#! /usr/bin/env bash

MODEL=/home/user123/models/realesr-animevideov3-x2.bin
PARAM=/home/user123/models/realesr-animevideov3-x2.param

dgenerate --sub-command image-process in.png \
--output output/upscaled.png \
--processors "upscaler-ncnn;model=${MODEL};param=${PARAM};use-gpu=true"

@Teriks

Teriks commented Sep 24, 2024

I wouldn't expect ncnn to work on anything except NVIDIA with use-gpu=True in dgenerate, because the python binding is very finicky and I only have NVIDIA hardware to test it on.

I am working on making the package install with --extra-index-url https://download.pytorch.org/whl/rocm6.1

You may be able to install it this way from the branch: https://github.com/Teriks/dgenerate/tree/sd3_inpaint

i.e.: pipx install git+https://github.com/Teriks/dgenerate.git@sd3_inpaint --pip-args "--extra-index-url https://download.pytorch.org/whl/rocm6.1/"

That would probably support generation with spandrel models on AMD (big maybe); torch still refers to the device as "cuda" with this backend for compatibility.
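
A quick sanity check from inside the dgenerate venv could confirm the ROCm backend is picked up (sketch; the venv path is an assumption based on pipx defaults, and torch.version.hip is only set on ROCm builds):

# should print a HIP version string and True on a working ROCm install (path assumed)
~/.local/share/pipx/venvs/dgenerate/bin/python -c "import torch; print(torch.version.hip, torch.cuda.is_available())"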

Unfortunately, you should not try to install any of these software packages into the same environment; they have very specific dependencies, and that is why they each isolate their environment.

They are very likely to break each other if you install them globally or side by side in one venv.
