Releases: HolyWu/vs-realesrgan
v5.1.0
- Add `num_batches` parameter.
- Add `AnimeJaNai_HD_V3Sharp1` and `AnimeJaNai_SD_V1beta34` models from https://github.com/the-database/mpv-upscale-2x_animejanai.
- Switch to dynamo IR for compilation in Torch-TensorRT. A few parameters are added and INT8 quantization is removed.
- Improve performance by using separate streams and `non_blocking` transfers for moving tensors between CPU and GPU. The `num_streams` argument now has negligible influence on performance compared to the `num_batches` argument (see the usage sketch after this list).
- Bump PyTorch to 2.6.0.dev.
- Bump Torch-TensorRT to 2.6.0.dev.
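For orientation, a minimal VapourSynth script using the new `num_batches` parameter alongside `num_streams` might look like the sketch below. The module and function names (`vsrealesrgan`, `realesrgan`), the RGB half-precision input format, and the source filter are assumptions not stated in these notes; only `num_batches` and `num_streams` come from the release itself, so check the plugin's README for the exact signature.

```python
# Minimal sketch, not the plugin's documented example.
import vapoursynth as vs
from vsrealesrgan import realesrgan  # assumed module/function names

core = vs.core

# Any source filter works; BestSource is used here for illustration.
clip = core.bs.VideoSource(source="input.mkv")

# Assumption: the filter expects an RGB float clip.
clip = clip.resize.Bicubic(format=vs.RGBH, matrix_in_s="709")

# As of v5.1.0, throughput is governed mainly by num_batches;
# num_streams has only a minor effect.
clip = realesrgan(clip, num_batches=2, num_streams=1)

# Convert back to YUV for output.
clip = clip.resize.Bicubic(format=vs.YUV420P10, matrix_s="709")
clip.set_output()
```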
v5.0.0
- Add `2x_AnimeJaNai HD V3` models from https://github.com/the-database/mpv-upscale-2x_animejanai.
- Add `AniScale 2`, `OpenProteus` and `Ani4K v2` models from https://github.com/Sirosky/Upscale-Hub.
- Change default model to `AnimeJaNai_HD_V3_UltraCompact_2x` (see the model-selection sketch after this list).
- Remove `nvfuser` and `cuda_graphs`.
- Add support for TensorRT INT8 mode using Post Training Quantization (PTQ).
- Bump PyTorch to 2.4.0.dev.
- Bump TensorRT to 10.0.1.
- Bump Torch-TensorRT to 2.4.0.dev.
- Bump VapourSynth to R66.
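Requesting a model explicitly in a script might look roughly like the sketch below. The `model` parameter and the `RealESRGANModel` enum are assumptions about the plugin's Python API rather than confirmed names; the model identifier itself is the new default named in the note above. The README lists the real identifiers for the added models.

```python
# Sketch only: RealESRGANModel and the model parameter are assumed names
# for the plugin's model-selection API.
import vapoursynth as vs
from vsrealesrgan import realesrgan, RealESRGANModel  # assumed imports

core = vs.core

clip = core.bs.VideoSource(source="input.mkv")
clip = clip.resize.Bicubic(format=vs.RGBH, matrix_in_s="709")

# Passing the model explicitly; since v5.0.0 this is also the default.
clip = realesrgan(clip, model=RealESRGANModel.AnimeJaNai_HD_V3_UltraCompact_2x)

clip.set_output()
```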
v4.1.0
v4.0.1
- Switch to PyTorch again for inference. A few parameters are added and some parameters are removed.
- Add official ESRGAN x4 model and realesr-general-x4v3 model.
See Discussions for benchmarks of some models.