An experimental onnxruntime-web based in-browser version of waifu2x.
It runs in the web browser without uploading images to a remote server.
Pros
- No image size limitation (limited only by the web browser's memory)
- Supports new art models, including 4x models
- Supports TTA
Cons
- It's very slow, like '90s dial-up internet access
- A modern web browser with WebAssembly support is required
Processing performance could improve once WebGPU is available; an experimental WebGPU implementation may be available for public use around April 2023.
- Place ONNX models in `public_html/models`. The pretrained models are available at https://github.com/nagadomi/nunif/releases (`waifu2x_onnx_models_*.zip`).
- Publish `public_html` with a web server.
For testing purposes, a web server can be run with the following command.
python3 -m waifu2x.unlimited_waifu2x.test_server
Open http://127.0.0.1:8812/ in a web browser.
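If you only want a quick look at the static files, any static file server over `public_html` also works. Below is a minimal sketch using Python's standard library; note that the bundled test server above may set additional response headers or MIME types (for example, for cross-origin isolation) that a plain static server does not, so prefer the command above for real testing.

```python
# Minimal static-server sketch for quickly inspecting public_html.
# Assumption: run from the directory that contains public_html.
# The bundled test server above is the recommended way to test.
import functools
import http.server

handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="public_html"
)
http.server.ThreadingHTTPServer(("127.0.0.1", 8812), handler).serve_forever()
```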
An example nginx config file is available at `waifu2x/web/unlimited_waifu2x/appendix/unlimited.waifu2x.net`.
Note that the ONNX model files are very large. It is recommended to use a CDN to reduce transfer fees.
Converting the models to fp16 roughly halves the model file size and the amount of data transferred. This only converts the parameters to fp16; the input and output remain fp32.
python convert_fp16.py -i <input onnx models dir> -o <output onnx models dir>
I have checked with several samples that the output is practically the same (PSNR 65+) as fp32 on the JavaScript/WASM backend.
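For reference, this kind of conversion can be done with the `onnx` and `onnxconverter-common` packages. The following is a minimal sketch of the idea, not the repository's `convert_fp16.py`: `keep_io_types=True` is what keeps the graph inputs and outputs in fp32 while the parameters are cast to fp16.

```python
# fp16 conversion sketch using onnx + onnxconverter-common.
# Not the repository's convert_fp16.py; it illustrates the same idea:
# cast parameters to fp16 while keeping fp32 graph inputs/outputs.
import os
import sys

import onnx
from onnxconverter_common import float16

src_dir, dst_dir = sys.argv[1], sys.argv[2]
for root, _, files in os.walk(src_dir):
    for name in files:
        if not name.endswith(".onnx"):
            continue
        src_path = os.path.join(root, name)
        dst_path = os.path.join(dst_dir, os.path.relpath(src_path, src_dir))
        os.makedirs(os.path.dirname(dst_path), exist_ok=True)
        model = onnx.load(src_path)
        model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)
        onnx.save(model_fp16, dst_path)
```

Usage mirrors the command above: pass the input and output model directories as the two arguments.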