This repository provides quality enhancement approaches for compressed images and videos, built on PyTorch and MMEditing.
Image approaches:
- MPRNet @ CVPR'21: Multi-stage structure.
- RBQE @ ECCV'20: Multi-exit structure and early-exit mechanism.
- CBDNet @ CVPR'19: Noise estimation. Originally for image denoising.
- ESRGAN @ ECCVW'18: Relativistic discriminator. Originally for image super resolution. PIRM'18 winner.
- RDN @ CVPR'18: Residual dense network. Originally for image super resolution.
- DnCNN @ TIP'17: Pioneer of CNN-based image denoising.
- DCAD @ DCC'17: Pioneer of HEVC compression artifacts reduction.
- U-Net @ MICCAI'15: Multi-scale structure. Originally for biomedical image processing.
- AR-CNN @ ICCV'15: Pioneer of CNN-based image compression artifacts reduction.
Video approaches:
- ProVQE @ CVPRW'22: Key-frame propagation. NTIRE'22 winner.
- BasicVSR++ @ CVPR'22: Flow-guided deformable alignment. Originally for video super resolution. NTIRE'21 winner.
- STDF @ AAAI'20: Deformable alignment.
- MFQEv2 @ TPAMI'19: Key-frame alignment.
- EDVR @ CVPR'19: Deformable alignment. Originally for video super resolution. NTIRE'19 winner.
MMEditing is included as a submodule of PowerQE, so you can upgrade MMEditing and add your own models to PowerQE without modifying the MMEditing repository. Clone PowerQE together with MMEditing like this:
git clone --depth 1 --recurse-submodules --shallow-submodules \
    https://github.com/ryanxingql/powerqe.git
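If you later want to upgrade the bundled MMEditing, updating the submodule is usually enough. A minimal sketch is shown below, assuming the submodule path is `mmediting`; check `.gitmodules` for the actual path and tracked branch:

```bash
cd powerqe
# Populate the submodule if the repository was not cloned with --recurse-submodules.
git submodule update --init --recursive
# Move the MMEditing submodule to the latest commit of its tracked branch.
# The path "mmediting" is an assumption; verify it in .gitmodules.
git submodule update --remote mmediting
```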
Install the dependencies:

- `environment.yml`: PyTorch v1 + MMCV v1 + MMEditing v0

Please refer to the documentation for detailed installation instructions.
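As a quick sketch (not a substitute for the documentation above), creating and activating the environment from the provided file might look like this; it assumes `environment.yml` names the environment `pqe`, matching the commands below:

```bash
# Create the conda environment defined in environment.yml
# (assumed to be named "pqe", as used by the training/test commands below).
conda env create -f environment.yml
conda activate pqe
```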
Create a directory for your data:

mkdir data
Place your data like this:
powerqe/data
`-- {div2k,div2k_lq/bpg/qp37}
    |-- train
    |   `-- 0{001,002,...,800}.png
    `-- valid
        `-- 0{801,802,...,900}.png
Please refer to the document for detailed preparation.
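If you still need to generate the low-quality images for `div2k_lq/bpg/qp37`, one possible sketch with the BPG tools is shown below; `bpgenc`/`bpgdec` and the loop are illustrative assumptions, and PowerQE's own preparation scripts (see the documentation above) may do this differently:

```bash
# Illustrative sketch: compress DIV2K training images with BPG at QP 37,
# then decode them back to PNG as the low-quality (LQ) counterparts.
mkdir -p data/div2k_lq/bpg/qp37/train
for img in data/div2k/train/*.png; do
    name=$(basename "$img" .png)
    bpgenc -q 37 -o "/tmp/${name}.bpg" "$img"                              # encode at QP 37
    bpgdec -o "data/div2k_lq/bpg/qp37/train/${name}.png" "/tmp/${name}.bpg"  # decode back to PNG
done
```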
# chmod +x tools/dist_train.sh  # for the first time
conda activate pqe && \
CUDA_VISIBLE_DEVICES=0 \
PORT=29500 \
tools/dist_train.sh \
configs/<config>.py \
1 \
<optional-options>
- Activate environment.
- Use GPU 0.
- Use port 29500 for communication.
- Training script.
- Configuration.
- Use one GPU.
- Optional options.
Optional options:
- `--resume-from <ckp>.pth`: resume the training status (model weights, number of iterations, optimizer status, etc.) from a checkpoint file.
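For example, a single-GPU run that resumes from a checkpoint could look like the following; `arcnn_div2k.py` and the work directory are hypothetical names used only for illustration:

```bash
# Hypothetical config and checkpoint names; replace with your own.
conda activate pqe && \
CUDA_VISIBLE_DEVICES=0 \
PORT=29500 \
tools/dist_train.sh \
configs/arcnn_div2k.py \
1 \
--resume-from work_dirs/arcnn_div2k/latest.pth
```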
# chmod +x tools/dist_test.sh  # for the first time
conda activate pqe && \
CUDA_VISIBLE_DEVICES=0 \
PORT=29510 \
tools/dist_test.sh \
configs/<config>.py \
work_dirs/<ckp>.pth \
1 \
<optional-options>
- Activate environment.
- Use GPU 0.
- Use port 29510 for communication.
- Test script.
- Configuration.
- Checkpoint.
- Use one GPU.
- Optional options.
Optional options:
- `--save-path <save-folder>`: save output images to the given folder.
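Likewise, a single-GPU test that saves the enhanced images could look like this; again, the config, checkpoint, and output folder names are hypothetical:

```bash
# Hypothetical config, checkpoint, and save-path names; replace with your own.
conda activate pqe && \
CUDA_VISIBLE_DEVICES=0 \
PORT=29510 \
tools/dist_test.sh \
configs/arcnn_div2k.py \
work_dirs/arcnn_div2k/latest.pth \
1 \
--save-path work_dirs/arcnn_div2k/results
```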
Version | PyTorch | MMEditing | Video approaches |
---|---|---|---|
V3 | V1 | V0 | Supported |
V2 | V1 | V0 | N/A |
V1 | V1 | V0 | N/A |