
GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection

[ECCV 2024] The official code of "GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection".



News

  • [07/13/2024] Added support for mixed_precision training (see the sketch below).
  • [07/13/2024] Released the PCB-Bank dataset we integrated.
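
If training is launched through Hugging Face Accelerate (an assumption on our part; check train.sh for the actual launcher), mixed precision is typically enabled either via the --mixed_precision flag of accelerate launch or in code. A minimal sketch:

# Hedged sketch: enabling mixed-precision training with Hugging Face Accelerate.
# Whether this repository wires it through train.sh flags or in code is an
# assumption; check train.sh for the actual mechanism.
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")  # or "bf16" on recent GPUs
# model, optimizer, loader = accelerator.prepare(model, optimizer, loader)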

Requirements

This repository is implemented and tested with Python 3.10 and PyTorch 2.0.1. To install the requirements:

pip install -r requirements.txt

Models Trained by Us

Models (VAE, UNet, DINO) trained by us are available here: OneDrive.

Training and Evaluation of the Model for Single-class

First, download the pretrained Stable Diffusion model (pretrained model) and the datasets. (In addition, the DTD dataset is required for anomaly synthesis.) If you cannot download the pretrained Stable Diffusion model, we also provide it in our OneDrive.
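
As a quick sanity check that the downloaded checkpoint is usable, its components can be loaded with diffusers. A minimal sketch, assuming the checkpoint is in diffusers format at ./stable-diffusion (a hypothetical local path):

# Hedged sketch: loading the pretrained Stable Diffusion components with
# diffusers. "./stable-diffusion" is a hypothetical path; point it at the
# checkpoint you downloaded (or the copy from our OneDrive).
from diffusers import AutoencoderKL, UNet2DConditionModel

vae = AutoencoderKL.from_pretrained("./stable-diffusion", subfolder="vae")
unet = UNet2DConditionModel.from_pretrained("./stable-diffusion", subfolder="unet")
print(sum(p.numel() for p in unet.parameters()) / 1e6, "M UNet parameters")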

To train the UNet of Stable Diffusion, modify the settings in train.sh and train the model on different categories:

bash train.sh

To evaluate and test the model, modify the paths of the models in main.py and test.sh, and run:

bash test.sh

In particular, considering the large gap between the VisA and PCB-Bank datasets and the pre-trained model, we fine-tune the VAE of Stable Diffusion and DINO. You can refer to DiffAD for fine-tuning the VAE.
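
Purely for orientation, a generic VAE fine-tuning step with a pixel reconstruction loss might look like the sketch below; DiffAD's actual recipe (losses, schedules, which parts stay frozen) may differ:

# Hedged sketch: one generic fine-tuning step for the Stable Diffusion VAE with
# an L1 reconstruction loss. This illustrates the idea only; follow DiffAD for
# the actual objective and training schedule.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("./stable-diffusion", subfolder="vae").train()
optimizer = torch.optim.AdamW(vae.parameters(), lr=1e-5)

def train_step(images):  # images: (B, 3, H, W), scaled to [-1, 1]
    latents = vae.encode(images).latent_dist.sample()
    recon = vae.decode(latents).sample
    loss = F.l1_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()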

To fine-tune DINO (following DDAD), run:

python train_dino.py --dataset VisA
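
For reference, the DINO backbone itself can be obtained from torch.hub; which variant train_dino.py actually fine-tunes is an assumption here, so treat this as a sketch:

# Hedged sketch: loading a DINO ViT backbone via torch.hub and unfreezing it
# for fine-tuning. The specific variant used by train_dino.py is an assumption;
# check the script.
import torch

dino = torch.hub.load("facebookresearch/dino:main", "dino_vits8")
for p in dino.parameters():
    p.requires_grad = True  # fine-tune rather than use frozen features

cls_embedding = dino(torch.randn(1, 3, 224, 224))  # (1, 384) for ViT-S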

Quantitative results on the MVTec-AD, MPDD, VisA and PCB-Bank datasets. Metrics are I-AUROC/I-AP/I-F1-max in the first row (for detection) and P-AUROC/P-AP/P-F1-max/PRO in the second row (for localization).
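
The image-level metrics are standard and can be computed from per-image anomaly scores; a minimal sketch with scikit-learn (not necessarily the exact evaluation code in this repository):

# Hedged sketch: I-AUROC, I-AP, and I-F1-max from per-image anomaly scores,
# following the standard definitions rather than this repository's exact code.
import numpy as np
from sklearn.metrics import (average_precision_score, precision_recall_curve,
                             roc_auc_score)

def image_level_metrics(labels, scores):
    # labels: 0/1 ground truth per image; scores: higher means more anomalous.
    auroc = roc_auc_score(labels, scores)
    ap = average_precision_score(labels, scores)
    precision, recall, _ = precision_recall_curve(labels, scores)
    f1 = 2 * precision * recall / np.clip(precision + recall, 1e-8, None)
    return auroc, ap, f1.max()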

Training and Evaluation of the Model for Multi-class

We also test our method in the multi-class setting. The pretrained Stable Diffusion model is also required, and models (VAE, UNet, DINO) trained by us can be downloaded from OneDrive.

To train the UNet of Stable Diffusion, modify the settings in train_multi.sh and train the model across categories:

bash train_multi.sh

To evaluate and test the model, modify the paths of the models in main.py and test_multi.sh, and run:

bash test_multi.sh

In particular, we fine-tune the VAE of Stable Diffusion for VisA and PCB-Bank, referring to DiAD.

To fine-tune DINO (following DDAD), run:

python train_dino_multi.py --dataset VisA

Quantitative results in the multi-class setting on the MVTec-AD, MPDD, VisA and PCB-Bank datasets. Metrics are I-AUROC/I-AP/I-F1-max in the first row (for detection) and P-AUROC/P-AP/P-F1-max/PRO in the second row (for localization).
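
Of the pixel-level metrics, PRO is the least standard: it averages, per connected ground-truth region, the fraction of that region covered by the thresholded prediction, and reports the normalized area under the mean-overlap-vs-FPR curve up to an FPR of 0.3. A minimal sketch of the per-threshold building block (not this repository's exact code):

# Hedged sketch: per-region overlap (PRO) at a single threshold. The reported
# PRO score integrates this quantity over thresholds up to FPR = 0.3 and
# normalizes; this shows only the per-threshold step.
import numpy as np
from skimage.measure import label, regionprops

def pro_at_threshold(gt_masks, score_maps, thr):
    # gt_masks, score_maps: arrays of shape (N, H, W).
    overlaps = []
    for gt, sm in zip(gt_masks, score_maps):
        pred = sm >= thr
        for region in regionprops(label(gt.astype(int))):
            rows, cols = region.coords.T  # pixels of one ground-truth region
            overlaps.append(pred[rows, cols].mean())
    return float(np.mean(overlaps)) if overlaps else 0.0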

PCB-Bank Dataset

The PCB-Bank dataset of printed circuit boards that we integrated can be downloaded from here: PCB-Bank.

Citation

@article{yao2024glad,
  title={GLAD: Towards Better Reconstruction with Global and Local Adaptive Diffusion Models for Unsupervised Anomaly Detection},
  author={Yao, Hang and Liu, Ming and Wang, Haolin and Yin, Zhicun and Yan, Zifei and Hong, Xiaopeng and Zuo, Wangmeng},
  journal={arXiv preprint arXiv:2406.07487},
  year={2024}
}

Feedback

For any feedback or inquiries, please contact yaohang_1@outlook.com.
