Uncertainty Adversarial Robustness


This repository contains the code for running the experiments of our paper On the Robustness of Adversarially Trained Models against Uncertainty Attack, submitted to Pattern Recognition, August 2024.

Graphical abstract of the paper.

Quick Tests 🧪

From the Colab notebook you can run the over- and under-confidence attacks with a short code snippet, visualizing the uncertainty span of any sample on any RobustBench model.
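As a rough illustration of what an over- or under-confidence attack does, the sketch below runs a PGD-style perturbation that pushes the predictive entropy of a model down (over-confidence) or up (under-confidence) within an L∞ ball. This is not the repository's implementation: the model, hyperparameters, and `uncertainty_attack` helper are hypothetical stand-ins, and the actual attacks in the paper may differ in loss and projection details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def predictive_entropy(logits):
    # Shannon entropy (in nats) of the softmax distribution per sample.
    p = F.softmax(logits, dim=-1)
    return -(p * p.log()).sum(dim=-1)

def uncertainty_attack(model, x, eps=0.03, alpha=0.01, steps=20, overconfidence=True):
    """PGD-style sketch: lower entropy (over-confidence) or raise it
    (under-confidence) within an L-inf ball of radius eps around x."""
    x_adv = x.clone().detach().requires_grad_(True)
    sign = -1.0 if overconfidence else 1.0  # ascend -entropy vs +entropy
    for _ in range(steps):
        ent = predictive_entropy(model(x_adv)).sum()
        grad = torch.autograd.grad(sign * ent, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()         # gradient-sign step
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)               # keep a valid image
        x_adv.requires_grad_(True)
    return x_adv.detach()

# Toy usage with a hypothetical linear model on CIFAR-10-shaped inputs;
# in practice you would load a model from the RobustBench model zoo instead.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
x_over = uncertainty_attack(model, x, overconfidence=True)
```

The "uncertainty span" visualized in the Colab corresponds to the gap between the entropies reachable with `overconfidence=True` and `overconfidence=False` for the same sample.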

Reproducing the Experiments 🔬

The script main_attack.py runs a single experiment on a RobustBench model.

Acknowledgments


UncertaintyAdversarialRobustness has been partially developed with the support of the European Union’s Horizon Europe research and innovation program under the project ELSA – European Lighthouse on Secure and Safe AI, grant agreement No 101070617; by Fondazione di Sardegna under the project TrustML: Towards Machine Learning that Humans Can Trust, CUP: F73C22001320007; by EU - NGEU National Sustainable Mobility Center (CN00000023) Italian Ministry of University and Research Decree n. 1033—17/06/2022 (Spoke 10); and by project SERICS (PE00000014) under the NRRP MUR program funded by the EU - NGEU.
