TAIBackdoor is an open-source adversarial machine learning framework based on PyTorch. It is part of the OpenTAI project.
- Modular Design: We decompose the adversarial machine learning framework into separate components, so a customized project can be constructed by combining different modules.
- Designed for Research: We aim to provide highly flexible modules for adversarial machine learning researchers.
- State of the Art: We provide implementations of state-of-the-art attack/defense techniques published at different venues.
- Flexibility: Our framework provides flexible modules that can be integrated with other adversarial ML frameworks such as RobustBench (see the sketch below).
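As a rough illustration of the RobustBench interoperability mentioned above, the sketch below loads a RobustBench-pretrained CIFAR-10 model and measures a backdoor attack success rate on a triggered test set. The trigger and evaluation code are illustrative placeholders written in plain PyTorch, not the TAIBackdoor API.

```python
# Illustrative sketch only: plug a RobustBench-pretrained model into a
# backdoor evaluation loop. `add_patch_trigger` and `attack_success_rate`
# are hypothetical helpers, not part of TAIBackdoor or RobustBench.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from robustbench.utils import load_model  # assumes robustbench is installed


def add_patch_trigger(x):
    """Stamp a 3x3 white patch in the bottom-right corner (toy trigger)."""
    x = x.clone()
    x[:, :, -3:, -3:] = 1.0
    return x


@torch.no_grad()
def attack_success_rate(model, loader, target_label=0, device="cpu"):
    """Fraction of non-target samples classified as the target class
    once the trigger is applied."""
    model.to(device).eval()
    hits, total = 0, 0
    for x, y in loader:
        keep = y != target_label  # skip samples already in the target class
        if keep.sum() == 0:
            continue
        x = add_patch_trigger(x[keep]).to(device)
        pred = model(x).argmax(dim=1)
        hits += (pred == target_label).sum().item()
        total += keep.sum().item()
    return hits / max(total, 1)


if __name__ == "__main__":
    # RobustBench models expect inputs in [0, 1].
    model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")
    test = datasets.CIFAR10("./data", train=False, download=True,
                            transform=transforms.ToTensor())
    print("ASR:", attack_success_rate(model, DataLoader(test, batch_size=256)))
```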
- attacks: implementations of backdoor attacks
- defenses: implementations of backdoor defenses
- datasets: implementations of wrappers for commonly used datasets, based on torchvision
- losses: implementations of training losses for attacks/defenses
- models: implementations of commonly used models
- training: implementations of the training pipeline (see the sketch below for how these components fit together)
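As a rough sketch of how the components listed above fit together, the example below composes a poisoned dataset wrapper, a torchvision model, and a simple training loop into a minimal backdoor-attack pipeline. It is written in plain PyTorch/torchvision; the names (`PoisonedCIFAR10`, `train`) are illustrative placeholders, not the TAIBackdoor API.

```python
# Generic sketch of a backdoor-attack pipeline assembled from the kinds of
# components listed above. Class/function names are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import datasets, transforms, models


class PoisonedCIFAR10(Dataset):
    """Dataset wrapper (cf. `datasets/`): poisons a fraction of samples with
    a simple patch trigger (cf. `attacks/`) and relabels them to a target class."""

    def __init__(self, root, poison_rate=0.1, target_label=0, train=True):
        self.base = datasets.CIFAR10(root, train=train, download=True,
                                     transform=transforms.ToTensor())
        self.target_label = target_label
        n = len(self.base)
        self.poison_idx = set(torch.randperm(n)[:int(poison_rate * n)].tolist())

    def __len__(self):
        return len(self.base)

    def __getitem__(self, i):
        img, label = self.base[i]
        if i in self.poison_idx:      # 3x3 white patch in the bottom-right corner
            img[:, -3:, -3:] = 1.0
            label = self.target_label
        return img, label


def train(model, loader, epochs=1, lr=0.01, device="cpu"):
    """Training pipeline (cf. `training/`) with a plain cross-entropy loss
    (cf. `losses/`)."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            criterion(model(x), y).backward()
            opt.step()
    return model


if __name__ == "__main__":
    data = PoisonedCIFAR10("./data", poison_rate=0.1, target_label=0)
    loader = DataLoader(data, batch_size=128, shuffle=True)
    model = models.resnet18(num_classes=10)   # model (cf. `models/`)
    train(model, loader, epochs=1)
```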
We appreciate all contributions that improve TAIBackdoor and welcome community users to participate in our projects. Please refer to CONTRIBUTING.md for the contribution guidelines.
TAIBackdoor is an open-source project contributed to by researchers from the community. Part of the code is based on existing papers, either as reimplementations or as open-source code provided by the authors. For the complete list of papers, please see ACKNOWLEDGEMENT.md.
- TAIAdv: Adversarial Attack and Defense Toolbox and Benchmark
- TAIXAI: Explainable AI Toolbox
- TAICorruption: Common Corruption Robustness Toolbox and Benchmark
- TAIBackdoor: Backdoor Attack and Defense Toolbox and Benchmark
- TAIFairness: AI Fairness Toolbox and Benchmark
- TAIPrivacy: Privacy Attack and Defense Toolbox and Benchmark
- TAIIP: AI Intellectual Property Protection Toolbox and Benchmark
- TAIDeepfake: Deepfake Detection Toolbox and Benchmark