This version corresponds to the arXiv paper https://arxiv.org/abs/1903.12519.
Updates
- Added a DSL to specify complex training objectives and training schedules (see the sketch after this list).
- Added abstract layers for increasing precision in deeper networks (see the Box-propagation sketch after the abstract).
- Added ONNX exporting (see the export sketch after this list).
- Included examples of trained networks such as ResNet-34.
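To give a feel for what the DSL describes, here is a minimal Python sketch of a mixed training objective and a simple schedule. The names (`mixed_objective`, `abstract_loss`, `weight_schedule`) are hypothetical illustrations, not DiffAI's actual syntax, which is documented in the repository and paper.

```python
# Hypothetical sketch of the kind of mixed objective and schedule the DSL
# can express; the names below are illustrative, not DiffAI's actual syntax.
import torch.nn.functional as F

def mixed_objective(model, x, y, abstract_loss, weight):
    """Weighted combination of a concrete loss and an abstract loss.

    weight=0 is standard training; weight=1 is purely abstract training.
    """
    concrete = F.cross_entropy(model(x), y)   # loss on the concrete point
    abstract = abstract_loss(model, x, y)     # loss on an abstract region
    return (1 - weight) * concrete + weight * abstract

def weight_schedule(epoch, warmup=10, ramp=50):
    """Anneal the abstract weight from 0 to 1, a simple training schedule."""
    return min(max((epoch - warmup) / ramp, 0.0), 1.0)
```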
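Exporting a trained network can be done with PyTorch's standard ONNX exporter. A minimal sketch, where the model and input shape are placeholders:

```python
# Minimal sketch of exporting a trained PyTorch model to ONNX via the
# standard torch.onnx exporter; the model and input shape are placeholders.
import torch
import torchvision.models as models

model = models.resnet34()                 # stand-in for a trained network
model.eval()

dummy_input = torch.randn(1, 3, 32, 32)   # example CIFAR-sized input
torch.onnx.export(model, dummy_input, "resnet34.onnx",
                  input_names=["input"], output_names=["logits"])
```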
Abstract
We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, and (ii) a flexible domain-specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system.
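To make "differentiable abstract interpretation" concrete, below is a minimal sketch of sound interval (Box) propagation through an affine layer and a ReLU, the basic building block that abstract layers refine for precision. Every operation is differentiable, so the resulting bounds can feed into a training loss. This illustrates the general technique, not DiffAI's implementation.

```python
# Minimal sketch of interval (Box) propagation, the simplest abstract domain
# in differentiable abstract interpretation. All operations are differentiable,
# so the bounds can be used inside a training loss. Illustration only; this is
# not DiffAI's implementation.
import torch

def box_linear(lb, ub, weight, bias):
    """Sound interval bounds for y = x @ W^T + b, given lb <= x <= ub."""
    w_pos = weight.clamp(min=0)
    w_neg = weight.clamp(max=0)
    new_lb = lb @ w_pos.t() + ub @ w_neg.t() + bias
    new_ub = ub @ w_pos.t() + lb @ w_neg.t() + bias
    return new_lb, new_ub

def box_relu(lb, ub):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return lb.clamp(min=0), ub.clamp(min=0)

# Usage: propagate an L-infinity ball of radius eps around an input batch.
lin = torch.nn.Linear(784, 100)
x = torch.rand(8, 784)
eps = 0.1
lb, ub = box_linear(x - eps, x + eps, lin.weight, lin.bias)
lb, ub = box_relu(lb, ub)
```

A robust loss can then penalize the network whenever the upper bound of any wrong class's logit exceeds the lower bound of the true class's logit, which is the sense in which abstract losses certify robustness over a whole region rather than a single point.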