ETH Robustness Analyzer for Neural Networks (ERAN) is a state-of-the-art sound, precise, scalable, and extensible analyzer based on abstract interpretation for the complete and incomplete verification of MNIST, CIFAR10, and ACAS Xu based networks. ERAN produces state-of-the-art precision and performance for both complete and incomplete verification and can be tuned to provide the best precision and scalability (see the recommended configuration settings at the bottom). ERAN is developed at the SRI Lab, Department of Computer Science, ETH Zurich as part of the Safe AI project. The goal of ERAN is to automatically verify safety properties of neural networks with feedforward, convolutional, and residual layers against input perturbations (e.g., L∞-norm attacks, geometric transformations).
ERAN supports networks with ReLU, Sigmoid, and Tanh activations and is sound under floating-point arithmetic. It employs custom abstract domains that are specifically designed for the setting of neural networks and that aim to balance scalability and precision. Specifically, ERAN supports the following four analyses:
- DeepZ [NIPS'18]: contains specialized abstract Zonotope transformers for handling ReLU, Sigmoid, and Tanh activation functions.
- DeepPoly [POPL'19]: based on a domain that combines floating-point Polyhedra with Intervals.
- RefineZono [ICLR'19]: combines DeepZ analysis with MILP and LP solvers for more precision.
- RefinePoly [NeurIPS'19]: combines DeepPoly analysis with MILP and the k-ReLU framework for state-of-the-art precision while maintaining scalability.
All analyses are implemented using the ELINA library for numerical abstractions. More details can be found in the publications below.
For a detailed description of the options provided and the implementation of ERAN, you can download the user manual.
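For illustration, each analysis is selected via the domain option on the command line; a typical invocation looks roughly as follows (a minimal sketch assuming ERAN's standard tf_verify interface; the network path and epsilon are placeholders):
cd tf_verify
python3 . --netname <path to the network file> --dataset mnist --domain deeppoly --epsilon 0.005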
ERAN requires the GNU C compiler, ELINA, Gurobi's Python interface, python3.7, virtualenv, tensorflow 1.11 or higher, and numpy.
The dependencies for ERAN can be installed step by step as follows (sudo rights might be required):
sudo ./install.sh
source gurobi_setup_path.sh
Note that to run ERAN with Gurobi, one needs to obtain a free academic license for Gurobi from https://user.gurobi.com/download/licenses/free-academic.
To install the remaining python dependencies (numpy and tensorflow), type:
pip3 install -r requirements.txt
ERAN may not be compatible with older versions of tensorflow (we have tested ERAN with versions >= 1.11.0), so if you have an older version that you want to keep, we recommend installing tensorflow inside a Python virtual environment.
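For example, an isolated environment for the Python dependencies can be set up as follows (a minimal sketch; the environment name "eran-env" is arbitrary):
virtualenv -p python3.7 eran-env
source eran-env/bin/activate
pip3 install -r requirements.txt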
We provide results computed on our machines in the folder "tf_verify/competition_results/". The produced files contain the verification result (SAT, UNSAT, or UNKNOWN) and the runtime in seconds. Both the verification results and the runtimes are obtained by combining the respective quantities from the verifier and the attack algorithm. In our results, we only report those attacks for which the overall time of running the verifier and the attack is less than the specified timeout. For the Sigmoid and Tanh networks, the verifier does not verify anything and has negligible runtime, so we provide the files generated by the PGD attack. Note that while our attack code can benefit from running on GPUs, we used CPUs for our competition runs to ensure consistent measurements.
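As a quick sanity check, the number of occurrences of each verdict in a result file can be counted from the shell, assuming one result per line (a hedged sketch; the file name is a placeholder):
grep -c "UNSAT" tf_verify/competition_results/<result file>
grep -c "UNKNOWN" tf_verify/competition_results/<result file>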
Our results can be reproduced by running the following commands:
cd tf_verify
./run_acasxu.sh
./run_ffn_relu.sh
./run_colt.sh
./run_oval.sh
Note that for parsing the Sigmoid and Tanh based onnx networks, the ERAN code needs to be manually adjusted as follows:
- change line 471 of "onnx_translator.py" from
input_name = node.input[0]
to
input_name = node.input[1]
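Equivalently, assuming line 471 still contains the original statement, this edit can be applied and later reverted from the command line (a hedged convenience, not part of the official instructions):
sed -i '471s/node\.input\[0\]/node.input[1]/' onnx_translator.py    # apply the change
sed -i '471s/node\.input\[1\]/node.input[0]/' onnx_translator.py    # revert it afterwards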
After this change run:
./run_neel.sh
The verifier results produced by the scripts can be found in "tf_verify/competition_results/".
Our code for PGD attacks requires tensorflow models, so we converted the onnx files to the ".pb" format. PGD attacks for the Sigmoid, Tanh, and ReLU based fully-connected networks and "mnist_0.3.onnx" can be run as follows:
cd attacks
virtualenv -p python3.7 eran
source eran/bin/activate
pip3 install tensorflow==1.14
./run_attacks_neel.sh
./run_attacks_pat.sh
./run_attacks_colt.sh
The corresponding output is produced in the same folder.
Note that since all experiments have a timeout, differences in machine speed may yield better or worse results than those reported in the "competition_results" directory.
- Certifying Geometric Robustness of Neural Networks.
  Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, and Martin Vechev.
  NeurIPS 2019.
- Beyond the Single Neuron Convex Barrier for Neural Network Certification.
  Gagandeep Singh, Rupanshu Ganvir, Markus Püschel, and Martin Vechev.
  NeurIPS 2019.
- Boosting Robustness Certification of Neural Networks.
  Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev.
  ICLR 2019.
- An Abstract Domain for Certifying Neural Networks.
  Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev.
  POPL 2019.
- Fast and Effective Robustness Certification.
  Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, and Martin Vechev.
  NeurIPS 2018.
- Gagandeep Singh (lead contact) - gsingh@inf.ethz.ch
- Makarchuk Hleb - hleb.makarchuk@inf.ethz.ch
- Copyright (c) 2020 Secure, Reliable, and Intelligent Systems Lab (SRI), Department of Computer Science, ETH Zurich
- Licensed under the Apache License