Code for the paper "Adversarial Learning of Robust and Safe Controllers for Cyber-Physical Systems" (Luca Bortolussi, Francesca Cairoli, Ginevra Carbone, Francesco Franchina, Enrico Regolin, 2020).
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous Cyber-Physical Systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with Generative Adversarial Networks. The method learns two Neural Networks: the first one aims at generating troubling scenarios for the controller, while the second one aims at enforcing the safety constraints. We test the proposed method on a variety of case studies.
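The following is a minimal sketch of this adversarial scheme, assuming a generic PyTorch setup; the network sizes, the `simulate` dynamics and the `robustness` score are placeholder assumptions for illustration, not the components implemented in this repository.

```python
# Hypothetical sketch of the attacker/defender adversarial loop (not the repository's code).
import torch
import torch.nn as nn

attacker = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # proposes challenging scenarios
defender = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))  # controller enforcing safety

opt_att = torch.optim.Adam(attacker.parameters(), lr=1e-3)
opt_def = torch.optim.Adam(defender.parameters(), lr=1e-3)

def robustness(trajectory):
    """Placeholder STL robustness: positive iff the safety requirement is satisfied."""
    return trajectory.min()

def simulate(state, disturbance, control):
    """Placeholder one-step dynamics of the physical system."""
    return state + 0.1 * (control - disturbance)

for epoch in range(100):
    state = torch.randn(1, 4)

    # Attacker step: minimise the robustness of the safety property.
    rho = robustness(simulate(state, attacker(state).mean(), defender(state)))
    opt_att.zero_grad()
    rho.backward()
    opt_att.step()

    # Defender step: maximise the robustness against the (frozen) attacker.
    rho = robustness(simulate(state, attacker(state).detach().mean(), defender(state)))
    opt_def.zero_grad()
    (-rho).backward()
    opt_def.step()
```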
- `architecture/` contains the general GAN architecture and the training/testing procedures
- `utils/diffquantitative.py` provides the logic to write, parse and check STL requirements (see the illustrative sketch after this list)
- `utils/misc.py` groups some minor helper functions
- `model/` contains the specific models for the different experimental setups, i.e. the attacker, the defender and the differential equations for the evolution of the system
- `settings/` contains the initial configuration for each case study
- `train_*`, `tester_*` and `plotter_*` scripts execute, store and plot the simulations
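As a purely illustrative example of the quantitative (robustness) semantics such STL requirements rely on, the snippet below checks a simple "always within a bound" property on a signal; it does not use the repository's parser, and the `always_abs_below` helper and the pole-angle bound are hypothetical.

```python
# Hypothetical illustration of a quantitative STL check (not the API of diffquantitative.py).
# Requirement: "the pole angle always stays within +/- 0.2 rad", i.e. G(|theta| < 0.2).
import numpy as np

def always_abs_below(signal, threshold):
    """Robustness of G(|signal| < threshold): positive iff the trace satisfies the
    requirement; its magnitude measures how robustly it is met or violated."""
    return float(np.min(threshold - np.abs(signal)))

theta_trace = np.array([0.05, -0.10, 0.12, -0.08])  # example pole-angle trajectory
print(always_abs_below(theta_trace, 0.2))            # 0.08 > 0: requirement satisfied
```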
Once the repository has been cloned, create a Python 3 virtual environment and install the required packages:
pip3 install virtualenv
virtualenv -p python3 venv
source venv/bin/activate
pip install -r requirements.txt
cd src/
The code runs with Python 3.7.4 on Ubuntu 18.10.
Change the model settings in `src/settings/*`.
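As a purely hypothetical illustration of the kind of parameters such a configuration might expose (the keys and values below are assumptions, not the repository's actual schema):

```python
# Hypothetical example of case-study settings (keys are assumptions, not the repo's schema).
cartpole_target_settings = {
    "dt": 0.05,                 # integration step of the differential equations
    "simulation_horizon": 5.0,  # seconds per simulated episode
    "safe_theta": 0.2,          # safety bound on the pole angle (rad)
    "train_steps": 10000,       # adversarial training iterations
    "lr_attacker": 1e-3,        # attacker learning rate
    "lr_defender": 1e-3,        # defender learning rate
}
```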
python train_*.py
python tester_*.py -r=N_SIMULATIONS
python plotter_*.py -r=N_SIMULATIONS
Models and plots are saved in `experiments/`.
For example, to train, test and plot the cart-pole with target case study over 1000 simulations:
python train_cartpole_target.py
python tester_cartpole_target.py -r=1000
python plotter_cartpole_target.py -r=1000