For more detailed documentation and parameters, see the API documentation at https://scalib.readthedocs.io/.
The examples in this directory attack simulated leakage from an unprotected AES implementation that leaks the Hamming weight of its state bytes, with additive Gaussian noise.
This example demonstrates a key-recovery attack on the simulated leakage.
python3 aes_attack.py
The attack proceeds in multiple steps (hedged code sketches illustrating each step follow the list):
- Generate the simulated traces:
  - One set of profiling traces with random keys.
  - One set of attack traces with a fixed key.
- Select POIs for attacking the Sbox outputs x_i:
  - Compute the ``scalib.metrics.SNR`` for all the 16 x_i.
  - Keep as POIs the 2 points with the largest SNR.
- Profile the variables at the outputs of the Sboxes:
  1. Fit a ``scalib.modeling.LDAClassifier`` to model the leakage PDF. Note: we could also use the convenience wrapper ``scalib.modeling.MultiLDA`` here: it provides a concise API (with automatic parallelization), but it is less flexible.
  2. Extract the probabilities of x_i using the attack traces and the models.
- Recover the key k:
  - Create a factor graph describing the inference problem with ``scalib.attacks.FactorGraph``.
  - Create a belief propagation object (``scalib.attacks.BPState``) with the prior distributions of x and the values of the public variables p.
  - Run belief propagation to map the information from x to k. Our factor graph here is acyclic, so we can do exact inference; SCALib also supports approximate inference with loopy belief propagation for more complex cases.
- Evaluate the attack results:
  - Show the rank for each key byte.
  - Show the overall key rank with ``scalib.postprocessing.rank_accuracy``.
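The following sketches walk through these steps on hypothetical data. First, the trace simulation: a minimal stand-in for what ``aes_attack.py`` generates, assuming one noisy Hamming-weight sample per state byte. The trace counts, noise level, and all names are illustrative, and a random permutation replaces the real AES Sbox to keep the sketch short.

```python
import numpy as np

# Placeholder: a random permutation stands in for the real AES Sbox table.
SBOX = np.random.default_rng(0).permutation(256).astype(np.uint8)
# Hamming weight lookup table for 8-bit values.
HW = np.array([bin(v).count("1") for v in range(256)], dtype=np.int16)

def simulate(n, keys, sigma, rng):
    """Return plaintexts, Sbox outputs and noisy HW leakage for n traces."""
    p = rng.integers(0, 256, (n, 16), dtype=np.uint8)
    x = SBOX[p ^ keys]  # Sbox outputs, shape (n, 16)
    # One leakage sample per byte: Hamming weight plus Gaussian noise.
    return p, x, HW[x] + rng.normal(0.0, sigma, x.shape)

rng = np.random.default_rng(42)
# Profiling set: a fresh random key for every trace.
k_prof = rng.integers(0, 256, (5000, 16), dtype=np.uint8)
p_prof, x_prof, t_prof = simulate(5000, k_prof, sigma=1.0, rng=rng)
# Attack set: one fixed (secret) key shared by all traces.
k_fix = rng.integers(0, 256, 16, dtype=np.uint8)
p_atk, x_atk, t_atk = simulate(200, k_fix, sigma=1.0, rng=rng)
```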
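Next, POI selection with ``scalib.metrics.SNR``, reusing the arrays from the sketch above. SCALib metrics expect integer (ADC-like) traces, so the simulated leakage is quantized first. The ``SNR(nc=256)`` call is a sketch: the constructor has taken additional ``ns``/``np`` arguments in some SCALib versions, so check the API docs for yours.

```python
import numpy as np
from scalib.metrics import SNR

# Quantize the simulated leakage: SCALib works on int16 traces.
t_i16 = np.round(t_prof * 256).astype(np.int16)

snr = SNR(nc=256)  # 256 classes, one per byte value (see version note above)
snr.fit_u(t_i16, x_prof.astype(np.uint16))  # may be called batch by batch
snr_vals = snr.get_snr()  # shape (16, ns): one SNR curve per variable x_i

# For each x_i, keep the 2 samples with the largest SNR as POIs.
pois = np.argsort(snr_vals, axis=1)[:, -2:]
```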
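Then, profiling with ``scalib.modeling.LDAClassifier``, one model per Sbox output byte. The constructor arguments have varied across SCALib versions (e.g. ``(nc, p)`` versus ``(nc, p, ns)``), so treat this as a sketch and check the documentation; ``MultiLDA`` would condense the loop into a single call.

```python
import numpy as np
from scalib.modeling import LDAClassifier

models = []
for i in range(16):
    lda = LDAClassifier(256, 1)  # nc=256 classes, project to p=1 dimension
    lda.fit_u(t_i16[:, pois[i]], x_prof[:, i].astype(np.uint16))
    lda.solve()  # finalize the model once all fit_u() batches are done
    models.append(lda)

# Attack phase: per-trace probabilities of each x_i on the attack traces.
t_atk_i16 = np.round(t_atk * 256).astype(np.int16)
probas = [m.predict_proba(t_atk_i16[:, pois[i]]) for i, m in enumerate(models)]
# probas[i] has shape (n_attack_traces, 256).
```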
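Finally, key recovery and evaluation. The graph below models x = sbox[k ^ p] for one key byte using the ``FactorGraph`` description language; running belief propagation for each of the 16 bytes yields posteriors that ``rank_accuracy`` combines into an overall key rank. Variable names and the log-smoothing constant are our choices.

```python
import numpy as np
from scalib.attacks import FactorGraph, BPState
from scalib.postprocessing import rank_accuracy

graph_desc = """
NC 256
TABLE sbox    # the Sbox lookup table
VAR SINGLE k  # the secret key byte (same for all traces)
PUB MULTI p   # the public plaintext byte (one value per trace)
VAR MULTI y   # Sbox input, k ^ p
VAR MULTI x   # Sbox output, on which we have leakage
PROPERTY y = k ^ p
PROPERTY x = sbox[y]
"""

log_probas, byte_ranks = [], []
for i in range(16):
    graph = FactorGraph(graph_desc, {"sbox": SBOX.astype(np.uint32)})
    bp = BPState(graph, 200, {"p": p_atk[:, i].astype(np.uint32)})
    bp.set_evidence("x", probas[i])  # priors on x from the LDA models
    bp.bp_acyclic("k")               # the graph is a tree: exact inference
    d = bp.get_distribution("k")     # posterior over the 256 key-byte values
    byte_ranks.append(int((d > d[k_fix[i]]).sum()))  # 0 = top guess correct
    log_probas.append(np.log2(d + 1e-100))           # avoid log2(0)

print("per-byte ranks:", byte_ranks)
# Overall key rank, with bounds reflecting the estimation accuracy.
rmin, r, rmax = rank_accuracy(np.array(log_probas), k_fix, acc_bit=1.0)
print(f"estimated key rank: 2^{np.log2(max(r, 1)):.1f}")
```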
This example runs a fixed-vs-random first- and second-order univariate TVLA using ``scalib.metrics.Ttest``.
python3 aes_tvla.py
SCALib also supports multivariate T-tests via ``scalib.metrics.MTtest``, which lets you arbitrarily choose your sets of points of interest.
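A minimal univariate sketch on stand-in data (the ``Ttest(d=2)`` constructor follows recent releases; some versions also take an ``ns`` argument, so check the API docs):

```python
import numpy as np
from scalib.metrics import Ttest

# Stand-in data: int16 traces and a 0/1 label per trace
# (0 = fixed-input set, 1 = random-input set).
traces = np.random.randint(-100, 100, (1000, 200), dtype=np.int16)
labels = np.random.randint(0, 2, 1000).astype(np.uint16)

ttest = Ttest(d=2)           # univariate t-test up to order 2
ttest.fit_u(traces, labels)  # may be called repeatedly on trace batches
t = ttest.get_ttest()        # shape (2, 200): 1st- and 2nd-order t curves
# |t| > 4.5 is the conventional TVLA detection threshold.
```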
The quality of a model can be quantified using the Perceived Information (PI) and Training Information (TI) (see https://eprint.iacr.org/2022/490). In the example, we do this for pooled Gaussian templates:
python3 aes_info.py
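The PI computation itself boils down to an average of model log-likelihoods. A minimal sketch, assuming a uniform prior on the target variable; the helper name is ours, not a SCALib API:

```python
import numpy as np

def perceived_information(probas, labels, nc=256):
    """PI estimate from model probabilities on a held-out *test* set.

    probas: (n, nc) predicted p(x | l); labels: (n,) true values of x.
    PI = H(X) + mean(log2 p(x | l)), with H(X) = log2(nc) for uniform X.
    The same expression evaluated on the *training* set gives the TI.
    """
    p_true = probas[np.arange(len(labels)), labels]
    return np.log2(nc) + np.mean(np.log2(p_true + 1e-300))
```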
See https://github.com/cassiersg/ASCAD-5minutes. This attack is fairly similar to ``aes_attack.py``, but targets a real-world protected implementation with first-order masking.