Conversation
From the substra-mnist repo, forked from SubstraFoundation/substra
Nice work @Fabien-GELUS 👍 !
Awesome! We tested it with a 0.8038 perf!
Hey there - is there anything preventing this from merging? @natct10 @RomainGoussault
Really nice job @Fabien-GELUS 💪 It feels really great to see an example, even a very simple one, of remotely training a model via Substra using differential privacy to strengthen the privacy-preserving promise!
(cc @RomainBey @RomainGoussault @natct10 @mattthieu @camillemarini @ClementMayer)
This example is a Substra implementation of the Classification_Privacy tutorial from Tensorflow_Privacy.
The structure of this example is inspired by my previous MNIST example; I simply tweaked the model and the train method.
This tutorial uses tf.keras to train a CNN to recognize handwritten digits with the DP-SGD optimizer provided by the TensorFlow Privacy library. TensorFlow Privacy provides code that wraps an existing TensorFlow optimizer to create a variant that implements DP-SGD.
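To illustrate what the wrapped optimizer does under the hood, here is a minimal NumPy sketch of one DP-SGD step: each example's gradient is clipped to a fixed L2 norm, Gaussian noise scaled by a noise multiplier is added to the sum, and the noisy average is applied as the update. This is a hand-rolled illustration of the technique, not the TensorFlow Privacy API; the function name and signature are hypothetical.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, l2_norm_clip, noise_multiplier, lr, rng):
    """One DP-SGD update (sketch): clip each per-example gradient,
    add Gaussian noise to the sum, then apply the averaged update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds the clip bound.
        clipped.append(g * min(1.0, l2_norm_clip / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Noise standard deviation is noise_multiplier * clip bound, per DP-SGD.
    noise = rng.normal(0.0, noise_multiplier * l2_norm_clip, size=summed.shape)
    noisy_mean = (summed + noise) / len(per_example_grads)
    return weights - lr * noisy_mean

# Tiny usage example with noise disabled to show the clipping behaviour:
rng = np.random.default_rng(0)
w = np.zeros(3)
grads = [np.array([3.0, 4.0, 0.0]),   # norm 5.0 -> clipped to norm 1.0
         np.array([0.1, 0.0, 0.0])]   # norm 0.1 -> left unchanged
new_w = dp_sgd_step(w, grads, l2_norm_clip=1.0, noise_multiplier=0.0, lr=1.0, rng=rng)
```

With `noise_multiplier=0.0` the step reduces to ordinary clipped-gradient SGD, which makes the clipping easy to check; in real DP-SGD the noise multiplier is strictly positive, since it is what provides the privacy guarantee.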
The algorithm also measures the differential privacy guarantee after training the model: you will see in the console the value of epsilon (ϵ), the privacy budget.
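To give intuition for where ϵ comes from, the classic single-release bound for the Gaussian mechanism can be inverted: a mechanism with noise standard deviation σ (times the sensitivity) is (ϵ, δ)-differentially private when σ ≥ √(2 ln(1.25/δ))/ϵ. The sketch below solves that for ϵ. Note this is only an intuition-builder: TensorFlow Privacy computes ϵ for the full training run with an accountant that composes over all noisy steps and yields much tighter bounds.

```python
import math

def gaussian_mechanism_epsilon(noise_multiplier, delta):
    """Epsilon for a single Gaussian-mechanism release (sketch),
    from the classic bound sigma >= sqrt(2 ln(1.25/delta)) / eps."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) / noise_multiplier

# More noise (larger multiplier) means a smaller epsilon, i.e. stronger privacy.
eps_low_noise = gaussian_mechanism_epsilon(noise_multiplier=1.0, delta=1e-5)
eps_high_noise = gaussian_mechanism_epsilon(noise_multiplier=2.0, delta=1e-5)
```

The monotone relationship is the key takeaway: ϵ shrinks as the noise multiplier grows, which is exactly the trade-off you tune against model accuracy (here, the 0.8038 perf mentioned above).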