This documentation provides step-by-step instructions for running our code.
- Anaconda
  - Download and install Anaconda from www.anaconda.com and add it to your environment path.
- Python v3.7
  - Create the Python virtual environment:
    conda create -n [environment_name] python=3.7
  - Activate the environment:
    conda activate [environment_name]
- scikit-learn (v. 1.0.2)
  conda install scikit-learn==1.0.2
- numpy (v. 1.21.5)
  conda install numpy==1.21.5
- pandas (v. 1.1.5)
  conda install pandas==1.1.5
- pywavelets (v. 1.3.0)
  conda install pywavelets==1.3.0
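After the installs above, it can help to confirm that the pinned versions are the ones actually in the environment. The snippet below is a hypothetical helper, not part of the repository; the `PINNED` dictionary simply restates the versions listed above.

```python
# Hypothetical sanity check (not part of the project): compare the
# pinned package versions above against what is actually installed.
try:
    from importlib import metadata  # available on Python 3.8+
except ImportError:
    import importlib_metadata as metadata  # backport needed on Python 3.7

PINNED = {
    "scikit-learn": "1.0.2",
    "numpy": "1.21.5",
    "pandas": "1.1.5",
    "PyWavelets": "1.3.0",
}

def check_versions(pinned):
    """Return {package: installed_version_or_None} for every mismatch."""
    mismatches = {}
    for pkg, wanted in pinned.items():
        try:
            installed = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            installed = None  # package missing entirely
        if installed != wanted:
            mismatches[pkg] = installed
    return mismatches

if __name__ == "__main__":
    bad = check_versions(PINNED)
    print("Version mismatches:" if bad else "All pinned versions satisfied.", bad or "")
```

An empty result means the environment matches the versions above; otherwise the mismatching packages are reported with whatever version (or None) was found.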
- Rust environment (v. 1.56.0-nightly)
  - Install Rust:
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  - Set nightly as your default toolchain; the stable toolchain will not work with some dependencies:
    rustup default nightly && rustup update
- Go to the directory of the dataset to be tested:
  - For the UCR-ECG dataset:
    cd UCR
  - For the LFW dataset:
    cd LFW
  - For the Cifar-100 dataset:
    - Since GitHub only permits a maximum file size of 100 MB, we had to store our test samples on Google Drive. Please download them from https://drive.google.com/drive/folders/1vj81b2qCxCfQ1k6tmX8ImcB7YAXkA5s6?usp=sharing
    - Copy the downloaded files (meta, test, train) to our CIFAR folder:
      cp meta test train CIFAR/cifar-100-python/
    - Then go to the CIFAR folder:
      cd CIFAR
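To check that the copied Cifar-100 files are intact, one can try loading them. The standard CIFAR-100 python-format files are pickled dictionaries; the sketch below (`load_cifar_batch` is a hypothetical helper name, not a function from this repository) loads one file under that assumption.

```python
import os
import pickle

def load_cifar_batch(path):
    """Load one CIFAR-100 python-format file (meta, test, or train).

    The official CIFAR-100 python files are pickled dicts; latin1
    encoding is needed when unpickling them under Python 3.
    """
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")

if __name__ == "__main__":
    # Only attempt the load if the files were actually copied in place.
    for name in ("meta", "test", "train"):
        path = os.path.join("cifar-100-python", name)
        if os.path.exists(path):
            batch = load_cifar_batch(path)
            print(name, "keys:", sorted(batch)[:3])
```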
- Run one of the following commands to obtain the result:
  - For decision tree:
    python train_dt.py {#class}
    where {#class} is the number of classes. For UCR-ECG, replace {#class} with one of {8, 16, 32, 42}; for Cifar-100, with one of {8, 16, 32, 64, 100}; for LFW, with one of {8, 16, 32, 64, 128}.
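The valid {#class} values per dataset can be summarized in a small guard. This is an illustrative snippet, not code from the repository; `ALLOWED_CLASSES` and `check_class_count` are hypothetical names that merely encode the ranges listed above.

```python
# Hypothetical guard (not in the repository) encoding the valid
# {#class} values for each dataset folder, as listed above.
ALLOWED_CLASSES = {
    "UCR": {8, 16, 32, 42},
    "CIFAR": {8, 16, 32, 64, 100},
    "LFW": {8, 16, 32, 64, 128},
}

def check_class_count(dataset, n_classes):
    """Raise ValueError if n_classes is not valid for the dataset."""
    valid = ALLOWED_CLASSES.get(dataset)
    if valid is None:
        raise ValueError(f"unknown dataset: {dataset!r}")
    if n_classes not in valid:
        raise ValueError(f"{dataset} supports {sorted(valid)}, got {n_classes}")
    return n_classes
```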
  - For DWT+PCA+DT:
    python train_dwt_pca_dt.py {#class}
  - For SVM only:
    python train_svm.py {#class}
  - For DWT+PCA+SVM (ours):
    python train_dwt_pca_svm.py {#class}
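To compare all four methods for one class count, the four commands above can be driven from a small script. This runner is a hypothetical convenience, not part of the repository; it assumes the four training scripts sit in the current directory and each takes the class count as its only argument, exactly as shown above.

```python
# Hypothetical convenience runner (not in the repository) that invokes
# all four training scripts above for a single class count.
import subprocess
import sys

SCRIPTS = [
    "train_dt.py",           # decision tree only
    "train_dwt_pca_dt.py",   # DWT + PCA + decision tree
    "train_svm.py",          # SVM only
    "train_dwt_pca_svm.py",  # DWT + PCA + SVM (ours)
]

def build_commands(n_classes):
    """Return the command line for each training script."""
    return [[sys.executable, script, str(n_classes)] for script in SCRIPTS]

def run_all(n_classes):
    """Run the four methods back to back, failing fast on error."""
    for cmd in build_commands(n_classes):
        subprocess.run(cmd, check=True)
```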
- From the project root directory (i.e., 2023.2.78/src), build the Rust code:
  cargo build
  If you see a number of errors caused by private modules in libspartan, go to the spartan library and make those modules public by adding the pub prefix. We will soon provide an updated library with these changes made.
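The visibility change amounts to prefixing the offending `mod` declarations with `pub`. The sketch below illustrates the difference with made-up module names; it is not code from libspartan.

```rust
// Minimal sketch of the visibility fix. The module names here are
// hypothetical, not actual libspartan modules.

// Before: a plain `mod` is private, so code outside the crate cannot
// reach anything inside it, and the dependent build fails.
mod private_ops {
    pub fn double(x: i32) -> i32 { x * 2 }
}

// After: adding the `pub` prefix exposes the module (and its public
// items) to code outside the crate.
pub mod public_ops {
    pub fn double(x: i32) -> i32 { x * 2 }
}

fn main() {
    // Within the same crate both paths resolve; across crates only
    // `public_ops` would, which is why the private libspartan modules
    // need the `pub` prefix before `cargo build` succeeds.
    println!("{}", public_ops::double(21));
    println!("{}", private_ops::double(21));
}
```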
- Run the example on the ECG dataset:
  cargo run
- The output in the terminal should look like the following:
  Proof generated! Proving time: 3
  Proof verification successful! Verification time: 171
If you find this code useful, we would appreciate it if you cited our paper using the following BibTeX entry:
@inproceedings{wang2023ezDPS,
title={ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference Pipeline},
author={Wang, Haodi and Hoang, Thang},
booktitle={Proceedings on Privacy Enhancing Technologies},
number={2},
pages={430--448},
year={2023}
}
For any inquiries, bug reports, or assistance with building and running the code, please contact me at whd@mail.bnu.edu.cn.