
Resource-Efficient Machine Learning Algorithm Design for On-Implant Neurological Symptom Detection

In this project, we focus on developing an efficient Machine Learning model to process neural data in real time, with low power consumption, small on-chip area and fast inference.

Installation

make install  # install the Python requirements 

Download the trajectories from ECoG and the pre-trained weights. The folder structure should look like:

$drive
|-- data
|-- checkpoints
`-- figures

The drive directory should be placed in the directory from which you run the code. The figures are generated by run.py and stored in drive/figures.

Usage

For the baseline model and Fixed Point Quantization, you can use the pre-trained weights by adding --pre-trained=True to the command.
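For example, to apply Fixed Point Quantization starting from the pre-trained weights (assuming the two flags below can be combined in one call), you would run:

python run.py --fixed_pt_quantization=True --pre-trained=True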

Baseline Model

To use the baseline model with no compression, run:

python run.py

Fixed Point Quantization

To apply Fixed Point Quantization, run:

python run.py --fixed_pt_quantization=True
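As background, the sketch below illustrates the general idea of fixed-point quantization on a small array of weights. It is not the repository's implementation; the bit widths and function name are only assumptions for the example.

import numpy as np

def to_fixed_point(weights, int_bits=2, frac_bits=6):
    # Snap each weight to a multiple of 2**-frac_bits and clip to the signed range.
    scale = 2 ** frac_bits
    total_bits = int_bits + frac_bits
    max_val = (2 ** (total_bits - 1) - 1) / scale
    min_val = -(2 ** (total_bits - 1)) / scale
    return np.clip(np.round(weights * scale) / scale, min_val, max_val)

w = np.array([0.731, -1.204, 0.056])
print(to_fixed_point(w))  # each value snapped to a multiple of 1/64 within the signed 8-bit range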

Pruning

To apply Pruning, run:

python run.py --pruning=True
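As background, magnitude pruning keeps only the largest-magnitude weights and zeroes out the rest. The sketch below is a generic illustration with an assumed sparsity level, not the repository's implementation.

import numpy as np

def magnitude_prune(weights, sparsity=0.75):
    # Zero out the `sparsity` fraction of weights with the smallest magnitude.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.random.randn(4, 4)
pruned, mask = magnitude_prune(w)
# The mask would typically be reused to keep pruned weights at zero while fine-tuning.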

Trained Quantization and Weight Sharing

To apply Trained Quantization and Weight Sharing, run:

python run.py --trained_quantization=True
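As background, trained quantization with weight sharing clusters the weights (e.g. with k-means) and stores only a small codebook of centroids plus a per-weight cluster index. The sketch below is a generic illustration, not the repository's implementation; the cluster count and random seed are assumptions. Fixing the seed is one way to reduce the run-to-run variation from k-means mentioned further below.

import numpy as np
from sklearn.cluster import KMeans

def weight_sharing(weights, n_clusters=16, seed=0):
    # Cluster the weights and replace each one by its cluster centroid.
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(flat)
    shared = km.cluster_centers_[km.labels_].reshape(weights.shape)
    return shared, km.cluster_centers_, km.labels_  # the codebook and indices are what get stored

w = np.random.randn(8, 8)
shared, codebook, indices = weight_sharing(w)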

Note that Pruning and Trained Quantization and Weight Sharing both require the pre-trained baseline model. Moreover, Fixed Point Quantization, Pruning, and Trained Quantization cannot be applied at the same time.

You can adjust the number of training epochs in run.py. Training can be interrupted with Ctrl+C, and the weights will be saved in the checkpoints directory.
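run.py takes care of saving on interruption itself; the sketch below only illustrates the general catch-and-save pattern with a placeholder weight array, and the file name is an assumption.

import os
import numpy as np

weights = np.zeros(10)  # placeholder for the model parameters
try:
    for epoch in range(1000):
        weights -= 0.01 * np.random.randn(10)  # stand-in for one training step
except KeyboardInterrupt:
    pass  # Ctrl+C stops training early but still falls through to the save below
os.makedirs("drive/checkpoints", exist_ok=True)
np.save("drive/checkpoints/weights.npy", weights)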

In fpoint_quantization.ipynb you can reproduce the results published in the report for the Binarization and Fixed-Point Quantization method.

In pruning.ipynb you can reproduce the results published in the report for the Pruning method.

In trained_quantization.ipynb you can reproduce the results published in the report for the Trained Quantization and Weight Sharing method. Note that since k-means clustering is not deterministic, you might get slightly different results than the ones in the report.

Authors

Chabenat Eugénie : eugenie.chabenat@epfl.ch

Djambazovska Sara : sara.djambazovska@epfl.ch

Mamooler Sepideh : sepideh.mamooler@epfl.ch
