Classification of EEG brain signals acquired with OpenBCI hardware, using the TensorFlow-Keras API.
- Disclaimer
- Data Acquisition
- Prerequisites
- Installation
- Usage
- Confusion Matrix
- A look at the samples
- Best NN so far
This is a boilerplate work-in-progress project for motor imagery classification with deep learning, using the OpenBCI Cyton board.
Feel free to take inspiration from the code and use it.
If you use this work, please cite me and the articles that have had a huge impact on this project.
Please let me know if you find any possible improvements.
The personal_dataset folder provides the current EEG samples, taken following this protocol:
- The person sits in a comfortable position on a chair and follows the acquire_eeg.py protocol.
- When the program tells the subject to think "hands", the subject imagines opening and closing both hands.
- If "none" is presented, the subject can let the mind wander and think of something else.
- If "feet" is presented, the subject imagines moving the feet up and down.
The subject does not blink during acquisitions.
Each sample is stored as a NumPy 2D array (8 channels by 250 time steps) in an .npy file with the following shape:
(8, 250)
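For reference, here is a minimal sketch of loading one sample with NumPy; the per-label folder layout and the file name are assumptions, so adjust the path to however your copy of personal_dataset is organized:

```python
import numpy as np

# Load one recorded trial: each .npy file holds a single 2D array
# of shape (channels, time_steps) = (8, 250).
sample = np.load("personal_dataset/hands/trial_000.npy")  # hypothetical path
print(sample.shape)   # (8, 250)
print(sample[0][:5])  # first five raw values of channel 0
```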
To get a local copy up and running, follow these simple steps.
The project provides a Pipfile that can be managed with pipenv.
Installing pipenv is strongly encouraged in order to avoid dependency/reproducibility problems.
- Install pipenv
pip install pipenv
- Clone the repo
git clone https://github.com/CrisSherban/BrainPad
- Enter the project directory and install the Python dependencies
cd BrainPad
pipenv install
- acquire_eeg.py: this script connects to the OpenBCI Cyton board through BrainFlow and acquires data in the form of raw EEG.
  For a Cyton board and a Linux machine the setup is the following:
  - Connect the Ultracortex headset with the Cyton board to your machine
  - Run the script and follow the acquisition protocol (a minimal BrainFlow sketch follows this list)
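The exact acquisition logic lives in acquire_eeg.py; purely for orientation, a minimal BrainFlow sketch for grabbing one (8, 250) window from a Cyton board could look like the following. The serial port, stream duration, and output path are assumptions, not the script's actual parameters:

```python
import time
import numpy as np
from brainflow.board_shim import BoardShim, BoardIds, BrainFlowInputParams

# Connect to the Cyton board over its serial dongle and stream briefly.
params = BrainFlowInputParams()
params.serial_port = "/dev/ttyUSB0"  # assumption: adjust to your dongle's port

board = BoardShim(BoardIds.CYTON_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(2)                    # let a bit more than one second accumulate
data = board.get_board_data()    # (n_board_channels, n_samples)
board.stop_stream()
board.release_session()

# Keep only the 8 EEG channels and the last 250 samples (~1 s at 250 Hz).
eeg_channels = BoardShim.get_eeg_channels(BoardIds.CYTON_BOARD.value)
trial = data[eeg_channels, -250:]
np.save("trial.npy", trial)      # same (8, 250) shape as in personal_dataset
```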
- Live testing: this Python module gives the user a live testing environment for the system.
  For a Cyton board and a Linux machine the setup is the following:
  - Connect the Ultracortex headset with the Cyton board to your machine
  - Open the OpenBCI GUI
  - Set this script in the OpenBCI GUI Working Directory
  - Mimic the motor imagery tasks you did in the acquisition protocol and check what happens on screen (a minimal classification sketch follows this list)
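As an illustration of the live-testing idea (separate from the GUI plumbing above), here is a hedged sketch of classifying a single (8, 250) window with a previously trained Keras model; the model path, input layout, and label order are assumptions that must match your trained network:

```python
import numpy as np
from tensorflow import keras

# Load a trained model and classify one raw EEG window.
model = keras.models.load_model("models/best_model.h5")  # hypothetical path

window = np.load("trial.npy")             # (8, 250) window, e.g. from acquisition
x = window[np.newaxis, ..., np.newaxis]   # (1, 8, 250, 1) for a 2D-convolutional net
probs = model.predict(x)[0]

labels = ["feet", "hands", "none"]        # assumption: must match training order
print(labels[int(np.argmax(probs))], probs)
```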
- This module provides functionality for splitting a dataset, loading a dataset, visualizing data, and handling all the necessary preprocessing.
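A minimal sketch of what loading and splitting might look like, assuming one sub-folder per label inside personal_dataset (the layout and the scikit-learn splitter are assumptions, not necessarily what this module does internally):

```python
from pathlib import Path
import numpy as np
from sklearn.model_selection import train_test_split

# Collect every (8, 250) trial and its integer label.
labels = ["feet", "hands", "none"]        # assumption: label order
X, y = [], []
for idx, label in enumerate(labels):
    for f in sorted(Path("personal_dataset", label).glob("*.npy")):
        X.append(np.load(f))
        y.append(idx)

X = np.array(X)                           # (n_trials, 8, 250)
y = np.array(y)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```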
- Allows checking how well a model is doing on an unseen set of data.
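A small sketch of such a check with scikit-learn, reusing the model and the train/test split from the sketches above (the label order is again an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# Predict on held-out trials and compare with the true labels.
y_pred = np.argmax(model.predict(X_test[..., np.newaxis]), axis=1)
cm = confusion_matrix(y_test, y_pred)
ConfusionMatrixDisplay(cm, display_labels=["feet", "hands", "none"]).plot()
plt.show()
```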
- Provides the different neural network architectures used in this project (a minimal input/output sketch follows the list below):
  - A very deep architecture: ResNet
  - A simplistic architecture based upon the knowledge from:
    https://iopscience.iop.org/article/10.1088/1741-2552/ab0ab5/meta
  - TA-CSPNN, made for motor imagery classification tasks, all credits to:
    https://github.com/mahtamsv/TA-CSPNN/blob/master/TA_CSPNN.py
    https://ieeexplore.ieee.org/document/8857423
  - EEGNet: A Compact Convolutional Network for EEG-based Brain-Computer Interfaces:
    https://arxiv.org/abs/1611.08024
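The cited implementations are the reference; purely to illustrate the input/output contract those architectures share on this dataset, here is a toy Keras model over (8, 250, 1) windows. It is not one of the networks above, just a stand-in:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_toy_cnn(n_classes=3, channels=8, samples=250):
    # Temporal convolution, then a depthwise spatial filter across channels,
    # loosely in the spirit of compact EEG CNNs.
    inputs = keras.Input(shape=(channels, samples, 1))
    x = layers.Conv2D(16, (1, 25), padding="same", activation="relu")(inputs)
    x = layers.DepthwiseConv2D((channels, 1), activation="relu")(x)
    x = layers.AveragePooling2D((1, 4))(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)

model = build_toy_cnn()
model.summary()
```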
- Provides several functions to train the networks in Keras.
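A typical compile/fit sketch for these models, reusing the split from above; the optimizer, loss, and hyperparameters are illustrative assumptions rather than the project's exact training settings:

```python
from tensorflow import keras

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-3),
    loss="sparse_categorical_crossentropy",   # integer labels 0/1/2
    metrics=["accuracy"],
)
history = model.fit(
    X_train[..., None], y_train,
    validation_split=0.2,
    epochs=50,
    batch_size=16,
    callbacks=[keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)],
)
```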
- Gives a sketch of how to use Keras Tuner and GridSearch to tune the hyperparameters.
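For orientation, a minimal Keras Tuner sketch; the searched hyperparameters and their ranges are illustrative assumptions:

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    # A tiny tunable CNN over (8, 250, 1) EEG windows.
    model = keras.Sequential([
        keras.Input(shape=(8, 250, 1)),
        layers.Conv2D(hp.Int("filters", 8, 32, step=8), (1, 25),
                      padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dropout(hp.Float("dropout", 0.2, 0.6, step=0.1)),
        layers.Dense(3, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, directory="tuning", project_name="brainpad")
tuner.search(X_train[..., None], y_train, validation_split=0.2, epochs=20)
best_model = tuner.get_best_models(num_models=1)[0]
```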