This repository contains the code and pre-trained models used in the ICASSP 2023 submission "Self-Supervised Enhancement of Stimulus-Evoked Brain Response Data" by Bernd Accou, Hugo Van hamme, and Tom Francart.
The code and models in this repository require TensorFlow >= 2.3.0 and Python >= 3.6.0. TensorFlow can be installed via conda or pip (see also the TensorFlow installation guide).
Example installation using pip:

```bash
pip install tensorflow
```
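After installation, a quick sanity check (this snippet is illustrative and not part of the repository) confirms that the installed TensorFlow meets the stated version requirement:

```python
import tensorflow as tf

# Parse the major/minor components of the installed version string
major, minor = (int(part) for part in tf.__version__.split('.')[:2])

# The repository requires TensorFlow >= 2.3.0
assert (major, minor) >= (2, 3), f"TensorFlow {tf.__version__} is too old"
print(tf.__version__)
```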
Code for all models is stored in `models.py`.
Pre-trained models can be found in the `pretrained_models` folder:
- `shift_detection_model` contains the weights for the full shift detection model, including the enhancement module (a multi-view CNN based architecture) and the simple comparison model.
- `subject_independent_decoder` contains the weights for the subject-independent linear decoder, used in the paper for the downstream speech envelope decoding task.
Both models are saved in the TensorFlow SavedModel format.
Example code for loading the models:

```python
import tensorflow as tf

# Load the full shift detection model
shift_detection_model = tf.keras.models.load_model('pretrained_models/shift_detection_model')

# Extract the multi-view CNN based enhancement module
enhancement_module = shift_detection_model.get_layer('multiview_cnn')

# Extract the simple comparison model
simple_comparison_model = shift_detection_model.get_layer('simple_comparison_model')

# Load the subject-independent linear decoder
subject_independent_decoder = tf.keras.models.load_model('pretrained_models/subject_independent_decoder')
```
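The extraction above relies on Keras's `get_layer`, which retrieves a named sub-layer (or nested sub-model) from a loaded model. A minimal self-contained sketch of the same pattern on a toy model (all names and shapes here are made up for illustration and are not those of the pretrained models):

```python
import tensorflow as tf

# A named sub-model, standing in for the enhancement module
inner = tf.keras.Sequential(
    [tf.keras.layers.Dense(4, activation='relu')],
    name='enhancement_sketch',
)

# A toy outer model that nests the sub-model, as the shift
# detection model nests its enhancement module
inputs = tf.keras.Input(shape=(8,))
x = inner(inputs)
outputs = tf.keras.layers.Dense(1, name='head')(x)
model = tf.keras.Model(inputs, outputs)

# get_layer retrieves the nested sub-model by its name
extracted = model.get_layer('enhancement_sketch')
print(extracted.name)  # -> enhancement_sketch
```

Once extracted this way, a sub-model can be used on its own (e.g. called on new inputs) independently of the outer model it was loaded from.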