This app provides a BCI speller based on code-modulated visual evoked potentials (c-VEP) under the circular-shifting paradigm. The app allows you to develop high-speed, reliable BCIs for communication and control by encoding the application commands with shifted versions of a pseudorandom sequence.
Note
If you don't want to edit the paradigm, download the built app at https://medusabci.com/market/cvep_speller/
It is recommended to download the latest release of MEDUSA© and the matching version of this app at https://medusabci.com/market/cvep_speller/. If you want to edit this paradigm via source code, then:
- Create a new app in MEDUSA© via "Developer tools/Create app"
- Indicate a ".cvep" extension and make sure "Empty project" is marked. Identifier and name can be freely chosen (e.g. "cvep_speller" as identifier, and "c-VEP Speller" as name)
- After pressing OK, a new folder will be created at "src/accounts/{username}/{identifier}". Delete it.
- Clone this repository or download the source code of the release you want to install in "src/accounts/{username}/"
- Rename the cloned repository to match the identifier ("src/accounts/{username}/{identifier}")
- Done! Now you can edit the app in MEDUSA©.
The c-VEPs are visual evoked potentials generated by looking at a flickering source that follows a pseudorandom sequence. Usually, this sequence is binary (i.e., it only takes the values 0 or 1), and thus the flickering is encoded with black and white flashes. However, c-VEP-based BCIs rarely employ purely random sequences; they normally use sequences with special correlation properties.
Although each command could be modulated by a different code, finding a family of codes with suitable cross-correlation properties is not trivial. Thus, the classical approach relies on finding a pseudorandom binary sequence that presents low auto-correlation values for non-zero circular shifts, then encoding each command with time-delayed versions of the original sequence. This is known as the “circular shifting” paradigm.
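As a minimal sketch of this idea (with a placeholder sequence and placeholder lags, not the app's actual code), encoding the commands amounts to circularly shifting one base sequence by a different lag per command:

```python
import numpy as np

# Placeholder base sequence; in practice, a carefully chosen pseudorandom
# binary sequence (e.g., an m-sequence) is used instead
base = np.random.randint(0, 2, 63)

# Each command is encoded with a time-delayed (circularly shifted) version
lags = [0, 9, 18, 27, 36, 45, 54]        # placeholder lags
codes = {cmd: np.roll(base, -lag) for cmd, lag in enumerate(lags)}
```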
Maximal length sequences (i.e., m-sequences), easily generated by linear feedback shift registers (LFSR), are often employed in c-VEP-based BCIs due to their excellent autocorrelation properties: 1 for a null shift, and −1/N otherwise, where N is the length of the m-sequence. Although the stimuli of different commands will be uncorrelated, it cannot be claimed that the EEG responses will be uncorrelated as well. This effect can occur even when the brain is modeled as a linear system, and even more so when a nonlinear dynamic system is assumed. Even though the EEG responses to time-shifted stimuli are not as uncorrelated as the underlying bit sequences, there is usually enough distinction to identify their time shift. This is achieved by creating a template for each command, circularly shifting the main template according to the command lags. In online sessions, whenever an EEG response to several test cycles arrives, it is pre-processed and compared with all the templates. The selected command is then identified as the one whose template reaches the maximal correlation with the processed EEG response.
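The sketch below illustrates both ideas with generic NumPy code; it is not the app's internal implementation, and in the decoding part the stimulus sequence merely stands in for the averaged EEG template:

```python
import numpy as np

def lfsr_msequence(n_bits, taps):
    """Generate a maximal-length sequence with a Fibonacci LFSR.

    `taps` are 1-indexed register positions XORed to form the feedback bit;
    for a primitive tap configuration the period is 2**n_bits - 1.
    """
    state = [1] * n_bits                         # any non-zero seed works
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])                    # output bit
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]          # shift the register
    return np.array(seq)

# 63-bit m-sequence (taps 6 and 5 form a primitive configuration here)
m_seq = lfsr_msequence(n_bits=6, taps=[6, 5])

# Autocorrelation property: 1 at zero lag, -1/N at any other circular shift
s = 2.0 * m_seq - 1.0                            # map {0, 1} -> {-1, +1}
N = len(s)
autocorr = np.array([np.dot(s, np.roll(s, k)) / N for k in range(N)])
# autocorr[0] == 1.0, autocorr[1:] ~ -1/63

# Toy template matching: correlate a (noisy) response with each command's
# circularly shifted template and pick the command with maximal correlation
lags = [0, 9, 18, 27, 36, 45, 54]                # placeholder command lags
templates = np.stack([np.roll(s, -lag) for lag in lags])
response = np.roll(s, -18) + 0.5 * np.random.randn(N)   # toy "EEG" response
selected = int(np.argmax(templates @ response))  # should pick the lag-18 command
```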
Important
If you want to know more about the paradigm, the signal processing pipeline or the state-of-the-art methods used in c-VEP-based BCIs, we recommend reading the following paper: Martínez-Cagigal, Víctor, et al. "Brain–computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review." Journal of Neural Engineering (2021). https://iopscience.iop.org/article/10.1088/1741-2552/ac38cf
Run settings:
- The app has two functioning modes: “Train” and “Online”. Use “Train” to record the data for calibrating the system. Use “Online” when you already have a decoding model and want to select speller commands freely.
- The number of trials and the number of cycles can be configured for the “Train” mode. See the recommendations section below to know more.
- This app does not implement an early stopping approach, so it is required to specify the number of cycles per trial for the “Online” mode.
- The app allows you to specify the presentation rate of the m-sequence. Use 60 Hz if you are using a standard monitor; read the recommendations section to know more.
Encoding and matrix:
- This app supports binary m-sequences of length 31, 63, 127 and 255 bits; select the one that you want to use!
- Configure the number of commands in the online mode as you wish. Please note that the length of the m-sequence must be enough to encode all commands! To make sure of this, press the “Visualize encoding” button to check the lags associated with each command; a sketch of the idea is shown below this list. The app will try to space the lags as much as possible.
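As a sketch of that check (the general idea only, not the app's exact lag-assignment code), evenly spaced lags and a uniqueness test could look like this:

```python
seq_len = 63          # length of the selected m-sequence (31, 63, 127 or 255)
n_commands = 16       # number of commands in the speller matrix

# Spread the lags as evenly as possible over the sequence length
lags = [round(i * seq_len / n_commands) for i in range(n_commands)]

# The m-sequence is long enough only if every command gets a distinct lag
assert len(set(lags)) == n_commands, "Too many commands for this m-sequence!"
```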
Colors:
- The colors functionality allows you to use different colors for the GUI and the encoding of the binary m-sequences. Try it out!
Model training:
- Use the button “Train model” to train the decoding model. Signal processing is entirely based on the “reference processing pipeline” for c-VEP-BCIs based on circular shifting (check this out to know more).
- Use a filter bank to improve the decoding accuracy for 120 Hz monitor rates!
Tip
C-VEPs are exogenous signals generated naturally by our brains in response to stimuli. For that reason, c-VEP-based BCIs do not require users to be trained, only a short calibration. In the calibration stage, the user is asked to pay attention to a flickering command encoded with the original m-sequence. We recommend using at least 100 complete cycles (i.e., full stimulations of the m-sequence) to train the model; that is, two runs of 5 trials each, where each trial is composed of 10 cycles. It is important to avoid blinking while trials are being displayed; users can blink freely during the inter-trial time window.
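As a quick sanity check of these numbers (assuming a 63-bit m-sequence presented at 60 Hz; adjust the values to your own settings):

```python
# Recommended calibration: 2 runs x 5 trials x 10 cycles = 100 cycles
runs, trials_per_run, cycles_per_trial = 2, 5, 10
total_cycles = runs * trials_per_run * cycles_per_trial   # 100

# Stimulation time for a 63-bit m-sequence presented at 60 Hz
seq_len, fps = 63, 60
cycle_duration = seq_len / fps                            # 1.05 s per cycle
print(total_cycles * cycle_duration, "s")                 # 105 s, excluding inter-trial pauses
```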
Tip
If your monitor is capable of refreshing at 120 Hz, we recommend using a “Target FPS (Hz)” that matches the monitor refresh rate. Imagine that you are using a 63-bit m-sequence: at a 60 Hz presentation rate, each cycle lasts 1.05 s (i.e., 63/60), whereas at 120 Hz that duration is halved to 0.525 s (i.e., 63/120). If you are using a 120 Hz presentation rate, we recommend using more than a single filter. For instance, a filter bank composed of 3 IIR filters, (1, 60), (12, 60) and (30, 60), usually gives good results.
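A possible way to build such a filter bank with SciPy is sketched below; the three filters are interpreted as band-pass filters with cut-off frequencies in Hz, and the sampling rate, filter type, order and zero-phase filtering are illustrative assumptions rather than the app's actual settings:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256                                 # EEG sampling rate (assumed)
bands = [(1, 60), (12, 60), (30, 60)]    # filter bank suggested for 120 Hz rates

# One Butterworth band-pass IIR filter per band, in second-order sections
sos_bank = [butter(5, band, btype="bandpass", fs=fs, output="sos") for band in bands]

# Apply the bank to a (samples x channels) EEG array -> one filtered copy per band
eeg = np.random.randn(10 * fs, 8)        # toy signal for illustration
filtered = [sosfiltfilt(sos, eeg, axis=0) for sos in sos_bank]
```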