Deep Learning in Brain-Computer
Interface
Leverage the power of Deep Learning in Neuroscience.
Allen Wu
yanshiun.wu@sjsu.edu
Introduction
• The concept of the brain-computer interface usually appears in sci-fi movies, and we tend to think it is far away from us.
• However, thanks to progress in the computer industry, this concept is no longer an unrealistic dream but a promising future.
• On the next slide, Neuralink shows an interesting video of their experiment on monkey motor imagery.
Neuralink Monkey
Neuroscience Terminology
• Motor Imagery (MI)
• Brain-Computer Interface (BCI)
• Common Types of BCI
• Exoskeleton
Motor Imagery (MI)
• Motor imagery is the mental rehearsal of a movement without physically performing it; in other words, it is using the mind alone to control something.
• For example, “the Neuralink monkey imagines playing Pong to control his paddle” is motor imagery.
From “Rehabilitation Procedures in the Management of Parkinson’s Disease,” Figure 1(a)
Brain-Computer Interface (BCI)
• A BCI is a device that records signals from the brain and translates them for a computer, and vice versa.
• An MI BCI usually contains the following four components [2] (a minimal code sketch follows the list):
1. Brain signal recording device.
2. Feature extraction step.
3. A decoder that translates features into actions understandable by a
computer.
4. A device that executes the commands from the decoder.
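To make these four components concrete, here is a minimal, hypothetical sketch of one pass through an MI BCI loop in Python. All names (record_window, extract_features, Decoder, Actuator) are illustrative placeholders, not code from any of the cited papers, and the recording is simulated with random noise.

```python
import numpy as np

def record_window(n_channels=64, n_samples=256):
    """Component 1: acquire one window of brain signal.
    Simulated with random noise here; a real system would
    read from an ECoG/EEG amplifier."""
    return np.random.randn(n_channels, n_samples)

def extract_features(window):
    """Component 2: turn the raw window into features,
    simplified here to per-channel variance (a stand-in
    for band-power features)."""
    return window.var(axis=1)

class Decoder:
    """Component 3: translate features into an action the
    computer understands; a linear readout stands in for
    the trained deep learning model."""
    def __init__(self, n_features, n_outputs=3):
        self.weights = np.zeros((n_outputs, n_features))

    def predict(self, features):
        return self.weights @ features  # e.g. a 3D hand velocity

class Actuator:
    """Component 4: execute the decoded command, e.g. move
    a cursor or an exoskeleton."""
    def execute(self, command):
        print(f"move by {command}")

# One pass through the loop.
decoder = Decoder(n_features=64)
actuator = Actuator()
actuator.execute(decoder.predict(extract_features(record_window())))
```

In a real closed-loop system this would run continuously, with the subject watching the actuator and adjusting their imagery in response.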
Common Types of BCI
1. Invasive:
• Intracortical microelectrode arrays (MEA)
• Location: inside the cortex
2. Semi-invasive:
• Electrocorticography (ECoG)
• Location: on the surface of the cortex
3. Non-invasive:
• Electroencephalography (EEG)
• Location: on the scalp
• Accuracy: MEA > ECoG > EEG
• Risk: MEA > ECoG > EEG
From “Cortical neuroprosthetics from a clinical perspective,” Figure 2
Exoskeleton
• An exoskeleton is external equipment that supports the subject’s body and performs the subject’s intended movements, replacing or enhancing the body’s own performance.
From “An exoskeleton controlled by an epidural wireless brain-machine interface in a tetraplegic patient: a proof-of-concept demonstration,” Figure 1(c)
Tetraplegia with the ECoG BCI
Objective
• This research used an offline ECoG dataset to decode the subject’s imagined 3D hand translations using deep learning models.
Challenges
1. The patient’s attention level, tiredness, or inexact imagination can affect the results of the experiments.
2. This is a closed-loop experiment, in which a patient has to correct
the erroneous movements, which complicates the trajectory.
3. This research has only one participant. It is hard to generalize these
results to other patients.
4. This research has only a few samples, so the models may overfit.
Cosine Similarity and Loss Function
• The research used cosine similarity (CS) to build a loss function and to evaluate the model predictions (a sketch of this loss follows the figure references).
From “Decoding ECoG signal into 3D hand translation using deep learning,” Figure 3
From “Decoding ECoG signal into 3D hand translation using deep learning,” Formula
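The paper’s formula image is not reproduced here, but the standard definition is CS(y, ŷ) = (y · ŷ) / (‖y‖ ‖ŷ‖), and a common way to turn it into a loss is to minimize 1 − CS. The following sketch assumes that usual formulation; the paper’s exact loss may differ in detail.

```python
import numpy as np

def cosine_similarity(y_true, y_pred, eps=1e-8):
    """CS(y, y_hat) = (y . y_hat) / (||y|| * ||y_hat||),
    computed per sample for 3D translation vectors."""
    num = np.sum(y_true * y_pred, axis=-1)
    den = np.linalg.norm(y_true, axis=-1) * np.linalg.norm(y_pred, axis=-1)
    return num / (den + eps)

def cosine_loss(y_true, y_pred):
    """Loss to minimize: 1 - CS, averaged over the batch."""
    return np.mean(1.0 - cosine_similarity(y_true, y_pred))

# Example: two predicted vs. true 3D translations.
y_true = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
y_pred = np.array([[0.9, 0.1, 0.0], [0.0, 0.5, 0.5]])
print(cosine_loss(y_true, y_pred))  # small, since directions roughly agree
```

A CS of 1 means the predicted and true translation directions agree perfectly (loss 0); opposite directions give a loss of 2.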
Models
From “Decoding ECoG signal into 3D hand translation using deep learning,” Figure 6
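Figure 6 of the paper shows the actual architectures. As a rough illustration of the best-performing family (CNN2D+LSTM+MT, see the next slide), here is a minimal PyTorch sketch: 2D convolutions over each time step’s channel-by-frequency features, an LSTM across time steps, and a linear head for the 3D translation. All layer sizes are invented, and the multitask (MT) part is simplified to a single regression head.

```python
import torch
import torch.nn as nn

class CNN2DLSTM(nn.Module):
    """Illustrative decoder: 2D conv feature extractor per time step,
    LSTM across time steps, linear head for 3D hand translation.
    All sizes are invented; see the paper's Figure 6 for the real design."""
    def __init__(self, n_channels=64, n_freq=40, hidden=64, dropout=0.5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout(dropout),  # high dropout helped on this noisy dataset
        )
        conv_out = 16 * (n_channels // 2) * (n_freq // 2)
        self.lstm = nn.LSTM(conv_out, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # x, y, z translation

    def forward(self, x):
        # x: (batch, time, channels, freq)
        b, t, c, f = x.shape
        feats = self.conv(x.reshape(b * t, 1, c, f)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])  # predict from the last time step

model = CNN2DLSTM()
dummy = torch.randn(2, 10, 64, 40)  # batch of 2, 10 time steps
print(model(dummy).shape)           # torch.Size([2, 3])
```

Note the high dropout rate; its importance on this dataset is discussed under Interesting Findings.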
Result
• They found CNN2D+LSTM+MT to be the best model for 3D hand translation, as the results below show.
From “Decoding ECoG signal into 3D hand translation using deep learning,” Table 5
Interesting Findings
• Dropout is a very important regularization method on this dataset, and accuracy relies on a higher dropout rate (a small demo follows the figure reference).
• This may be caused by a low signal-to-noise ratio (SNR), i.e., a lot of noise in the dataset.
From “Decoding ECoG signal into 3D hand translation using deep learning” Figure 12
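To see mechanically what a higher dropout rate does, here is a tiny PyTorch demo (not from the paper): in training mode, dropout zeroes a fraction p of activations and rescales the survivors by 1/(1 − p), which acts as strong regularization on noisy, low-SNR features.

```python
import torch
import torch.nn as nn

# Apply increasing dropout rates to the same activations.
x = torch.ones(1, 10)
for p in (0.1, 0.5, 0.8):
    drop = nn.Dropout(p)
    drop.train()  # dropout is only active in training mode
    print(p, drop(x))
```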
• More layers, less accuracy.
• Using more than two convolutional layers can decrease the models’ accuracy. This may be caused by the relatively small dataset: more layers mean more weights to tune, and more weights need more data (the toy parameter count below illustrates this).
From “Decoding ECoG signal into 3D hand translation using deep learning,” Figure 7
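A toy parameter count, not the paper’s models, just to show how quickly weights accumulate with depth:

```python
import torch.nn as nn

# More conv layers means more weights to fit; with a small dataset
# this raises the risk of overfitting. Count parameters as depth grows.
def conv_stack(n_layers, ch=16):
    layers, in_ch = [], 1
    for _ in range(n_layers):
        layers += [nn.Conv2d(in_ch, ch, kernel_size=3, padding=1), nn.ReLU()]
        in_ch = ch
    return nn.Sequential(*layers)

for n in (1, 2, 3, 4):
    n_params = sum(p.numel() for p in conv_stack(n).parameters())
    print(f"{n} conv layer(s): {n_params} parameters")
```

With only a few samples from a single participant, each extra layer adds thousands of parameters that the data cannot constrain, which is consistent with the overfitting explanation above.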
Next
• This research needs more patients so that some of the problems and challenges can be addressed and the results generalized.
• They can also deploy the model for online training and prediction to see whether there are more exciting findings.
References
• M. Śliwowski, M. Martin, A. Souloumiac, P. Blanchart, and T. Aksenova, “Decoding ECoG signal into 3D hand translation using deep learning,” Journal of Neural Engineering, vol. 19, no. 2, Mar. 2022.
• J. J. Shih, D. J. Krusienski, and J. R. Wolpaw, “Brain-computer interfaces in medicine,” Mayo Clinic
Proceedings, vol. 87, no. 3, pp. 268–279, 2012.
• G. Abbruzzese, L. Avanzino, R. Marchese, and E. Pelosin, “Action observation and motor imagery: Innovative cognitive tools in the rehabilitation of Parkinson’s disease,” Parkinson’s Disease, vol. 2015, pp. 1–9, Oct. 2015.
• A. P. Tsu, M. J. Burish, J. GodLove, and K. Ganguly, “Cortical neuroprosthetics from a clinical
perspective,” Neurobiology of Disease, vol. 83, pp. 154–160, Nov. 2015.
• A. L. Benabid, T. Costecalde, A. Eliseyev, G. Charvet, A. Verney, S. Karakas, M. Foerster, A. Lambert,
B. Morinière, N. Abroug, M.-C. Schaeffer, A. Moly, F. Sauter-Starace, D. Ratel, C. Moro, N. Torres-
Martinez, L. Langar, M. Oddoux, M. Polosan, S. Pezzani, V. Auboiroux, T. Aksenova, C. Mestais, and
S. Chabardes, “An exoskeleton controlled by an epidural wireless brain–machine interface in a
tetraplegic patient: A proof-of-concept demonstration,” The Lancet Neurology, vol. 18, no. 12, pp.
1112–1122, Oct. 2019.
