Brain-Computer Interface for control of an on-screen
keyboard
Magani, Pablo Sebastián; Iatzky, Pedro Gastón
Bioengineering Laboratory, Faculty of Engineering, Mar del Plata National University
maganipablo@hotmail.com
gastiatzky@hotmail.com
Abstract—Brain-computer interfaces (BCIs) are technological
devices based on the acquisition of brain signals and their
processing and classification, with the purpose of providing the
user with control over external devices or applications. BCIs are
an attractive option to improve the quality of life of people with
severe motor impairments, since they allow them to communicate
and to send commands to other devices by using their EEG signals.
For this project, a small, low cost circuit was designed for the
acquisition of EEG signals, and it was implemented in a BCI that
allowed the user to control an on-screen keyboard using SSVEPs.
Both the circuit and the software developed yielded excellent
results, proving the project to be an appropriate non-invasive
system for the aid of severely disabled people.
Keywords—BCI, EEG acquisition circuit, speller, SSVEP
I. INTRODUCTION
Many different disorders can disrupt the neuromuscular
channels through which the brain communicates with and
controls its external environment. Amyotrophic lateral sclerosis
(ALS), brainstem stroke, spinal cord injury, multiple sclerosis,
and numerous other diseases impair the neural pathways that
control muscles or impair the muscles themselves. Those most
severely affected may lose almost all (or all) voluntary muscle
control, leaving them unable to communicate in any way.
Brain-computer interfaces (BCIs) provide the user with an
alternative way to send commands and messages to the external
world by using only their electroencephalographic (EEG)
signals. These systems record, process and classify brain
signals in order to translate them into commands, according to
the user’s intentions. This way, a severely disabled subject can
communicate with the external world. For this work, a circuit
for the acquisition of EEG signals was designed and
manufactured, in order to be used in a BCI based on Steady
State Visually Evoked Potentials (SSVEPs) that allows the user
to spell words with an on-screen keyboard. Processing,
classification and graphical interface were developed using
MATLAB®.
II. METHODOLOGY
A. EEG Acquisition Circuit
Our objective was to design and manufacture a small (less
than 10x10 cm), low cost EEG acquisition circuit with at least 8
input channels. We used an integrated circuit called ADS1299,
manufactured by Texas Instruments Inc., as the main
component of our circuit [1]. Altium Designer®
was used to
design the printed circuit board. A picture of the printed
circuit board, with components already populated, is shown in
Fig. 1. All components were hand soldered using solder paste. The
dimensions of the finished circuit are 5x5 cm. Table 1 shows
the cost of one acquisition circuit, not taking into account the
soldering process. The circuit requires an external digital
controller to communicate with a PC through a USB
port. We used a ChipKit Uno32 board for this purpose.
Fig. 1. EEG acquisition circuit.
TABLE 1
COSTS

Item                        Cost (USD)
ADS1299                          66.00
Other ICs (supply, etc.)          8.62
Passive components                9.83
PCB                               7.35
TOTAL                            91.80
B. SSVEP-based speller
SSVEPs appear mainly over the primary visual cortex when
a person focuses on a repetitive visual stimulus, such as a
blinking light at a fixed frequency [2]. These evoked potentials
have a greater amplitude on electrode locations Oz, O1, O2 and
POz, according to the extended 10-20 standard [3]. The
fundamental frequency of a SSVEP is the same as the
frequency of the visual stimulus that caused it.
The on-screen keyboard presented in this work was
developed using Psychophysics Toolbox, a free MATLAB
toolbox. The keyboard is shown in Fig. 2. It includes several
special characters, the digits 0 to 9 and the “<” character,
which works as a backspace key. The numbers 1 to 4 placed
around the 40 characters indicate the numbering of four
blinking rectangles. Each of these rectangles blinks at a unique
frequency, and their purpose is to provide the user with the
stimulus necessary for generating SSVEPs. In order to send
commands to the keyboard, the user must focus on one of the
blinking rectangles and the program must correctly identify on
which one the user was focusing.
Fig. 2. On-screen keyboard. On the right upper corner the text being written by
the user is shown (“QUE”). Numbers 1, 2, 3 and 4 on the sides indicate the
numbering of the blinking rectangles, which are not shown in this image.
The steps required for writing a single character are as
follows:
1. The keyboard is presented on the screen for two
seconds. The user must find the character he/she
wants to select and memorize which row that
character is in. This step is portrayed in Fig. 2.
2. The keyboard disappears. The rectangles start
blinking and the user should focus on the rectangle
that corresponds with the row in which the desired
character was located. After 4 seconds, the
rectangles stop blinking and the keyboard reappears.
3. The classification algorithm analyzes the EEG
signals and determines on which rectangle the user
was focusing, which results in the program
knowing on which row the desired character is.
Then, the first three characters of that row are
highlighted.
4. If the user wants to write one of those three
highlighted characters, he/she must focus on the
corresponding rectangle from the subset {1,2,3}. If
the user wants to write a character that is in that row
but further to the right (i.e., not highlighted), he/she
must focus on the fourth blinking rectangle.
5. The keyboard disappears and the rectangles start
blinking again for four seconds. Then, the keyboard
reappears.
6. The classification algorithm determines on which
rectangle the user was focusing. If the user focused
on a rectangle from the subset {1,2,3}, the selected
character is added to the ones already written on the
right upper corner and the process ends.
7. If the user focused on the fourth blinking rectangle,
the next three characters of the selected row are
highlighted and steps 4, 5 and 6 are repeated.
8. If the user focuses on the fourth blinking rectangle
again, the last four characters of the selected row
are highlighted and the user must now focus on one
of the blinking rectangles, since each one now
corresponds with a highlighted character. The
keyboard disappears and the rectangles start
blinking one last time. The classification algorithm
identifies on which rectangle the user was focusing
and the corresponding character is added to the
ones already written in the right upper corner.
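The row-then-character selection logic of these steps can be sketched as follows. This is a minimal illustration only: the 10-characters-per-row layout, the function name and the way classifier results are supplied are assumptions, not the actual GUI code.

```python
# Sketch of the speller's selection logic from steps 1-8 (illustrative;
# the 10-character row layout and function name are assumptions).

def select_character(row, trials):
    """Resolve one character of a row from successive classifier outputs.

    row    : the characters of the row chosen in the first trial
    trials : iterator yielding the classifier's result (1-4) for each
             blinking phase
    """
    offset = 0  # index of the first currently highlighted character
    while True:
        remaining = len(row) - offset
        if remaining <= 4:            # step 8: one rectangle per character
            return row[offset + next(trials) - 1]
        choice = next(trials)         # steps 4-6: rectangles 1-3 select,
        if choice in (1, 2, 3):       # rectangle 4 means "further right"
            return row[offset + choice - 1]
        offset += 3                   # step 7: highlight the next three

# Example: results [4, 4, 2] skip the first two groups of three and then
# pick the second of the last four characters of the row.
print(select_character("ABCDEFGHIJ", iter([4, 4, 2])))  # -> H
```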
The placement of each character is such that the most
frequently used in the Spanish language are on the leftmost side of the keyboard
[4]. This way the writing speed is maximized, since most of the
time the character that the user wants will be among the first
three of the selected row, and they will be highlighted in step 3.
The “<” character was placed on the left side too, so that the
user can rapidly correct a spelling error.
C. Processing
Let a trial be each instance of the program in which the
rectangles blink, EEG signals are recorded and then processed
and classified. Input signals come from four channels,
corresponding to four electrodes placed on Oz, O1, O2 and POz,
all referred to the left earlobe. The sampling frequency is 250
Hz. On each trial, 1024 samples are taken from each channel
for processing and classification. The result of each trial is a
single number from 1 to 4, which corresponds to one of the four
blinking rectangles.
EEG data is digitally filtered using a combined filter,
resulting from the discrete convolution of two other FIR filters.
One is a band-pass filter with a passband from 5 to 38 Hz; the
other is an optimum notch filter at 50 Hz (the line
frequency in Argentina) [5]. The rectangles blinked at
frequencies between 9 and 20 Hz so as to evoke SSVEPs of
greater amplitude.
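This filtering stage can be sketched in Python with SciPy. The tap counts are assumptions, and a plain window-design band-stop stands in for the optimum notch design of [5]:

```python
import numpy as np
from scipy import signal

fs = 250  # sampling frequency in Hz, as used in this work

# Band-pass FIR with a 5-38 Hz passband (tap count is an assumption)
bp = signal.firwin(129, [5, 38], pass_zero=False, fs=fs)

# FIR notch at the 50 Hz line frequency; a simple window-design
# band-stop standing in for the optimum notch filter of [5]
notch = signal.firwin(129, [48, 52], fs=fs)  # pass_zero=True -> band-stop

# Combined filter: the discrete convolution of the two impulse responses
h = np.convolve(bp, notch)

# Apply the combined filter to one channel of (synthetic) EEG data
eeg = np.random.default_rng(0).standard_normal(1024)
filtered = signal.lfilter(h, 1.0, eeg)
```

Convolving the two impulse responses yields a single FIR filter of length 129 + 129 − 1 = 257 that applies both responses in one pass.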
Since the classification algorithm’s function is to determine
on which of the rectangles the user was focusing and since
SSVEPs have a fundamental frequency equal to that of the
stimulus that evoked them, it follows that the spectral
densities of the EEG signals carry the required information. The
Fast Fourier Transform (FFT) was tried first to calculate the
periodogram of each channel. However, we found that
computing the FFT using all 1024 samples of a trial did not
reveal the spectral peak that is characteristic of SSVEPs, but
computing it with, for example, the first 512 samples of the
same trial would then show a spectral peak. This could happen
because the oscillation in the EEG signal evoked by the
blinking rectangle is not present during the whole duration of
the trial. The cause of this could be attributed to factors that are
out of our control and cannot be predicted, such as brief
interferences, artifacts or momentary distractions of the user.
A different method of estimating the spectral density was
tried, called Welch’s method [6]. When using Welch’s method
to estimate a spectral density, the input samples are divided into
segments with a defined overlap. A Hamming windowing
function is applied on each segment (in the temporal domain)
and then the FFT of each segment is calculated. The resulting
values are squared to obtain several periodograms, which
are then averaged. The averaging reduces the variance of the
estimate, and the aforementioned problem disappears. In
Fig. 3 the result of a spectral density estimation using Welch’s
Method is shown. The user was focusing on a 10 Hz blinking
rectangle and the EEG signal was recorded from an electrode
placed on location Oz. An inherent drawback when using
Welch’s method is a lower frequency resolution, since the
individual FFTs are computed using fewer data points. As a
consequence, the frequency of each of the blinking squares
must differ from the others by at least twice the frequency
resolution obtained.
Fig. 3. Spectral density estimation using Welch’s method. The peak at 10 Hz is
consistent with the frequency of the blinking rectangle the user was focusing
on during this trial.
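The estimation described above can be reproduced with SciPy's implementation of Welch's method. The synthetic signal and the 256-sample segment length are assumptions for illustration, not the paper's recorded data or parameters:

```python
import numpy as np
from scipy import signal

fs = 250                      # sampling frequency (Hz)
t = np.arange(1024) / fs      # one trial: 1024 samples per channel

# Synthetic Oz-channel signal: a 10 Hz SSVEP-like oscillation in noise
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Welch's method: Hamming-windowed segments with 50% overlap
# (the 256-sample segment length is an illustrative choice)
f, pxx = signal.welch(eeg, fs=fs, window="hamming",
                      nperseg=256, noverlap=128)

# Frequency resolution is fs/nperseg (about 1 Hz here), so the blinking
# frequencies must be spaced at least twice this resolution apart
peak = f[np.argmax(pxx)]
print(peak)  # the peak lands on the frequency bin nearest 10 Hz
```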
D. Classification Algorithm
As mentioned before, the purpose of classification is to
determine, for each trial, on which blinking rectangle the user
was focusing. The inputs to the classification block of the
program are four spectral density estimations, computed using
Welch’s method on the samples taken from the four electrodes
placed over the visual cortex. These spectral density
estimations will be named P_1(f), P_2(f), P_3(f) and P_4(f).
Given a set of frequencies F = {f_1, f_2, f_3, f_4}, in which f_n is
the frequency at which rectangle n blinks, a score vector is
defined as S = [s_1, s_2, s_3, s_4], where s_n is the classification
score for blinking rectangle n. This vector is set to all zeroes
at the beginning of each trial. Then,

[i, A_j] = max_i { P_j(f_i) }

is computed for each P_j, where i is an integer from 1 to 4
indicating which of the frequencies f_i in set F gives the
highest amplitude A_j in spectral density estimation P_j. Let Δf
be the frequency range of interest (from 8 to 20 Hz), so that the
SNR (signal-to-noise ratio) of each estimation j is defined as

SNR_j = A_j / (average of P_j over the Δf range)

If the SNR_j value is greater than a certain threshold Ω_single,
it is added to the corresponding score s_i of the score vector
S. After calculating the four SNR_j values and assigning the
appropriate scores to S, the classification algorithm
determines that the user was focusing on the blinking rectangle
with the highest score, but only if this final score is also
greater than a second threshold, Ω_group. These two thresholds
were tuned empirically to the values that yielded the best
results. If no element of S exceeds the Ω_group
threshold, the user is asked to repeat the trial by means of
an on-screen message that reads “REPEAT”.
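The scoring procedure can be sketched as follows. The threshold values and the synthetic input are illustrative assumptions, not the tuned values used in the experiments:

```python
import numpy as np

def classify_trial(psds, freqs, stim_freqs, omega_single=2.0, omega_group=4.0):
    """SNR-scoring classifier sketch; thresholds are illustrative values.

    psds       : four spectral density estimates P_j(f), one per channel
    freqs      : common frequency axis of the estimates
    stim_freqs : blinking frequencies f_1..f_4 of the rectangles
    Returns the winning rectangle (1-4), or None for a "REPEAT" trial.
    """
    band = (freqs >= 8) & (freqs <= 20)        # the Δf range of interest
    scores = np.zeros(len(stim_freqs))         # score vector S
    bins = [np.argmin(np.abs(freqs - f)) for f in stim_freqs]

    for p in psds:
        amps = [p[b] for b in bins]            # P_j evaluated at each f_i
        i = int(np.argmax(amps))               # frequency with amplitude A_j
        snr = amps[i] / p[band].mean()         # SNR_j = A_j / band average
        if snr > omega_single:
            scores[i] += snr
    best = int(np.argmax(scores))
    return best + 1 if scores[best] > omega_group else None

# Synthetic check: four flat spectra with a strong peak near 10 Hz should
# be classified as rectangle 2 when the rectangles blink at 9/10/12/14 Hz
freqs = np.linspace(0, 125, 513)
psd = np.ones_like(freqs)
psd[np.argmin(np.abs(freqs - 10))] = 10.0
print(classify_trial([psd] * 4, freqs, [9, 10, 12, 14]))  # -> 2
```

When no spectrum shows a clear peak, all SNR values fall below the thresholds and the function returns None, mirroring the on-screen "REPEAT" case.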
III. RESULTS
A. Performance of the acquisition circuit
To measure the performance of the acquisition circuit
designed, a self-noise measurement was made. This
measurement involves short-circuiting the differential input
pins and shorting them to ground. Since the input signal is
zero, the output of the system is the noise generated by the
circuit itself.
In Fig. 4 a measurement of self-noise is shown. Gain was set
to 24 (higher gain lowers the input-referred noise of the
ADS1299) and the data rate was set to 250 samples per second.
The standard deviation σ_N and the peak-to-peak value V_pp
were computed using the following equations:

σ_N = sqrt( (1/M) · Σ_{i=1}^{M} (x_i − E[x])² )

V_pp = max(x) − min(x)
where M is the number of samples taken (M = 9000 in this case)
and E[x] is the mean of the signal. The results obtained were
σ_N = 0.0454 µV and V_pp = 0.35 µV. Comparing these values
with the self-noise measurements given in the ADS1299
datasheet by Texas Instruments Inc. (σ_N = 0.14 µV and
V_pp = 0.98 µV), we conclude that the acquisition circuit
design and the components chosen are appropriate.
Fig. 4. Self-noise measurement on the acquisition system built.
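The two noise figures follow directly from the shorted-input recording. A sketch with synthetic data standing in for the real M = 9000-sample measurement:

```python
import numpy as np

# Synthetic stand-in for a shorted-input recording, in microvolts;
# a real measurement would use the M = 9000 samples from the ADS1299
rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.045, size=9000)

M = x.size
sigma_n = np.sqrt(np.sum((x - x.mean()) ** 2) / M)  # standard deviation σ_N
v_pp = x.max() - x.min()                            # peak-to-peak value V_pp
print(sigma_n, v_pp)
```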
B. Performance of the classifier
To quantify the performance of the classifier itself, a number
of trials were run on two healthy 24-year-old male subjects. In
each of these trials, the program indicated to the user which
blinking rectangle he should focus on. The results of each trial were then
categorized into three groups: Correct, Incorrect and Repeat.
Trials were placed in the Repeat category when the classification
algorithm did not produce a result because no element of the score
vector S exceeded Ω_group. Trials in the Incorrect category
produced a result that did not match the one expected from the
indication given to the user, and trials in the Correct category
produced a matching result. Out of 66 trials, 59 were classified
as Correct and the remaining 7 fell into the Repeat category;
no trial gave an incorrect result. If we define the algorithm's
accuracy as the number of Correct trials divided by the total
number of trials minus the number of Repeat trials, then the
accuracy for this set of trials was 100%. In various further
tests, in which users were free to spell any word they wanted,
the accuracy was found to be over 98% in most
cases. This shows that the classifying algorithm is very robust
if the correct configuration parameters are chosen (filter
attenuation, cutoff frequencies, blinking-rectangle frequencies, threshold
levels, samples per trial, etc.). In most cases, however, reducing
trial duration (by taking fewer samples, for example) causes a
drop in accuracy and writing speed. Likewise, raising the
thresholds causes an increase in accuracy, but a drop in writing
speed. Low accuracy translates to slower writing because the
user spends two trials just to erase the incorrect character. It is
best to find a set of parameters that yields high accuracy and a
reasonable writing speed.
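As a worked example of the accuracy definition above, using the trial counts reported for this session:

```python
# Accuracy = Correct / (Total - Repeat), with the counts reported above
correct, repeat, total = 59, 7, 66
accuracy = correct / (total - repeat)
print(accuracy)  # -> 1.0, i.e. 100% for the 66-trial session
```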
IV. CONCLUSION
A small, low cost EEG acquisition circuit was designed,
built and implemented in a BCI in order to control an on-screen
keyboard. The performance of the circuit and the software
designed was excellent, achieving high precision and writing
speed. The circuit designed can be used to measure other
bioelectrical signals, such as the ECG (electrocardiography).
Improvement of the shielding of the circuit, in order to
minimize external noise and interference, and implementation
of algorithms for detection and correction of artifacts are
proposed as future works for improving the presented BCI. We
hope this work promotes local research in the fields of EEG
and BCI, so that these types of devices become more easily
accessible to severely disabled people, who can ultimately
benefit from them and improve their quality of life.
ACKNOWLEDGEMENTS
The authors of this work would like to thank the following
people for their unconditional help and support during this
project: our families and friends, the members of the
Bioengineering Lab in the Faculty of Engineering of the Mar
del Plata National University, Alejandro Uriz, Gustavo Uicich,
Jonatan Fischer, Ariel Nieto and our directors (Gustavo
Meschino and Isabel Passoni).
REFERENCES
[1] Texas Instruments Inc. (2012, August). ADS1299 Datasheet. Retrieved
from http://www.ti.com/lit/ds/symlink/ads1299.pdf
[2] Beverina, F., Palmas, G., Silvoni, S., Piccione, F., & Giove, S. (2003).
User adaptive BCIs: SSVEP and P300 based interfaces. PsychNology
Journal, Vol. 1(3), p. 230-242.
[3] Towle, V. L., Bolaños, J., Suarez, D., Tan, K., Grzeszczuk, R., Levin, D.
N., . . . Spire, S.-P. (1993, January). The spatial location of EEG
electrodes: Locating the best-fitting sphere relative to cortical anatomy.
Electroencephalography and Clinical Neurophysiology, Vol. 86(1), p. 1-
6. doi:10.1016/0013-4694(93)90061-Y.
[4] Pratt, F. (1942). Secret and Urgent: the Story of Codes and Ciphers.
Garden City, NY, United States: Blue Ribbon Books.
[5] Zahradnik, P., & Vlcek, M. (2013, April). Notch Filtering Suitable for
Real Time Removal of Power Line Interference. Radioengineering, Vol.
22(1), p. 186-193.
[6] Welch, P. D. (1967). The Use of Fast Fourier Transform for the
Estimation of Power Spectra: A Method Based on Time Averaging Over
Short, Modified Periodograms. IEEE Transactions on Audio and
Electroacoustics, Vol. 15(2), p. 70-73. doi:10.1109/TAU.1967.1161901.
More Related Content

What's hot

Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...
Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...
Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...ijceronline
 
Automatic speech recognition
Automatic speech recognitionAutomatic speech recognition
Automatic speech recognitionRichie
 
Emotiv Epoc/EEG/BCI
Emotiv Epoc/EEG/BCIEmotiv Epoc/EEG/BCI
Emotiv Epoc/EEG/BCISuhail Khan
 
Speaker identification
Speaker identificationSpeaker identification
Speaker identificationTriloki Gupta
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)ijceronline
 

What's hot (7)

Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...
Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...
Emotion Recognition based on audio signal using GFCC Extraction and BPNN Clas...
 
Automatic speech recognition
Automatic speech recognitionAutomatic speech recognition
Automatic speech recognition
 
SPEAKER VERIFICATION
SPEAKER VERIFICATIONSPEAKER VERIFICATION
SPEAKER VERIFICATION
 
Emotiv Epoc/EEG/BCI
Emotiv Epoc/EEG/BCIEmotiv Epoc/EEG/BCI
Emotiv Epoc/EEG/BCI
 
Speaker identification
Speaker identificationSpeaker identification
Speaker identification
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
573 248-259
573 248-259573 248-259
573 248-259
 

Viewers also liked

Pedro ,Pedro Gazola e Gabriel
Pedro ,Pedro Gazola e GabrielPedro ,Pedro Gazola e Gabriel
Pedro ,Pedro Gazola e GabrielGabrieladami
 
Brain-computer interface based on SSVEP
Brain-computer interface based on SSVEPBrain-computer interface based on SSVEP
Brain-computer interface based on SSVEPringoring
 
BRAIN COMPUTER INTERFACE
BRAIN COMPUTER INTERFACEBRAIN COMPUTER INTERFACE
BRAIN COMPUTER INTERFACERushi Prajapati
 
Brain maps from machine learning? Spatial regularizations
Brain maps from machine learning? Spatial regularizationsBrain maps from machine learning? Spatial regularizations
Brain maps from machine learning? Spatial regularizationsGael Varoquaux
 
Brain Computer Interface
Brain Computer InterfaceBrain Computer Interface
Brain Computer InterfaceSonal Patil
 

Viewers also liked (8)

Pedro ,Pedro Gazola e Gabriel
Pedro ,Pedro Gazola e GabrielPedro ,Pedro Gazola e Gabriel
Pedro ,Pedro Gazola e Gabriel
 
Condro2010 thesis slide_v3
Condro2010 thesis slide_v3Condro2010 thesis slide_v3
Condro2010 thesis slide_v3
 
SSVEP-BCI
SSVEP-BCISSVEP-BCI
SSVEP-BCI
 
Brain-computer interface based on SSVEP
Brain-computer interface based on SSVEPBrain-computer interface based on SSVEP
Brain-computer interface based on SSVEP
 
BRAIN COMPUTER INTERFACE
BRAIN COMPUTER INTERFACEBRAIN COMPUTER INTERFACE
BRAIN COMPUTER INTERFACE
 
Brain maps from machine learning? Spatial regularizations
Brain maps from machine learning? Spatial regularizationsBrain maps from machine learning? Spatial regularizations
Brain maps from machine learning? Spatial regularizations
 
Perception
PerceptionPerception
Perception
 
Brain Computer Interface
Brain Computer InterfaceBrain Computer Interface
Brain Computer Interface
 

Similar to Pablo Magani - BCI SSVEP Speller

Modelling and Analysis of Brainwaves for Real World Interaction
Modelling and Analysis of Brainwaves for Real World InteractionModelling and Analysis of Brainwaves for Real World Interaction
Modelling and Analysis of Brainwaves for Real World InteractionPavan Kumar
 
Design of single channel portable eeg
Design of single channel portable eegDesign of single channel portable eeg
Design of single channel portable eegijbesjournal
 
DSP Based Speech Operated Home Appliances UsingZero Crossing Features
DSP Based Speech Operated Home Appliances UsingZero Crossing FeaturesDSP Based Speech Operated Home Appliances UsingZero Crossing Features
DSP Based Speech Operated Home Appliances UsingZero Crossing FeaturesCSCJournals
 
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...IJCSEA Journal
 
Intelligent Arabic letters speech recognition system based on mel frequency c...
Intelligent Arabic letters speech recognition system based on mel frequency c...Intelligent Arabic letters speech recognition system based on mel frequency c...
Intelligent Arabic letters speech recognition system based on mel frequency c...IJECEIAES
 
Biomedical Signals Classification With Transformer Based Model.pptx
Biomedical Signals Classification With Transformer Based Model.pptxBiomedical Signals Classification With Transformer Based Model.pptx
Biomedical Signals Classification With Transformer Based Model.pptxSandeep Kumar
 
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEO
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEOBCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEO
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEOHarathi Devi Nalla
 
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...IRJET Journal
 
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...IJREST
 
Lab 6 Neural Network
Lab 6 Neural NetworkLab 6 Neural Network
Lab 6 Neural NetworkKyle Villano
 
Wavelet Based Feature Extraction Scheme Of Eeg Waveform
Wavelet Based Feature Extraction Scheme Of Eeg WaveformWavelet Based Feature Extraction Scheme Of Eeg Waveform
Wavelet Based Feature Extraction Scheme Of Eeg Waveformshan pri
 
Isolated words recognition using mfcc, lpc and neural network
Isolated words recognition using mfcc, lpc and neural networkIsolated words recognition using mfcc, lpc and neural network
Isolated words recognition using mfcc, lpc and neural networkeSAT Journals
 
IRJET- Emotion recognition using Speech Signal: A Review
IRJET-  	  Emotion recognition using Speech Signal: A ReviewIRJET-  	  Emotion recognition using Speech Signal: A Review
IRJET- Emotion recognition using Speech Signal: A ReviewIRJET Journal
 
Denoising Techniques for EEG Signals: A Review
Denoising Techniques for EEG Signals: A ReviewDenoising Techniques for EEG Signals: A Review
Denoising Techniques for EEG Signals: A ReviewIRJET Journal
 
Motor Imagery based Brain Computer Interface for Windows Operating System
Motor Imagery based Brain Computer Interface for Windows Operating SystemMotor Imagery based Brain Computer Interface for Windows Operating System
Motor Imagery based Brain Computer Interface for Windows Operating SystemIRJET Journal
 
IRJET- Analysis of Electroencephalogram (EEG) Signals
IRJET- Analysis of Electroencephalogram (EEG) SignalsIRJET- Analysis of Electroencephalogram (EEG) Signals
IRJET- Analysis of Electroencephalogram (EEG) SignalsIRJET Journal
 
Ac lab final_report
Ac lab final_reportAc lab final_report
Ac lab final_reportGeorge Cibi
 
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceEEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceWilly Marroquin (WillyDevNET)
 

Similar to Pablo Magani - BCI SSVEP Speller (20)

Modelling and Analysis of Brainwaves for Real World Interaction
Modelling and Analysis of Brainwaves for Real World InteractionModelling and Analysis of Brainwaves for Real World Interaction
Modelling and Analysis of Brainwaves for Real World Interaction
 
Design of single channel portable eeg
Design of single channel portable eegDesign of single channel portable eeg
Design of single channel portable eeg
 
DSP Based Speech Operated Home Appliances UsingZero Crossing Features
DSP Based Speech Operated Home Appliances UsingZero Crossing FeaturesDSP Based Speech Operated Home Appliances UsingZero Crossing Features
DSP Based Speech Operated Home Appliances UsingZero Crossing Features
 
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...
AN ANALYSIS OF SPEECH RECOGNITION PERFORMANCE BASED UPON NETWORK LAYERS AND T...
 
Intelligent Arabic letters speech recognition system based on mel frequency c...
Intelligent Arabic letters speech recognition system based on mel frequency c...Intelligent Arabic letters speech recognition system based on mel frequency c...
Intelligent Arabic letters speech recognition system based on mel frequency c...
 
E44082429
E44082429E44082429
E44082429
 
Biomedical Signals Classification With Transformer Based Model.pptx
Biomedical Signals Classification With Transformer Based Model.pptxBiomedical Signals Classification With Transformer Based Model.pptx
Biomedical Signals Classification With Transformer Based Model.pptx
 
PC based oscilloscope
PC based oscilloscopePC based oscilloscope
PC based oscilloscope
 
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEO
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEOBCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEO
BCI FOR PARALYSES PATIENT CONVERTING AUDIO TO VIDEO
 
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...
Surface Electromyography (SEMG) Based Fuzzy Logic Controller for Footballer b...
 
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...
Analysis of Microstrip Finger on Bandwidth of Interdigital Band Pass Filter u...
 
Lab 6 Neural Network
Lab 6 Neural NetworkLab 6 Neural Network
Lab 6 Neural Network
 
Wavelet Based Feature Extraction Scheme Of Eeg Waveform
Wavelet Based Feature Extraction Scheme Of Eeg WaveformWavelet Based Feature Extraction Scheme Of Eeg Waveform
Wavelet Based Feature Extraction Scheme Of Eeg Waveform
 
Isolated words recognition using mfcc, lpc and neural network
Isolated words recognition using mfcc, lpc and neural networkIsolated words recognition using mfcc, lpc and neural network
Isolated words recognition using mfcc, lpc and neural network
 
IRJET- Emotion recognition using Speech Signal: A Review
IRJET-  	  Emotion recognition using Speech Signal: A ReviewIRJET-  	  Emotion recognition using Speech Signal: A Review
IRJET- Emotion recognition using Speech Signal: A Review
 
Denoising Techniques for EEG Signals: A Review
Denoising Techniques for EEG Signals: A ReviewDenoising Techniques for EEG Signals: A Review
Denoising Techniques for EEG Signals: A Review
 
Motor Imagery based Brain Computer Interface for Windows Operating System
Motor Imagery based Brain Computer Interface for Windows Operating SystemMotor Imagery based Brain Computer Interface for Windows Operating System
Motor Imagery based Brain Computer Interface for Windows Operating System
 
IRJET- Analysis of Electroencephalogram (EEG) Signals
IRJET- Analysis of Electroencephalogram (EEG) SignalsIRJET- Analysis of Electroencephalogram (EEG) Signals
IRJET- Analysis of Electroencephalogram (EEG) Signals
 
Ac lab final_report
Ac lab final_reportAc lab final_report
Ac lab final_report
 
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceEEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
 

Pablo Magani - BCI SSVEP Speller

  • 1. Brain-Computer Interface for control of an on-screen keyboard Magani, Pablo Sebastián; Iatzky, Pedro Gastón Bioengineering Laboratory, Faculty of Engineering, Mar del Plata National University maganipablo@hotmail.com gastiatzky@hotmail.com Abstract—Brain-computer interfaces (BCIs) are technological devices based on the acquisition of brain signals and their processing and classification, with the purpose of providing the user with control over external devices or applications. BCIs are an attractive option to improve the quality of life of people with severe motor impairments, since they allow them to communicate and to send commands to other devices by using their EEG signals. For this project, a small, low cost circuit was designed for the acquisition of EEG signals, and it was implemented in a BCI that allowed the user to control an on-screen keyboard using SSVEPs. Both the circuit and the software developed yielded excellent results, proving the project to be an appropriate non-invasive system for the aid of severely disabled people. Keywords—BCI, EEG acquisition circuit, speller, SSVEP I. INTRODUCTION Many different disorders can disrupt the neuromuscular channels through which the brain communicates with and controls its external environment. Amyotrophic lateral sclerosis (ALS), brainstem stroke, spinal cord injury, multiple sclerosis, and numerous other diseases impair the neural pathways that control muscles or impair the muscles themselves. Those most severely affected may lose almost all (or all) voluntary muscle control, leaving them unable to communicate in any way. Brain-computer interfaces (BCIs) provide the user with an alternative way to send commands and messages to the external world by using only their electroencephalographic (EEG) signals. These systems record, process and classify brain signals in order to translate them into commands, according to the user’s intentions. 
This way, a severely disabled subject can communicate with the external world. For this work, a circuit for the acquisition of EEG signals was designed and manufactured, to be used in a BCI based on Steady State Visually Evoked Potentials (SSVEPs) that allows the user to spell words with an on-screen keyboard. Processing, classification and the graphical interface were developed in MATLAB®.

II. METHODOLOGY

A. EEG Acquisition Circuit

Our objective was to design and manufacture a small (less than 10x10 cm), low cost EEG acquisition circuit with at least 8 input channels. We used the ADS1299 integrated circuit, manufactured by Texas Instruments Inc., as the main component of our circuit [1]. Altium Designer® was used to design the printed circuit board. Fig. 1 shows the printed circuit board with all components populated. All components were hand soldered using lead paste. The finished circuit measures 5x5 cm. Table 1 lists the cost of one acquisition circuit, not including the soldering process. The circuit requires an external digital controller to communicate with a PC through a USB port; we used a ChipKit Uno32 board for this purpose.

Fig. 1. EEG acquisition circuit.

TABLE 1. COSTS

  Item                        Cost (USD)
  ADS1299                          66.00
  Other ICs (supply, etc.)          8.62
  Passive components                9.83
  PCB                               7.35
  TOTAL                            91.80

B. SSVEP-based speller

SSVEPs appear mainly over the primary visual cortex when a person focuses on a repetitive visual stimulus, such as a light blinking at a fixed frequency [2]. These evoked potentials have greater amplitude at electrode locations Oz, O1, O2 and
POz, according to the extended 10-20 standard [3]. The fundamental frequency of an SSVEP equals the frequency of the visual stimulus that evoked it.

The on-screen keyboard presented in this work was developed using Psychophysics Toolbox, a free MATLAB toolbox. The keyboard is shown in Fig. 2. It includes several special characters, the digits 0 to 9, and the "<" character, which works as the backspace key. The numbers 1 to 4 placed around the 40 characters indicate the numbering of four blinking rectangles. Each of these rectangles blinks at a unique frequency; their purpose is to provide the user with the stimulus necessary for generating SSVEPs. To send commands to the keyboard, the user must focus on one of the blinking rectangles, and the program must correctly identify which one the user was focusing on.

Fig. 2. On-screen keyboard. The upper right corner shows the text being written by the user ("QUE"). The numbers 1, 2, 3 and 4 on the sides indicate the numbering of the blinking rectangles, which are not shown in this image.

The steps required to write a single character are as follows:

1. The keyboard is presented on the screen for two seconds. The user must find the desired character and memorize which row it is on. This step is portrayed in Fig. 2.

2. The keyboard disappears and the rectangles start blinking. The user should focus on the rectangle that corresponds to the row containing the desired character. After 4 seconds, the rectangles stop blinking and the keyboard reappears.

3. The classification algorithm analyzes the EEG signals and determines which rectangle the user was focusing on, so the program now knows the row of the desired character. The first three characters of that row are then highlighted.

4.
If the user wants to write one of the three highlighted characters, he/she must focus on the corresponding rectangle from the subset {1, 2, 3}. If the desired character is in that row but further to the right (i.e., not highlighted), he/she must focus on the fourth blinking rectangle.

5. The keyboard disappears and the rectangles blink again for four seconds. Then the keyboard reappears.

6. The classification algorithm determines which rectangle the user was focusing on. If it was a rectangle from the subset {1, 2, 3}, the selected character is appended to the text in the upper right corner and the process ends.

7. If the user focused on the fourth blinking rectangle, the next three characters of the selected row are highlighted and steps 4, 5 and 6 are repeated.

8. If the user focuses on the fourth blinking rectangle again, the last four characters of the selected row are highlighted, and each rectangle now corresponds to one highlighted character. The keyboard disappears and the rectangles blink one last time. The classification algorithm identifies which rectangle the user was focusing on, and the corresponding character is appended to the text in the upper right corner.

The characters are placed so that the ones most frequently used in Spanish are on the leftmost side of the keyboard [4]. This maximizes writing speed, since most of the time the desired character will be among the first three of the selected row, already highlighted at step 3. The "<" character was also placed on the left side, so that the user can rapidly correct a spelling error.

C. Processing

Let a trial be each instance of the program in which the rectangles blink and the EEG signals are recorded, processed and classified.
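The eight-step selection procedure described in Section II-B can be condensed into a short routine. The following is an illustrative Python sketch (the original system was implemented in MATLAB®); the function name, the example row contents and the assumption of 4 rows of 10 characters each are ours, chosen only to mirror the 40-character keyboard described above.

```python
def select_character(rows, trial_results):
    """Simulate the speller's row-then-window selection logic.

    rows: 4 strings of 10 characters each (the keyboard layout).
    trial_results: classifier outputs (1..4), one per blinking trial.
    """
    it = iter(trial_results)
    row = rows[next(it) - 1]            # first trial selects the row
    # Highlight windows: first three chars, next three chars
    for start in (0, 3):
        r = next(it)
        if r <= 3:                      # rectangles 1-3 pick a highlighted char
            return row[start + r - 1]
        # rectangle 4 advances to the next highlight window
    # Last window: the final four chars, one per rectangle
    return row[6 + next(it) - 1]


# Hypothetical layout with frequent Spanish letters on the left
rows = ["EAOSRNIDLC", "UTMPBGVQHF", "JZXKW<.,?!", "0123456789"]
```

For example, classifier outputs [1, 4, 2] select row 1, skip past the first highlight window, and pick the second character of the next window, i.e. the fifth character of that row.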
The input signals come from four channels, corresponding to four electrodes placed at Oz, O1, O2 and POz, all referenced to the left earlobe. The sampling frequency is 250 Hz. On each trial, 1024 samples are taken from each channel for processing and classification. The result of each trial is a single number from 1 to 4, corresponding to one of the four blinking rectangles.

The EEG data is digitally filtered with a combined filter, obtained as the discrete convolution of two FIR filters: a bandpass filter with a passband from 5 to 38 Hz, and an optimum notch filter at 50 Hz (the line frequency in Argentina) [5]. The rectangles blinked at frequencies between 9 and 20 Hz so as to evoke SSVEPs of greater amplitude.

Since the classification algorithm must determine which rectangle the user was focusing on, and since SSVEPs have a fundamental frequency equal to that of the stimulus that evoked them, the spectral densities of the EEG signals carry the required information. The Fast Fourier Transform (FFT) was tried first to calculate the periodogram of each channel. However, we found that computing the FFT over all 1024 samples of a trial did not reveal the spectral peak characteristic of SSVEPs, whereas computing it over, for example, only the first 512 samples of the same trial did show a peak. This can happen because the oscillation evoked by the blinking rectangle is not present during the whole trial, due to factors outside our control that cannot be predicted, such as brief interference, artifacts or momentary distractions of the user.

A different spectral density estimation method, Welch's method [6], was therefore tried. In Welch's method the input samples are divided into segments with a defined overlap. A Hamming windowing
function is applied to each segment (in the time domain), and the FFT of each segment is then calculated. The resulting values are squared to obtain several periodograms, which are then averaged. This reduces the variance of the individual periodograms, and the aforementioned problem disappears. Fig. 3 shows a spectral density estimation obtained with Welch's method; the user was focusing on a 10 Hz blinking rectangle, and the EEG signal was recorded from an electrode placed at location Oz. An inherent drawback of Welch's method is a lower frequency resolution, since the individual FFTs are computed over fewer data points. As a consequence, the frequencies of the blinking rectangles must differ from each other by at least twice the obtained frequency resolution.

Fig. 3. Spectral density estimation using Welch's method. The peak at 10 Hz is consistent with the frequency of the blinking rectangle the user was focusing on during this trial.

D. Classification Algorithm

As mentioned before, the purpose of classification is to determine, for each trial, which blinking rectangle the user was focusing on. The inputs to the classification block of the program are four spectral density estimations, computed using Welch's method on the samples taken from the four electrodes placed over the visual cortex. These estimations are named P1(f), P2(f), P3(f) and P4(f). Given the set of frequencies F = {f1, f2, f3, f4}, where fn is the frequency at which rectangle n blinks, a score vector S = [s1, s2, s3, s4] is defined, where sn is the classification score for rectangle n. This vector is set to all zeros at the beginning of each trial. Then,

  (i, Aj) = max over fi in F of Pj(fi)

is computed for each Pj, where i is an integer from 1 to 4 indicating which frequency fi in F gives the highest amplitude Aj in the spectral density estimation Pj.
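The per-channel processing just described (combined FIR filtering, Welch estimation, and the peak search over the stimulus frequencies) can be sketched as follows. This is an illustrative Python/SciPy version (the original was implemented in MATLAB®); the tap counts, the plain FIR band-stop standing in for the optimum notch of [5], and the example blink frequencies are our assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import firwin, lfilter, welch

FS = 250                                  # sampling frequency (Hz)
STIM_FREQS = [9.0, 11.0, 13.0, 15.0]      # illustrative blink frequencies (9-20 Hz)

# Band-pass FIR (5-38 Hz passband), convolved with a band-stop FIR around
# the 50 Hz line frequency to form a single combined filter.
bandpass = firwin(129, [5, 38], pass_zero=False, fs=FS)
notch = firwin(129, [48, 52], fs=FS)      # pass_zero=True with 2 edges -> band-stop
combined = np.convolve(bandpass, notch)

def dominant_stimulus(eeg, fs=FS, freqs=STIM_FREQS):
    """Filter one channel, estimate its PSD with Welch's method (Hamming
    windows, 50% overlap) and return (rectangle number, peak amplitude)
    of the strongest stimulus frequency."""
    filtered = lfilter(combined, 1.0, eeg)
    f, psd = welch(filtered, fs=fs, window="hamming", nperseg=256, noverlap=128)
    # Amplitude of the PSD at the bin closest to each blink frequency
    amps = [psd[np.argmin(np.abs(f - fk))] for fk in freqs]
    i = int(np.argmax(amps))
    return i + 1, amps[i]                 # rectangle numbering starts at 1
```

With a synthetic 1024-sample trial containing an 11 Hz oscillation plus noise, the function should identify rectangle 2; the shorter Welch segments (256 samples) trade frequency resolution for the variance reduction discussed above.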
Let Δf be the frequency range of interest (8 to 20 Hz). The SNR (signal-to-noise ratio) of each estimation j is then defined as

  SNR_j = A_j / (average of P_j over the Δf range)

If SNR_j is greater than a certain threshold Ω_single, it is added to the corresponding score s_i of the score vector S. After the four SNR_j values have been calculated and the corresponding scores assigned, the classification algorithm decides that the user was focusing on the blinking rectangle with the highest score, but only if that score is also greater than a second threshold, Ω_group. Both thresholds were tuned empirically to the values that yielded the best results. If no element of S exceeds Ω_group, the user is asked to repeat the trial by means of an on-screen message that reads "REPEAT".

III. RESULTS

A. Performance of the acquisition circuit

To measure the performance of the acquisition circuit, a self-noise measurement was made: the differential input pins are short-circuited and connected to ground. Since the input signal is zero, the output of the system is the noise generated by the circuit itself. Fig. 4 shows a self-noise measurement. The gain was set to 24 (higher gain lowers the input-referred noise of the ADS1299) and the data rate to 250 samples per second. The standard deviation σ_N and the maximum peak-to-peak value V_pp were computed as

  σ_N = sqrt( (1/M) Σ_{i=1..M} (x_i − E[x])² )

  V_pp = max(x) − min(x)

where M is the number of samples taken (M = 9000 in this case) and E[x] is the mean of the input signal. The results obtained were σ_N = 0.0454 µV and V_pp = 0.35 µV. Comparing these values with the self-noise measurements in the ADS1299 datasheet provided by Texas Instruments Inc. (σ_N = 0.14 µV and V_pp = 0.98 µV), we conclude that the acquisition circuit design and the chosen components are appropriate.

Fig. 4.
Self-noise measurement on the acquisition system built.

B. Performance of the classifier

To quantify the performance of the classifier itself, a number of trials were run on two healthy 24-year-old male subjects. In each trial, the program indicated to the user which blinking rectangle he should focus on. The results were then categorized into three groups: Correct, Incorrect and Repeat. A trial was placed in the Repeat category if the classification algorithm produced no result because no element of the score vector S was greater than Ω_group. Trials in the Incorrect category produced a result that did not match the expected one, according to the indication given to the user. Trials in the Correct category produced a result that matched the indication
given to the user on that trial. Out of 66 trials, 59 were correctly classified (Correct) and the remaining 7 fell into the Repeat category; no trial gave an incorrect result. If we define the algorithm's accuracy as the ratio of Correct trials to the total number of trials minus the number of Repeat trials, then the accuracy for this set of trials was 59/(66 − 7) = 100%.

In various further tests, in which users were free to spell any word they wanted, accuracy was found to be over 98% in most cases. This shows that the classification algorithm is very robust if the right configuration parameters are chosen (filter attenuation, cutoff frequencies, blinking rectangle frequencies, threshold levels, samples per trial, etc.). In most cases, however, reducing the trial duration (for example, by taking fewer samples) causes a drop in both accuracy and writing speed. Likewise, raising the thresholds increases accuracy but lowers writing speed. Low accuracy translates into slower writing because the user spends two trials just to erase an incorrect character. It is best to find a set of parameters that yields high accuracy and a reasonable writing speed.

IV. CONCLUSION

A small, low cost EEG acquisition circuit was designed, built and implemented in a BCI to control an on-screen keyboard. The performance of the circuit and the software was excellent, achieving high accuracy and writing speed. The circuit can also be used to measure other bioelectrical signals, such as the ECG (electrocardiography). Improving the shielding of the circuit, to minimize external noise and interference, and implementing algorithms for the detection and correction of artifacts are proposed as future work for improving the presented BCI.
We hope this work promotes local research in the fields of EEG and BCI, so that these types of devices become more easily accessible to severely disabled people, who can ultimately benefit from them and improve their quality of life.

ACKNOWLEDGEMENTS

The authors would like to thank the following people for their unconditional help and support during this project: our families and friends, the members of the Bioengineering Laboratory of the Faculty of Engineering of the Mar del Plata National University, Alejandro Uriz, Gustavo Uicich, Jonatan Fischer, Ariel Nieto, and our directors, Gustavo Meschino and Isabel Passoni.

REFERENCES

[1] Texas Instruments Inc. (2012, August). ADS1299 Datasheet. Retrieved from http://www.ti.com/lit/ds/symlink/ads1299.pdf
[2] Beverina, F., Palmas, G., Silvoni, S., Piccione, F., & Giove, S. (2003). User adaptive BCIs: SSVEP and P300 based interfaces. PsychNology Journal, Vol. 1(3), pp. 230-242.
[3] Towle, V. L., Bolaños, J., Suarez, D., Tan, K., Grzeszczuk, R., Levin, D. N., ... Spire, S.-P. (1993, January). The spatial location of EEG electrodes: Locating the best-fitting sphere relative to cortical anatomy. Electroencephalography and Clinical Neurophysiology, Vol. 86(1), pp. 1-6. doi:10.1016/0013-4694(93)90061-Y
[4] Pratt, F. (1942). Secret and Urgent: The Story of Codes and Ciphers. Garden City, NY, United States: Blue Ribbon Books.
[5] Zahradnik, P., & Vlcek, M. (2013, April). Notch Filtering Suitable for Real Time Removal of Power Line Interference. Radioengineering, Vol. 22(1), pp. 186-193.
[6] Welch, P. D. (1967). The Use of Fast Fourier Transform for the Estimation of Power Spectra: A Method Based on Time Averaging Over Short, Modified Periodograms. IEEE Transactions on Audio and Electroacoustics, Vol. 15(2), pp. 70-73. doi:10.1109/TAU.1967.1161901