ACKNOWLEDGEMENTS
It is a great pleasure to present our final year undergraduate project report under the subject code EE 549 and to thank everyone who helped us in different ways throughout the semester.

First, we would like to express our gratitude to the Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya. Next, we would like to thank Dr. R. D. Ranaweera, our main supervisor, who guided us through the whole project, and our project supervisors Dr. J. Wijayakulasooriya, Dr. G. M. R. I. Godaliyadda, and Dr. M. P. B. Ekanayaka of the Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, for guiding us throughout this semester. In addition, we thank all others who helped us in various ways to complete our project successfully.
Nishanth.A E/09/239
Rishikesan.S E/09/297
Withana W.K.G.M. E/09/408
CONTENTS

Acknowledgements
Contents
List of Figures
List of Tables
Chapter 1 INTRODUCTION
 1.1 Introduction
 1.2 Objective
 1.3 History of EEG-based BCI
Chapter 2 BACKGROUND
 2.1 Brain Monitoring Methods
 2.2 BCIs Based on the Modulation of Brain Rhythms
 2.3 Practical Considerations for Motor Imagery Based BCI
 2.4 Common Spatial Patterns (CSP) for EEG Feature Extraction
 2.5 Linear Discriminant Analysis (LDA) for Classification
Chapter 3 METHODS
 3.1 Model
 3.2 Experimental Design & Data Acquisition
 3.3 Data Format
 3.4 Signal Processing
Chapter 4 RESULTS
 4.1 Frequency Spectrum of the Recorded Signal
 4.2 Frequency Spectrum after Applying the 7-30 Hz Band-Pass and 50 Hz Stop Filters
 4.3 Before Applying the CSP Algorithm
 4.4 After Applying the CSP Algorithm
 4.5 Spatial Distribution of the CSP Filter
 4.6 LDA Classification Accuracy Plots
Chapter 5 DISCUSSION
 5.1 Initial Filtering
 5.2 Artifacts and Experimental Protocol
 5.3 On the Results of the Offline Analysis
 5.4 Problems Encountered
Chapter 6 CONCLUSION AND FUTURE PROPOSAL
References
Appendix
LIST OF FIGURES

Figure 2.1 Mappings of ERD/ERS of mu rhythms during motor imagery
Figure 2.2 Projection of x samples onto a line
Figure 2.3 Projected class samples without considering variance
Figure 2.4 Projected class samples with variance considered
Figure 3.1 Model of the online BCI
Figure 3.2 Electrode placement diagram
Figure 3.3 Left: BioRadio during data collection; Right: resources used for data collection
Figure 3.4 Skin preparation and electrode placement
Figure 3.5 Timing diagram of the data collection protocol
Figure 3.6 Left: new protocol chair setup; Right: event marker generator
Figure 3.7 BioCapture GUI
Figure 3.8 BioCapture configuration window
Figure 3.9 BioRadio Matlab SDK functions
Figure 3.10 Matlab data array for the signal
Figure 3.11 Structure of the variable mrk
Figure 3.12 Contents of the variable nfo
Figure 3.13 Structure of the variable nfo
Figure 3.14 Filter pipeline
Figure 3.15 Predictor diagram
Figure 3.16 Predictor in online operation
Figure 4.1 Frequency spectrum of the recorded signal
Figure 4.2 Frequency spectrum of the signal after applying the filters
Figure 4.3 Feature plot without CSP applied
Figure 4.4 Feature plot with CSP applied
Figure 4.5 Spatial distribution of the CSP filter for sub2_dataset5
Figure 4.6 Spatial distribution of the CSP filter for sub2_dataset3
Figure 4.7 Accuracy plots against epoch number
Figure 4.8 Accuracy for subject 2's datasets

LIST OF TABLES

Table 4.1 Accuracy for subject 2's datasets
CHAPTER 1
INTRODUCTION
1.1 Introduction
Some diseases can lead to a severely paralyzed condition called locked-in syndrome, in which the patient loses all voluntary muscle control. Amyotrophic lateral sclerosis (ALS) is an example of such a disease. The exact cause of ALS is unknown, and there is no cure. ALS starts with muscle weakness and atrophy. Usually, all voluntary movement, such as walking, speaking, swallowing, and breathing, deteriorates over several years and eventually is lost completely. The disease, however, does not affect cognitive functions or sensations: people can still see, hear, and understand what is happening around them, but cannot control their muscles. This is because ALS affects only special neurons, the large alpha motor neurons, which are an integral part of the motor pathways [1].

Once the motor pathway is lost, any natural way of interacting with the environment is lost as well. Brain-computer interfaces (BCIs) offer the only option for communication in such cases. A BCI is a communication channel which does not depend on the brain's normal output pathways of peripheral nerves and muscles, so it offers paralyzed patients a new approach to interacting with the environment [1].
The imagination of a limb movement (of a person's arm or leg) can modify brain electrical activity in a way similar to actual limb motion [9]. Depending on the type of motor imagery, different EEG patterns can be obtained. Activation of hand-area neurons, either by preparation for a real movement or by imagination of the movement, is accompanied by a circumscribed event-related desynchronization (ERD) [1] focused over the hand area of the brain [10].
One possibility for opening a communication channel for these patients is to use electroencephalographic (EEG) signals to control an assistive device, a brain-computer interface (BCI) that allows, for example, the selection of letters on a screen [2]. Some level of communication can be achieved if at least a binary decision can be made using an EEG-based BCI. To make the binary decision, we used actual left- and right-hand motion to represent yes (right) and no (left). If this works well on healthy subjects, we plan to improve it further by using imagination of the movement in healthy subjects and then in the patient population.
1.2 Objective
The objective of this study is to develop an algorithm that correctly classifies left- and right-hand real movements of a healthy person using multiple channels of EEG data. The final goal is to develop an algorithm that detects and classifies left- and right-hand movement in real time (an online classifier).
1.3 History of EEG-based BCI
In our society there are some people who have severe motor disabilities from birth or due to accidents, so a suitable method of communicating with them is highly desirable. EEG-based communication is especially important when communicating with a spinal-cord-injured patient. We therefore first decided to study EEG signals, and designed an experimental protocol to record EEG signals from a healthy person, using signals from the motor imagery area of the brain.
1. The discoverer of human EEG signals was Hans Berger (1873-1941). He began his study of human EEGs in 1920. Berger started working with a string galvanometer in 1910, then moved to a smaller Edelmann model and, after 1924, to a larger Edelmann model. In 1926, Berger started to use the more powerful Siemens double-coil galvanometer (attaining a sensitivity of 130 µV/cm). His first report of human EEG recordings of one to three minutes duration on photographic paper appeared in 1929 [11].
2. The first BCI (brain-computer interface) was described by Dr. Grey Walter in 1964, ironically shortly before the first Star Trek episode aired. Dr. Walter connected electrodes directly to the motor areas of a patient's brain (the patient was undergoing surgery for other reasons). The patient was asked to press a button to advance a slide projector while Dr. Walter recorded the relevant brain activity. Then Dr. Walter connected the system to the slide projector so that the slide projector advanced whenever the patient's brain activity indicated that he wanted to press the button. Interestingly, Dr. Walter found that he had to introduce a delay from the detection of the brain activity until the slide projector advanced, because the slide projector would otherwise advance before the patient pressed the button! Control before the actual movement happens, that is, control without movement: the first BCI! [2]
3. 1976: first evidence that a BCI can be used for communication. Jacques J. Vidal, the professor who coined the term BCI, of UCLA's Brain Computer Interface Laboratory, provided evidence that single-trial visual evoked potentials could be used as a communication channel effective enough to control a cursor through a two-dimensional maze. This was the first official proof that brain signals can be used to interface with outside devices [11].
4. The Neil Squire Foundation is a Canadian non-profit organization whose
purpose is to create opportunities for independence for individuals who
have significant physical disabilities. Through direct interaction with these
individuals the Foundation researches, develops and delivers appropriate
innovative services and technology to meet their needs. Part of the
Research and Development activities of the Foundation, in
partnership with the Electrical and Computer Engineering Department at
the University of British Columbia, has been to explore methods to realize
a direct brain–computer interface (BCI) for individuals with severe
motor-related impairments. The ultimate goal of this research is to create an
advanced communication interface that will allow an individual with a
high-level impairment to have effective and sophisticated control of devices
such as wheelchairs, robotic assistive appliances, computers, and neural
prostheses[11].
5. 2003: first BCI game exposed to the public; BrainGate developed. BrainGate, a brain implant system, was developed by the bio-tech company Cyberkinetics in conjunction with the Department of Neuroscience at Brown University [11].
6. 2008: first consumer off-the-shelf, mass-market game input device; a high-accuracy BCI wheelchair developed in Japan; Numenta founded to replicate the ability of the human neocortex [11].
7. 2009: wireless BCI developed. A Spanish company, Starlab, developed a wireless 4-channel system called ENOBIO. Designed for research purposes, the system provides a platform for application development [11].
8. 2011 (January 2): the first thought-controlled social media network was launched by NeuroSky [11].
CHAPTER 2
BACKGROUND
2.1 Brain Monitoring Methods
There are three main brain monitoring approaches: invasive, partially invasive, and non-invasive. Among non-invasive methods there are different neural imaging or signal-reading techniques available, such as magnetoencephalography (MEG), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). Among these, the electroencephalogram (EEG) is of main interest due to its advantages of low cost, convenient operation, and non-invasiveness. In present-day EEG-based BCIs, the following signals have received much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities [1] (page 137).
2.2 BCIs Based on the Modulation of Brain Rhythms
Most BCI systems are designed based on the modulation of brain rhythms. Among these, the power modulation of mu/beta rhythms is used in BCI systems based on motor imagery. Phase modulation is another method, which has been employed in steady-state visual evoked potential (SSVEP) based BCIs. More generally, evoked potentials can be considered to result partially from a reorganization of the phases of the ongoing EEG rhythms. From the viewpoint of psychophysiology, EEG signals are divided into five rhythms in different frequency bands: the delta rhythm (0.1-3.5 Hz), theta rhythm (4-7.5 Hz), alpha rhythm (8-13 Hz), beta rhythm (14-30 Hz), and gamma rhythm (>30 Hz) [1] (page 137).
Although the rhythmic characteristics of the EEG have been studied for a long time, many new studies on the mechanisms of brain rhythms emerged after the 1980s. The cellular bases of EEG rhythms are still under investigation and our knowledge of them is limited; however, numerous neurophysiological studies indicate that brain rhythms can reflect changes of brain state caused by environmental stimuli or cognitive activities. For example, EEG rhythms can indicate the working or idling state of the functional areas of the cortex. The alpha rhythm recorded over the visual cortex is considered an indicator of activity in the visual cortex: a clear alpha wave while the eyes are closed indicates the idling state of the visual cortex, while the blocking of the alpha rhythm when the eyes are open reflects its working state [1] (page 137).
Another example is the mu rhythm, which can be recorded over the sensorimotor cortex. A significant mu rhythm exists only during the idling state of the sensorimotor cortex; the blocking of the mu rhythm accompanies activation of the sensorimotor cortex [1] (page 137).
Event-Related Desynchronization and Synchronization (ERD & ERS)

In the study of brain rhythms, ERD and ERS were found to be power modulations caused by activity in the motor areas of the brain. ERD is a reduction of the power in the mu band relative to the normal state; likewise, ERS is an increase of the power in that band relative to the normal state.
2.3 Practical Considerations for Motor Imagery Based BCI
In EEG-based BCI research, systems based on imagined movement form another active theme, due to their relatively robust performance for communication and their intrinsic neurophysiological significance for studying the mechanism of motor imagery. Moreover, a system based on imagined movement is a totally independent BCI system, which is likely to be more useful for completely paralyzed patients than an SSVEP-based BCI. Most current motor imagery based BCIs rely on the characteristic ERD/ERS spatial distributions corresponding to different motor imagery states, because these give good accuracy. Figure 2.1 displays characteristic mappings of ERD/ERS for one subject for three motor imagery states, i.e., imagined movements of the left hand, right hand, and foot. Due to the widespread distribution of ERD/ERS, spatial filtering techniques, e.g., common spatial patterns (CSP), are widely used to obtain stable system performance. However, due to the limited number of electrodes in a practical system, the electrode layout has to be considered carefully. With only a small number of electrodes, searching for new features using new information processing methods will contribute significantly to classifying motor imagery states [1] (page 147).
The above shows that, with a proper algorithm to detect ERD/ERS in the EEG signal, for example the CSP algorithm, we can classify a person's intended motor actions. This is the major tool that can be used to build practical BCI systems.
Figure 2.1 - Mappings of ERD/ERS of mu rhythms during motor imagery. ERD over the hand areas has a distribution with contralateral dominance during imagination of hand movement. During imagination of foot movement, an obvious ERS appears in the central and frontal areas.
2.4 Common Spatial Patterns (CSP) for EEG feature extraction

CSP was used by Ramoser et al. [2] to create features for classification of the event-related desynchronization (ERD) in EEG caused by imagined movements. The first and last few CSP components (the spatial filters that maximize the difference in variance) are used to classify the trials with high accuracy.

In this design, spatial filters are sought that lead to new time series whose variances are optimal for the discrimination of two populations of EEG related to left and right motor imagery. The method used to design such spatial filters is based on the simultaneous diagonalization of two covariance matrices [3]. The theory below is taken from [2].

For the analysis, the raw EEG data of a single trial is represented as an N x T matrix E, where N is the number of channels (i.e., recording electrodes) and T is the number of samples per channel. The normalized spatial covariance of the EEG can be obtained from

    C = E E^T / trace(E E^T),

where ^T denotes the transpose operator and trace(X) is the sum of the diagonal elements of X. For each of the two distributions (separate classes) to be separated (i.e., left and right motor imagery), the averaged normalized covariances R_R and R_L are calculated by averaging over the trials of each group.

The composite spatial covariance is given by

    C_c = R_R + R_L,

and C_c can be factored by eigenvalue decomposition as C_c = U_c ε U_c^T, where U_c is the matrix of eigenvectors and ε is the diagonal matrix of eigenvalues. Note that throughout this section the eigenvalues are assumed to be sorted in descending order. The whitening transformation

    P = ε^(-1/2) U_c^T

equalizes the variances in the space spanned by U_c. The point of this step is to remove the variance common to both movements (classes), i.e., the variance described by C_c, which carries no class information and would make the feature extraction hard. Indeed,

    P C_c P^T = ε^(-1/2) U_c^T (U_c ε U_c^T) U_c ε^(-1/2) = I,

i.e., all eigenvalues of P C_c P^T are equal to one. If R_R and R_L are transformed as

    S_R = P R_R P^T and S_L = P R_L P^T,

then S_R and S_L share common eigenvectors; that is, if S_R = B ε_R B^T and S_L = B ε_L B^T, then

    S_R + S_L = B ε_R B^T + B ε_L B^T = B (ε_R + ε_L) B^T = P (R_R + R_L) P^T = P C_c P^T = I,

so ε_R + ε_L = I, where I is the identity matrix. Since the sum of two corresponding eigenvalues is always one, the eigenvector with the largest eigenvalue for S_R has the smallest eigenvalue for S_L and vice versa. This property makes the eigenvectors in B useful for classification of the two distributions: the projection of the whitened EEG onto the first and last eigenvectors in B (i.e., the eigenvectors corresponding to the largest ε_R and the largest ε_L) gives feature vectors that are optimal for discriminating the two populations of EEG in the least-squares sense.

With the projection matrix W = B^T P, the decomposition (mapping) of a trial E is given by

    Z = W E.
2.5 Linear Discriminant Analysis (LDA) for classification

The original LDA formulation, known as Fisher Linear Discriminant Analysis (FLDA) (Fisher, 1936), deals with binary-class classification. The key idea in FLDA is to look for a direction that separates the class means well (when projected onto that direction) while achieving a small variance around these means [4].

Assume we have a set of D-dimensional samples {x_1, x_2, ..., x_N}, N_1 of which belong to class w_1 and N_2 to class w_2 [8]. We seek to obtain a scalar y by projecting the samples x onto a line,

    y = w^T x.

Of all the possible lines we would like to select the one that maximizes the separability of the projected scalars, as illustrated in Fig 2.2.

Fig 2.2 Projection of x samples onto a line: an arbitrary projection versus the one that maximizes the separability of the scalars

In order to find a good projection vector, we need to define a measure of separation. The mean vector of each class i in x-space is m_i, and its projection into y-space is

    M_i = w^T m_i.

We could choose the distance between the projected means, |M_1 - M_2|, as the objective function. However, the distance between projected means is not a good measure, since it does not account for the standard deviation within the classes, and so does not yield a good projection, as shown in Fig 2.3.

Fig 2.3 Projected class samples separated by considering only the difference between the projected class means

Fisher suggested maximizing the difference between the means, normalized by a measure of the within-class scatter. For each class we define the scatter, an equivalent of the variance, as

    s_i^2 = sum over y in class i of (y - M_i)^2.

The Fisher linear discriminant is defined as the linear function w^T x that maximizes the criterion function

    J(w) = |M_1 - M_2|^2 / (s_1^2 + s_2^2).

We are therefore looking for a projection where examples from the same class are projected very close to each other and, at the same time, the projected means are as far apart as possible, as shown in Fig 2.4.

Fig 2.4 Projection of class samples separated by the difference between projected means, normalized by the within-class standard deviations

To express J(w) explicitly as a function of w, we first define a measure of the scatter in feature space,

    S_i = sum over x in class i of (x - m_i)(x - m_i)^T, with S_W = S_1 + S_2,

where S_W is called the within-class scatter matrix. The scatter of the projection can then be expressed as a function of the scatter matrix in feature space:

    s_i^2 = w^T S_i w, so s_1^2 + s_2^2 = w^T S_W w.

Similarly, the difference between the projected means can be expressed in terms of the means in the original feature space:

    (M_1 - M_2)^2 = w^T (m_1 - m_2)(m_1 - m_2)^T w = w^T S_B w.

The matrix S_B = (m_1 - m_2)(m_1 - m_2)^T is called the between-class scatter. Note that, since S_B is the outer product of two vectors, its rank is at most one.

Finally, we can express the Fisher criterion in terms of S_W and S_B as

    J(w) = (w^T S_B w) / (w^T S_W w).

To find the maximum of J(w) we differentiate with respect to w and equate to zero; dividing through by w^T S_W w and solving the resulting generalized eigenvalue problem, S_W^(-1) S_B w = J w, yields

    w* = S_W^(-1) (m_1 - m_2).

This is known as Fisher's linear discriminant (1936), although it is not a discriminant but rather a specific choice of direction for the projection of the data down to one dimension.

In order to classify a test point we still need to divide the space into regions which belong to one class. The easiest possibility is to pick the cluster with the smallest Mahalanobis distance,

    c* = argmin over c of (y - M_c)^2 / V_c,

where M_c and V_c represent the class mean and variance in the 1-D projected space, respectively [7].
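A minimal Matlab sketch of the computation above (the full version used in this project appears in the Appendix as ldaini), assuming X1 and X2 are trials x features matrices for the two classes, and a recent Matlab with implicit expansion (R2016b or later):

    m1 = mean(X1);  m2 = mean(X2);        % class means in feature space
    S1 = (X1 - m1)' * (X1 - m1);          % scatter of class 1
    S2 = (X2 - m2)' * (X2 - m2);          % scatter of class 2
    w  = (S1 + S2) \ (m1 - m2)';          % Fisher direction w = Sw^-1 (m1 - m2)
    y1 = X1 * w;  y2 = X2 * w;            % 1-D projections of the two classes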
CHAPTER 3
METHODS
3.1 MODEL
This model measures the EEG signal from the scalp, extracts the intention of the person in response to an external stimulus, and produces a visual output, such as displaying the result on a computer screen.
Fig 3.1 - Model of the online BCI: an experimental protocol for data acquisition feeds signal acquisition, then signal processing (feature extraction and a translation algorithm), whose digitized output is turned into device commands, e.g., answering a question such as "Is 4 > 3?" with yes/no.
3.2 EXPERIMENTAL DESIGN & DATA ACQUISITION

First, we need a proper place, because extraneous sounds and sights can interfere with the mental state of the subject participating in the experiment; we used a quiet room in our lab. Then we need to identify the necessary electrode points accurately; electrode points were selected under the international 10-20 system. After that we prepared the skin of the subject and then connected all the electrodes. Here we used 6 channels under the international 10-20 system (a hardware limitation of the BioRadio),
with the reference at the left mastoid and the ground at FPz; we also used two separate electrodes to measure the EOG artifacts.
Electrode locations

Electrode locations were selected under the international 10-20 system, and in the new protocol we used two separate electrodes to measure the EOG artifacts, because we observed that most artifacts were created by eye movements and eye blinks.
Fig 3.2 - Electrode placement diagram
Instrument used for data acquisition: BioRadio

Fig 3.3 Left: BioRadio during data collection; Right: resources used for data collection
3.2.1 Preparation

Here we use wet electrodes, so we have to follow some standard procedures:
1. Find the electrode positions on the subject's head, as shown in Fig 3.2.
2. Remove some hair at those spots only, then clean with "Curity" (alcohol preparation) wipes and with "NuPrep" to remove dry skin cells.
3. Then apply "Ten20" conductive paste to improve contact with the surface of the head, to get rid of the resistance created by poor contact (air gaps).
Fig 3.4 Skin preparation and electrode placement
3.2.2 EEG recording

The recording was made with an 8-channel EEG amplifier from BioRadio (a hardware limit of 8 channels, as shown in Fig 3.3), using the left mastoid for reference and FPz as ground; two further electrodes were used to record EOG artifacts. The EEG was sampled at 256 Hz. The data of all runs were saved in comma-separated-values (CSV) format by the BioCapture software.
3.2.3 Paradigm
The subject sat in a relaxing chair with arm rests. The task was to perform left-hand or right-hand movements (the subject himself had to lift his hand and perform a specific action until the stimulus disappeared) according to an external cue. The order of presentation of the cues was random. The experiment consists of 300 trials. After a trial begins, the first few seconds (20 seconds) are quiet: at t = 0 s an acoustic stimulus indicates the start of the session and a cross ("+") is displayed on the computer screen. Then, at t = 20 s, the letter L or R is displayed for 3.5 s; at the same time the subject was asked to make and hold a left/right hand movement according to the cue, until the cross reappeared at t = 23.5 s. This continued until 300 cues had been displayed.

Figure 3.5 Timing diagram of the data collection protocol

Figure 3.6 Left: new protocol chair setup; Right: event marker generator
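The cue sequence itself can be sketched in a few lines of Matlab (an illustration only; the equal left/right split is our assumption, not stated in the protocol):

    nTrials = 300;
    cues = [repmat('L', 1, nTrials/2), repmat('R', 1, nTrials/2)];
    cues = cues(randperm(nTrials));   % random order of presentation
    tCueOn  = 20;                     % cue shown at t = 20 s after trial start
    tCueOff = 23.5;                   % movement held until the cross reappears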
3.2.4 EEG Signal Acquisition Using the BioRadio Amplifier

Signal acquisition for offline analysis

For calibration, the data recorded by the BioRadio device are collected on the computer using the "BioCapture" software provided with the BioRadio product.

Figure 3.7 BioCapture GUI
The data recorded by BioCapture (Figure 3.7) can be saved as comma-separated values (.CSV format) and then imported into Matlab.
Configuration of inputs in BioCapture

The input types and channels of the BioRadio are configured via the BioRadio configuration window (Figure 3.8).
Event marker generation

When recording the calibration data we have to add event markers to identify when each event actually happened in time. To do so we selected two keys as the markers and needed a mechanism to press them, but letting the subject perform that task would lead to some very serious artifacts.

To avoid that problem we developed an arrangement that generates the event marker automatically when the subject lifts his hand; the setup used to generate the event markers is shown in Figure 3.6. We used a keyboard's internal circuit with connections for only two keys ("A" and "Z"), placed under the two hands separately. When the stimulus is presented, the subject can press the button (a finger movement) without having to look at the keyboard.
Signal acquisition for online analysis

In the online (real-time classification) process the data need to be streamed into Matlab. The BioRadio_Matlab_SDK software development kit was used to feed the data into the Matlab environment; its functions can be used to control the BioRadio system and to get the data into Matlab.

Figure 3.8 BioCapture configuration window

Figure 3.9 BioRadio Matlab SDK functions
3.3 Data format
Loading the matched data into BCILAB

BCILAB and EEGLAB are MATLAB toolboxes used in the design, prototyping, testing, experimentation with, and evaluation of brain-computer interfaces (BCIs) and other systems in the same computational framework. The toolbox was developed by the Swartz Center for Computational Neuroscience, University of California San Diego [13].

When loading the data for calibration of the model, the data have to be in a format supported by BCILAB; the .MAT format can be used. Inside the data file the variables have to be organized in a specific way, so that details such as the data array, the sampling frequency, the target markers, the task classes, the time points of the target classes, and so on can be extracted.

Data content

The data file has to contain 3 variables:
1. cnt
2. mrk
3. nfo
cnt - the signal itself: a matrix of class int16 with dimensions time points x channels (Figure 3.10).

mrk - a structure (class "struct") containing three fields:
1. pos - <1x280 double>: the time points (sample indices) at which the events occur
2. y - <1x280 double>: the class to which each of those time points belongs
3. className - <1x2 cell>: which number corresponds to which class name

How these three fields hold the information is shown in Figure 3.11.

Figure 3.10 Matlab data array for the signal

Figure 3.11 Structure of the variable mrk
nfo - a structure whose fields, shown in Figures 3.12 and 3.13, are:
name - the name of the file
fs - the sampling frequency
clab - a cell array of channel labels
xpos, ypos - column vectors giving the x and y coordinates of the channels, respectively

Figure 3.12 Contents of the variable nfo

Figure 3.13 Structure of the variable nfo
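As a quick check, a file in this format can be loaded and inspected in Matlab as follows (a sketch; the file name is one of our own data sets, created by the script in the Appendix):

    S = load('sub2_p1_256_without_6_with_eog.mat');   % provides cnt, mrk, nfo
    size(S.cnt)           % time points x channels, class int16
    S.mrk.pos(1:5)        % sample indices of the first five events
    S.mrk.className       % {'left', 'Right'}
    S.nfo.fs              % sampling frequency (256 Hz)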
3.4 SIGNAL PROCESSING

3.4.1 Data with artifacts

In electroencephalographic (EEG) recordings, the presence of artifacts creates uncertainty about the underlying processes and makes analysis difficult. Large amounts of data must often be discarded because of contamination by these artifacts [5].
To increase the effectiveness of BCI systems it is necessary to find methods of increasing the signal-to-noise ratio (SNR) of the observed EEG signals. In the context of EEG-driven BCIs, the signal is endogenous brain activity measured as voltage changes at the scalp, while noise is any voltage change generated by other sources. These noise, or artifact, sources include line noise from the power grid, eye blinks, eye movements, heartbeat, breathing, and other muscle activity. Some artifacts, such as eye blinks, produce voltage changes of much higher amplitude (often over 100 µV) than the endogenous brain activity. In this situation the data must be discarded unless the artifact can be removed from them [5].
Removal of artifacts created by eye movement

Often, regression in the time or frequency domain is performed on parallel EEG and electrooculography (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels [6]. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well [6].
Further steps in pre-processing

In order to remove the 50 Hz line signal we used a stop-band filter, and the signal was then further filtered using a band-pass filter with 7-30 Hz cut-offs. This frequency band (7-30 Hz) was chosen because it encompasses the alpha and beta frequency bands, which have been shown to be most important for movement classification [12].
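A minimal sketch of this filter pipeline (our full filtr function appears in the Appendix), assuming Fs = 256 Hz, a signal sig in time x channels format, and an illustrative FIR order of 100:

    Fs = 256;
    bBand = fir1(100, [7 30]/(Fs/2), 'bandpass');   % FIR band-pass, 7-30 Hz
    bStop = fir1(100, [45 55]/(Fs/2), 'stop');      % FIR stop band around 50 Hz
    sigF = filter(bStop, 1, filter(bBand, 1, sig)); % filter each channel along time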
Because of the application of the CSP algorithm in the feature extraction stage, we also have the advantage of getting rid of some of the artifacts. Through its whitening step, the CSP algorithm removes the components common to both classes: if both classes contain the same signal, it is treated as an artifact for the purpose of separating the classes.
Feature extraction

The CSP algorithm finds the weight matrix W of the channels describing the spatial spread of information. In this design, the spatial filters lead to new time series whose variances are optimal for the discrimination of two populations of EEG related to left and right motor imagery.

CSP is a supervised method: it designs such spatial filters through the simultaneous diagonalization of the two covariance matrices of the left and right motor imagery data. These data sets are collected according to the protocol described above and include class information (the event markers indicate which class each segment belongs to).

The data are multiplied by the spatial filters (the W matrix) to extract the features.
Classification

Here we use the LDA algorithm to project the feature space (using the projection matrix w) into a space in which the classes are separated by the means and variances of the provided feature classes.

Given new data, we then check which class it belongs to: we map the data using the w matrix and pick the cluster with the smallest Mahalanobis distance, calculated as

    dx1 = (x - M1)^2 / V1, dx2 = (x - M2)^2 / V2,

where dx1 and dx2 are the Mahalanobis distances from class 1 and class 2 respectively, M1 and M2 are the means of the projected class 1 and class 2 data, V1 and V2 are the variances of the projected class 1 and class 2 data, and x is the projected data point whose class is to be determined.

Here we use 7 channels, so we have 7-dimensional data; we apply LDA, project to 1-D, and classify there.
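A minimal sketch of this decision rule, using the parameters w, M1, M2, V1, V2 produced by the training step (see ldaini and ldaout in the Appendix):

    y  = w * x;                   % project the feature vector x to 1-D
    d1 = (y - M1)^2 / V1;         % Mahalanobis distance to class 1
    d2 = (y - M2)^2 / V2;         % Mahalanobis distance to class 2
    if d1 < d2, c = 1; else, c = 2; end   % pick the nearer class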
Online processing

Creating a predictor

Before going on to online prediction of the data, a prediction model has to be created from calibration (training) data. Creating the model consists of taking the calibration data and filtering it according to the filter pipeline (Figure 3.14).

Following the pre-processing of the raw signal, epoch extraction is performed. The extracted class epochs are then fed through the function that calculates the spatial filter matrix W, which maximizes the variance difference between the two classes in a given data stream.

Then the classifier parameters are calculated so that the classifier can correctly decide which class a data stream belongs to. With these parameters we have a complete predictor model, which can be used to classify the incoming data stream in real time (online classification).
Figure 3.14 Filter pipeline: raw signal → 50 Hz stop filter → 7-30 Hz band-pass filter → filtered signal
After the predictor parameters have been calculated, the predictor can be applied to the online data stream (Figures 3.15 and 3.16).

The above results could be improved by adding some additional feedback setups to this system; with those, higher accuracy can be reached. However, feedback introduces the drawback of a time delay in the system.
Figure 3.15 Predictor diagram: the recorded signal goes through epoch extraction (class 1, class 2), CSP feature extraction (W matrix), and power calculation, and then trains the LDA classifier [w, V1, V2, M1, M2]

Figure 3.16 Predictor in online operation: the online data stream is passed through the predictor (using the predictor parameters) to produce the result
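Putting the pieces together, the online loop can be sketched as follows. This is an illustration only: getNextWindow is a placeholder for the BioRadio_Matlab_SDK streaming call (not the SDK's actual function name), and bBand, bStop, W, and the LDA parameters are assumed to come from the calibration step:

    running = true;                                      % loop control flag
    while running
        E = getNextWindow();                             % placeholder: channels x samples
        E = filter(bStop, 1, filter(bBand, 1, E')')';    % same filter pipeline as calibration
        Z = W * E;                                       % apply the CSP spatial filters
        p = sum(Z.^2, 2) / size(Z, 2);                   % per-component power feature
        c = ldaout(w, V1, V2, M1, M2, p);                % 1 = left, 2 = right
    end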
CHAPTER 4

RESULTS

4.1 Frequency spectrum of the recorded signal

The recorded signal contains such strong 50 Hz noise that the power at all other frequencies is negligible in comparison (Figure 4.1).
4.2 Frequency spectrum after applying the 7-30 Hz band-pass and 50 Hz stop filters

Figure 4.1 Frequency spectrum of the recorded signal, before applying any filters

Figure 4.2 Frequency spectrum of the signal after applying the 7-30 Hz band-pass and 50 Hz stop-band filters; this is a magnified view of the red portion of Figure 4.1
4.3 Before applying the CSP algorithm

Figure 4.3 Feature plot without CSP applied: a scatter plot showing the power in the 7-30 Hz band for pairs of channels (C4 vs C3, and so on) as coordinates, for the two classes (LEFT: blue, RIGHT: red)

4.4 After applying the CSP algorithm

Figure 4.4 Feature plot with CSP applied: the same scatter plot of band power for the two classes (LEFT: blue, RIGHT: red) after the CSP spatial filters have been applied
4.5 Spatial distribution of the CSP filter

Figure 4.5 Spatial distribution of the CSP filter for sub2_dataset5 (plotted using the BCILAB toolbox)

Figure 4.6 Spatial distribution of the CSP filter for sub2_dataset3 (plotted using the BCILAB toolbox)
4.6 LDA classification accuracy

Figure 4.7 Accuracy plotted against epoch number for subject 2, dataset 5, without and with CSP applied

Figure 4.8 Accuracy for subject 2's datasets without and with CSP applied

In Figure 4.8, the weight matrix W calculated by CSP was created from each of the different calibration data sets in turn; each weight matrix was then applied separately to every data set and the accuracies were calculated.
Table 4.1 Accuracy (%) for subject 2's datasets with and without CSP applied

Data set | Without CSP | W from dataset 1 | W from dataset 2 | W from dataset 3 | W from dataset 4 | W from dataset 5
1        | 46.01       | 71.48            | 62.93            | 49.00            | 46.50            | 57.80
2        | 50.00       | 44.11            | 71.09            | 56.67            | 47.50            | 54.91
3        | 47.33       | 58.17            | 52.38            | 73.33            | 54.50            | 68.50
4        | 46.00       | 57.41            | 60.54            | 55.33            | 72.00            | 74.86
5        | 51.16       | 58.56            | 61.56            | 54.00            | 62.00            | 80.06
CHAPTER 5

DISCUSSION

5.1 Initial Filtering

In the Fourier spectrum results we can see that much of the signal power lies at the 50 Hz line frequency; the power at the other frequencies is very low compared to it. The frequency range we need for correctly classifying the left- and right-hand real movements of a healthy person is 7-30 Hz, so to resolve this we used a 50 Hz band-stop filter.
5.2 Artifacts and experimental protocol

In the analyses above we did not get the good results we expected, because we had considered only line noise and had not considered other artifacts (mainly eye movements and eye blinks). As a result, the signal of interest was swamped by these artifacts.

The experimental protocol was modified to reduce the artifacts. In our earlier protocol, in order to insert the event markers the subject had to look at the keyboard to press the key without error. So whenever he wanted to press the key he moved his eyes down to the keyboard and then back to the display, which created voltage differences; furthermore, the subject's hands always had to lie on the table, which made the subject uncomfortable.

To counter these problems, we used a keyboard's circuit board with only two keys as switches, placed in the armchair where the subject sits. This avoids mis-pressing the keys, so the subject no longer needs to look at the keyboard, which removed most of the eye movement artifacts. We also used EOG electrodes to capture the artifacts created by eye movements.
Following those changes to the data collection protocol, we could not observe any considerable improvement in the accuracy. This may be because one channel was lost in these recordings compared to the previous recordings, due to a technical problem. Furthermore, since we now have separate EOG recordings, we can use them for artifact removal, which should lead to better accuracy.
5.3 On the results of the offline analysis

As shown in Table 4.1, five data sets were analysed. The accuracies 71.48%, 71.09%, 73.33%, 72.00%, and 80.06% were obtained by applying a classifier model created from the same data set it was tested on; when a classifier created from one data set was applied to a different data set, the classification accuracies were not good enough. This difference may be the reason for the failure of the online system development, and it is the major problem that has to be overcome to reach the final goal; to do so we have to analyse the signal variations between different sessions. Considering the accuracies in Figure 4.8, the weight matrix calibrated on dataset 5 gives comparably good results when applied to the other datasets as well, so we took dataset 5 as the calibration data for subject 2.
5.4 Problems encountered

For the best analysis we need more data, so we have to spend much time recording in a noise-free environment; however, room availability and a noise-free environment were always a problem for our recordings. Our data recording hardware (BioRadio) is limited to 8 channels for EEG recording, which makes the calibration data less effective (CSP works better with more EEG channels, ideally at least 32).

The BioRadio has to be placed in line of sight for proper data recording, but in some of our recording sessions the device could not be found by BioCapture (the software used for EEG recording with the BioRadio).

In our early recordings we had a "9999999" error caused by missing packets. We did not know about it at the time, which is one reason we obtained low classification accuracies. When we noticed it and asked the device providers, they reported the same problem in their own recordings.

For online recording the data are collected through the Matlab SDK, so the device (BioRadio) has to be configured by software written in Matlab. In our case the device was configured correctly as documented, but the range of the EEG data changed from recording to recording. We checked with the test pack, which gave the correct value range, so there was no problem in the device itself, but we have not found the exact cause of this error. This left our online prediction system in an uncertain state.

The BioRadio is a battery-powered device, and sometimes its power ran out during a recording, breaking the recording. These recordings are human-based (we used human subjects), so after some time the subjects get tired or their concentration drifts away from the screen, which also affects the recordings.

The last recording was made with the new protocol, but we could not use all 6 channels (we used only 5), which made that recording less efficient.
CHAPTER 6

CONCLUSION AND FUTURE PROPOSAL

People who cannot make muscle movements (the severely paralyzed) need a tool if they are to interact with their environment, and a BCI provides this opportunity. To design such a device, we first tried to develop a system that can classify different finger movements (correctly identifying left or right) on healthy subjects; we plan to improve it further by using imagination of the movement with healthy subjects and then in the patient population. To this end we developed a data recording protocol and then analysed the data offline (not in real time); using our implementation of the CSP algorithm for feature extraction and of the LDA algorithm for classification, we obtained accuracies of up to 80% (elaborated in Table 4.1). We then went further and built an online system that uses our signal processing algorithms to classify (left or right) online (in real time), but it has not given the expected results, due to some practical difficulties. This problem can be solved by getting rid of the major practical issues, such as the packet-loss error, and by analysing the performance of the signal processing algorithms on online data.
Future work to be done

Even though the model's roughly 60% cross-dataset accuracy is not sufficient for an online BCI communication system, the model is capable of working as an online BCI system once feedback techniques are added; however, feedback introduces a time delay into the prediction.

For the classifier, the k-NN algorithm or Kohonen's self-organizing map could be used, and the most suitable classifier algorithm could then be chosen using an ROC analysis.

In the k-NN algorithm, whenever we have a new point (the data to predict) to classify, we find its k nearest neighbours in the training data (classes); the distance is calculated using, e.g., the Euclidean or the Mahalanobis distance, and the nearest class is taken as the prediction, as in the sketch below.
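A minimal Matlab sketch of such a k-NN rule, assuming Xtrain (trials x features), ytrain (labels), a test point x (1 x features), implicit expansion (R2016b or later), and an illustrative k = 5:

    k = 5;                            % illustrative choice of k
    d = sum((Xtrain - x).^2, 2);      % squared Euclidean distance to every training point
    [~, idx] = sort(d);               % nearest neighbours first
    c = mode(ytrain(idx(1:k)));       % majority vote among the k nearest neighbours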
Kohonen's self-organizing map uses a principle of the brain: neurons tend to cluster in groups, and the connections within a group are much greater than the connections with neurons outside the group. Kohonen's network tries to mimic this in a simple way.
References

[1] B. Graimann, B. Allison, and G. Pfurtscheller (Eds.), Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction, The Frontiers Collection, Springer, 2010.
[2] H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehabil. Eng., vol. 8, no. 4, pp. 441-446, 2000.
[3] M. Akcakaya, B. Peters, M. Moghadamfalahi, A. R. Mooney, U. Orhan, B. Oken, D. Erdogmus, and M. Fried-Oken, "Noninvasive brain-computer interfaces for augmentative and alternative communication," IEEE Rev. Biomed. Eng., vol. 7, pp. 31-49, 2014.
[4] J. Ye, "Least squares linear discriminant analysis," in Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 1087-1093.
[5] J. N. Knight, "Signal fraction analysis and artifact removal in EEG," M.S. thesis, Colorado State University, 2003.
[6] T.-P. Jung, S. Makeig, C. Humphries, T.-W. Lee, M. J. McKeown, V. Iragui, and T. J. Sejnowski, "Removing electroencephalographic artifacts by blind source separation," Psychophysiology, vol. 37, no. 2, pp. 163-178, 2000.
[7] M. Welling, "Fisher linear discriminant analysis," lecture note, welling@cs.toronto.edu.
[8] R. Gutierrez-Osuna, "L10: Linear discriminant analysis," CSCE 666 Pattern Analysis, CSE@TAMU.
[9] S. C. Gandevia and J. C. Rothwell, "Knowledge of motor commands and the recruitment of human motoneurons," Brain, vol. 110, no. 5, pp. 1117-1130, 1987.
[10] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, "EEG-based discrimination between imagination of right and left hand movement," Electroenceph. Clin. Neurophysiol., vol. 103, no. 5, pp. 1-10, 1997.
[11] I. Arafat, "Brain-Computer Interface: Past, Present & Future," http://www.academia.edu/1365518/Brain-Computer_Interface_Past_Pr
[12] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, "EEG-based discrimination between imagination of right and left hand movement," Electroenceph. Clin. Neurophysiol., vol. 103, no. 5, pp. 1-10, 1997.
[13] Swartz Center for Computational Neuroscience, University of California San Diego, http://www.sccn.ucsd.edu (accessed 14-10-2014).
APPENDIX
Function for filtering

function [sig_fil2] = filtr(Fs, signal)
% FIR filtering: 7-30 Hz band-pass followed by a 45-55 Hz stop band.
% signal is the raw signal to be pre-processed; it has to be in
% time x channels format.
figure
a1 = fir1(100, [7/(Fs/2) 30/(Fs/2)], 'bandpass');  % 7-30 Hz, normalized by the Nyquist frequency
sig_fil = filter(a1, 1, signal);
plot(sig_fil)
figure
Y = abs(fft(sig_fil));
f = Fs/2 * linspace(0, 1, length(Y)/2);
plot(f, Y(1:length(Y)/2, :))
if Fs/2 > 60
    % FIR stop-band filter, 45-55 Hz
    a2 = fir1(100, [45/(Fs/2) 55/(Fs/2)], 'stop');
    sig_fil2 = filter(a2, 1, sig_fil);
    Y = abs(fft(sig_fil2));
    figure
    f = Fs/2 * linspace(0, 1, length(Y)/2);
    plot(f, Y(1:length(Y)/2, :))
else
    sig_fil2 = sig_fil;
end
end
Script to create matched data

% Create a data set in the cnt/mrk/nfo format described in Section 3.3.
DataSet_name = 'sub2_p1_256_without_6_with_eog';  % name of the data set to create
cnt_new = data(:, 1:6);              % extract the signal data only
add_y = data(:,7) + data(:,8)*2;     % 2 is for 'Right', 1 is for 'Left'
pos = [];     % positions (time points) of the events, in the form [233 244 2334 ... 23322]
y_new = [];   % event vector, in the form [1 2 1 2 1 1 ... 1]
for i = 1:1:length(add_y)
    if add_y(i) == 1
        pos(length(pos)+1) = i;
        y_new(length(y_new)+1) = 1;
    elseif add_y(i) == 2
        pos(length(pos)+1) = i;
        y_new(length(y_new)+1) = 2;
    end
end
cnt = cnt_new;
mrk = struct('pos', pos, 'y', y_new, 'className', {{'left','Right'}});
nfo = struct('name', DataSet_name, 'fs', 256, ...
    'clab', {{'F3','C3','P3','F4','C4','P4'}}, ...
    'xpos', [-0.296077061122885; -0.384615384615385; -0.296077061122885; 0.296077061122885; 0.384615384615385; 0.296077061122885], ...
    'ypos', [0.418716195347552; 0; -0.418716195347552; 0.418716195347552; 0; -0.418716195347552]);
save sub2_p1_256_without_6_with_eog.mat cnt mrk nfo
Function for epoch extraction

function [class1, class2, class0] = Epoch_extract(cnt, mrk)
% ######################################################################
% Inputs:
%   cnt - filtered signal, dim: time x chann
%   mrk - structure with pos, y, className
% Outputs:
%   class1 = Left,  dim: chann x time
%   class2 = Right, dim: chann x time
%   class0 = Rest,  dim: chann x time
% ######################################################################
if length(cnt) == size(cnt,1)
    Ecnt = double(cnt);
else
    error('dimension error: cnt must be time x chann')
end
% Epos is a row vector containing the time points of the event markers
Epos = mrk.pos;
% Ey is a row vector containing the events
Ey = mrk.y;
cl1 = []; cl2 = []; cl0 = [];   % accumulators for the three classes
for i = 1:1:length(Ey)
    if Ey(i) == 1
        cl1 = [cl1; Ecnt(Epos(i)-128 : Epos(i)+896, :)];          % 0.5 s before to 3.5 s after the cue (at 256 Hz)
        if i+1 < length(Ey)
            cl0 = [cl0; Ecnt(Epos(i)+897 : Epos(i+1)-129, :)];    % rest period between trials
        end
    elseif Ey(i) == 2
        cl2 = [cl2; Ecnt(Epos(i)-128 : Epos(i)+896, :)];
        if i+1 < length(Ey)
            cl0 = [cl0; Ecnt(Epos(i)+897 : Epos(i+1)-129, :)];
        end
    end
end
class1 = cl1';   % transpose to chann x time
class2 = cl2';
class0 = cl0';
end
Function for the CSP algorithm

function [W] = mycsp(dat1, dat2)
% ######################################################################
% Inputs:
%   dat1 - dim: chann x time
%   dat2 - dim: chann x time
% Output:
%   W - weight (spatial filter) matrix, dim: chann x chann
% ######################################################################
% CSP: common spatial pattern decomposition.
% This implements H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller,
% "Optimal spatial filtering of single trial EEG during imagined hand
% movement," IEEE Trans. Rehab. Eng., vol. 8, no. 4, pp. 441-446, 2000.
R1 = dat1*dat1';
R1 = R1/trace(R1);
R2 = dat2*dat2';
R2 = R2/trace(R2);
% Ramoser equation (2): composite covariance
Rsum = R1 + R2;
% Find the eigenvalues and eigenvectors of Rsum
[EVecsum, EValsum] = eig(Rsum);
% Sort the eigenvalues in descending order
[EValsum, ind] = sort(diag(EValsum), 'descend');
EVecsum = EVecsum(:, ind);
% Find the whitening transformation matrix - Ramoser equation (3)
% (pinv(diag(EValsum)) inverts the principal diagonal matrix)
P = sqrt(pinv(diag(EValsum))) * EVecsum';
% Whiten the data using the whitening transform - Ramoser equation (4)
S1 = P * R1 * P';
S2 = P * R2 * P';
% Ramoser equation (5): generalized eigenvectors/values
[B, D] = eig(S1, S2);
[D, ind] = sort(diag(D));
B = B(:, ind);
% Resulting projection matrix - these are the spatial filter coefficients
W = B' * P;
end
Function for power extraction

function [class_pow] = pow_ex(class1)
% ######################################################################
% Input:
%   class1 - data from which to extract power, dim: chann x points
% Output:
%   class_pow - per-epoch channel power, dim: trials x channel power
% ######################################################################
class_pow = [];
for epoch = 1:1:fix(length(class1)/1024)
    l = epoch - 1;
    epoch_i = class1(:, l*1024+1 : 1024*epoch);    % one 1024-sample (4 s at 256 Hz) epoch
    pwr = sum((epoch_i.^2)') / length(epoch_i);    % mean power per channel
    class_pow = [class_pow; pwr];
end
end
Functions for LDA

function [w, V1, V2, M1, M2] = ldaini(x1, x2)
% ######################################################################
% Inputs:
%   x1 - class 1 data
%   x2 - class 2 data, dim: trial x channel power
% Outputs:
%   w  - projection matrix, dim: 1 x channel
%   V1 - variance of the projected data belonging to class 1
%   V2 - variance of the projected data belonging to class 2
%   M1 - mean of the projected data belonging to class 1
%   M2 - mean of the projected data belonging to class 2
% ######################################################################
m1 = mean(x1);
m2 = mean(x2);
% within-class scatter of class 1
s1 = [x1(1,:) - m1]' * [x1(1,:) - m1];
for i = 2:1:size(x1,1)
    s1 = s1 + [x1(i,:) - m1]' * [x1(i,:) - m1];
end
s1 = s1 / size(x1,1);
% within-class scatter of class 2
s2 = [x2(1,:) - m2]' * [x2(1,:) - m2];
for i = 2:1:size(x2,1)
    s2 = s2 + [x2(i,:) - m2]' * [x2(i,:) - m2];
end
s2 = s2 / size(x2,1);
% between-class scatter Sb and within-class scatter Sw
Sb = [m1 - m2]' * [m1 - m2];
Sw = s1 + s2;
w = inv(Sw) * [m1 - m2]';       % Fisher direction w = Sw^-1 (m1 - m2)
% project the training data to define the class statistics used later
% to decide which class a new point belongs to
w = w';
x1 = x1';
x2 = x2';
px1 = w * x1;                   % project x1 to px1
px2 = w * x2;                   % project x2 to px2
M1 = sum(px1)/length(px1);      % mean M1 of projected x1
M2 = sum(px2)/length(px2);      % mean M2 of projected x2
V1 = var(px1);                  % variance V1 of projected x1
V2 = var(px2);                  % variance V2 of projected x2
end
function [c] = ldaout(w, V1, V2, M1, M2, x)
% ######################################################################
% Inputs:
%   x  - data whose class is to be checked, dim: channel x 1
%   w  - projection matrix, dim: 1 x channel
%   V1 - variance of the projected data belonging to class 1
%   V2 - variance of the projected data belonging to class 2
%   M1 - mean of the projected data belonging to class 1
%   M2 - mean of the projected data belonging to class 2
% Output:
%   c  - which class x belongs to:
%        1 for class 1
%        2 for class 2
%        3 cannot decide the class
% ######################################################################
% project the input into the classifier space
x = w * x;
% then pick the cluster with the smallest Mahalanobis distance
dx1 = ((x - M1)^2) / V1;
dx2 = ((x - M2)^2) / V2;
if dx1 < dx2
    disp('class 1')
    c = 1;
elseif dx1 > dx2
    disp('class 2')
    c = 2;
else
    disp('cannot decide class 1 or class 2')
    c = 3;
end
end
Script to check the accuracy of our offline data analyser

clear;
clc;
%% use the pre-processed (filtered) signal
cnt = filtrd_sig;
%% extract class1, class2, and class0 (rest) epochs
[class1, class2, class0] = Epoch_extract(cnt, mrk);
% extract the W matrix of CSP
[w] = mycsp(class1, class2);
%% apply the W matrix to separate class1 & class2
class1_w12 = w * class1;
class2_w12 = w * class2;
%% find the power features used to separate them
[class2_w12_pow] = pow_ex(class2_w12);
[class1_w12_pow] = pow_ex(class1_w12);
%% train the classifier on the two feature sets
x1 = class2_w12_pow;
x2 = class1_w12_pow;
%%
[w, V1, V2, M1, M2] = ldaini(x1, x2);
%%
% for the "without CSP" case only, an all-ones weight matrix is used instead
A = [1 1 1 1 1 1];
w_12 = [A; A; A; A; A; A];
[pow, Ey] = power_Epoch_extract(cnt, mrk, w_12);
d = 0;                      % number of correctly classified epochs
ac = [];
for i = 1:1:size(pow,1)
    x = [pow(i,:)]';
    [c] = ldaout(w, V1, V2, M1, M2, x);
    if c == Ey(i)
        d = d + 1;          % count correct classifications
    end
    ac = [ac, (d/i)*100];   % running accuracy in percent
end
plot(ac, '.-')
accuracy = d / size(pow,1)
Brain Computer Interface Next Generation of Human Computer InteractionBrain Computer Interface Next Generation of Human Computer Interaction
Brain Computer Interface Next Generation of Human Computer InteractionSaurabh Giratkar
 
Brain%20 computer%20interface
Brain%20 computer%20interfaceBrain%20 computer%20interface
Brain%20 computer%20interfacegurs
 
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceEEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceWilly Marroquin (WillyDevNET)
 
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For DisablesDetection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For Disablesijsrd.com
 

Similar to Final report-1 (20)

Final Project Report
Final Project ReportFinal Project Report
Final Project Report
 
A SEMINAR REPORT ON BRAIN COMPUTER INTERFACE By Rahul Sharma
A SEMINAR REPORT ON BRAIN COMPUTER INTERFACE By Rahul SharmaA SEMINAR REPORT ON BRAIN COMPUTER INTERFACE By Rahul Sharma
A SEMINAR REPORT ON BRAIN COMPUTER INTERFACE By Rahul Sharma
 
Brain-Computer Interface (BCI)-Seminar Report
Brain-Computer Interface (BCI)-Seminar ReportBrain-Computer Interface (BCI)-Seminar Report
Brain-Computer Interface (BCI)-Seminar Report
 
Brain computer interface
Brain computer interfaceBrain computer interface
Brain computer interface
 
Technology Development for Unblessed people using BCI: A Survey
Technology Development for Unblessed people using BCI: A SurveyTechnology Development for Unblessed people using BCI: A Survey
Technology Development for Unblessed people using BCI: A Survey
 
Brain machine interface
Brain machine interface Brain machine interface
Brain machine interface
 
STUDY OF BRAIN MACHINE INTERFACE SYSTEM
STUDY OF BRAIN MACHINE INTERFACE SYSTEMSTUDY OF BRAIN MACHINE INTERFACE SYSTEM
STUDY OF BRAIN MACHINE INTERFACE SYSTEM
 
Ab044195198
Ab044195198Ab044195198
Ab044195198
 
Brain Computer Interface WORD FILE
Brain Computer Interface WORD FILEBrain Computer Interface WORD FILE
Brain Computer Interface WORD FILE
 
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
 
BRAIN COMPUTER INTERFACE......
BRAIN COMPUTER INTERFACE......BRAIN COMPUTER INTERFACE......
BRAIN COMPUTER INTERFACE......
 
Dr. Frankenstein’s Dream Made Possible: Implanted Electronic Devices
Dr. Frankenstein’s Dream Made Possible:  Implanted Electronic Devices Dr. Frankenstein’s Dream Made Possible:  Implanted Electronic Devices
Dr. Frankenstein’s Dream Made Possible: Implanted Electronic Devices
 
Iisrt ramkumar
Iisrt ramkumarIisrt ramkumar
Iisrt ramkumar
 
Brain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movementBrain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movement
 
System Architecture for Brain-Computer Interface based on Machine Learning an...
System Architecture for Brain-Computer Interface based on Machine Learning an...System Architecture for Brain-Computer Interface based on Machine Learning an...
System Architecture for Brain-Computer Interface based on Machine Learning an...
 
Brain Computer Interface
Brain Computer InterfaceBrain Computer Interface
Brain Computer Interface
 
Brain Computer Interface Next Generation of Human Computer Interaction
Brain Computer Interface Next Generation of Human Computer InteractionBrain Computer Interface Next Generation of Human Computer Interaction
Brain Computer Interface Next Generation of Human Computer Interaction
 
Brain%20 computer%20interface
Brain%20 computer%20interfaceBrain%20 computer%20interface
Brain%20 computer%20interface
 
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interfaceEEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
EEG Mouse:A Machine Learning-Based Brain Computer Interface_interface
 
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For DisablesDetection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
 

Final report-1

  • 5. Figure 4.7 Accuracy for subject 2's datasets 29
LIST OF TABLES
Table 4.1 Accuracy for subject 2 datasets 30
  • 6. CHAPTER 1 INTRODUCTION
1.1 Introduction
Some diseases can lead to a severely paralyzed condition called locked-in syndrome, in which the patient loses all voluntary muscle control. Amyotrophic lateral sclerosis (ALS) is an example of such a disease. The exact cause of ALS is unknown, and there is no cure. ALS starts with muscle weakness and atrophy. Usually all voluntary movement, such as walking, speaking, swallowing, and breathing, deteriorates over several years and is eventually lost completely. The disease, however, does not affect cognitive functions or sensations: people can still see, hear, and understand what is happening around them, but cannot control their muscles. This is because ALS affects only the large alpha motor neurons, which are an integral part of the motor pathways[1]. Once the motor pathway is lost, any natural way of interacting with the environment is lost as well. A brain-computer interface (BCI) offers the only option for communication in such cases. A BCI is a communication channel that does not depend on the brain's normal output pathways of peripheral nerves and muscles, so it provides paralyzed patients with a new way to interact with the environment[1].
The imagination of a limb movement (an arm or a leg) can modify brain electrical activity in a way similar to actual limb motion [9]. Depending on the type of motor imagery, different EEG patterns can be obtained. Activation of hand-area neurons, either by preparation for a real movement or by imagination of the movement, is accompanied by a circumscribed event-related desynchronization (ERD) [1] focused at the hand area of the brain [10]. One possibility for opening a communication channel for these patients is to use electroencephalographic (EEG) signals to control an assistive device that allows, for example, the selection of letters on a screen [2]. Some level of communication can be achieved if at least a binary decision can be made using an EEG-based BCI. To make the binary decision, we used actual left and right hand movement to represent yes (right) and no (left). If it works well on healthy
  • 7. subjects, we plan to improve it further by using imagination of the movement in healthy subjects and then in the patient population.
1.2 Objective
The objective of this study is to develop an algorithm that correctly classifies the left and right hand real movements of a healthy person using multiple channels of EEG data. The final goal is to develop an algorithm that detects and classifies left and right hand movement in real time (an online classifier).
1.3 History of EEG-based BCI
In our society there are people who have severe motor disabilities from birth or due to accidents, so a suitable method of communicating with them is highly needed. EEG-based communication is very important when communicating with a spinal cord injured patient. We therefore first decided to study EEG signals, designed an experimental protocol to record EEG signals from a healthy person, and used the motor imagery area of the brain for the recordings.
1. The discoverer of human EEG signals was Hans Berger (1873-1941). He began his study of human EEGs in 1920. Berger started working with a string galvanometer in 1910, then migrated to a smaller Edelmann model and, after 1924, to a larger Edelmann model. In 1926 Berger started to use the more powerful Siemens double-coil galvanometer (attaining a sensitivity of 130 µV/cm). His first report of human EEG recordings, of one to three minutes duration on photographic paper, was in 1929 [11].
2. The first BCI (Brain Computer Interface) was described by Dr. Grey Walter in 1964. Ironically, this was shortly before the first Star Trek episode aired. Dr. Walter connected electrodes directly to the motor areas of a patient's brain (the patient was undergoing surgery for other reasons). The patient was asked to press a button to advance a slide projector while Dr. Walter recorded the relevant brain activity. Then Dr. Walter connected the system
  • 8. to the slide projector so that it advanced whenever the patient's brain activity indicated that he wanted to press the button. Interestingly, Dr. Walter found that he had to introduce a delay between the detection of the brain activity and the advance of the projector, because otherwise the slide would advance before the patient pressed the button. Control before the actual movement happens, that is, control without movement: the first BCI! [2]
3. 1976: First evidence that BCI can be used for communication. Jacques J. Vidal, the professor who coined the term BCI, of UCLA's Brain Computer Interface Laboratory, provided evidence that single-trial visual evoked potentials could be used as a communication channel effective enough to steer a cursor through a two-dimensional maze. This was the first official proof that the brain can be used to signal to and interface with outside devices [11].
4. The Neil Squire Foundation is a Canadian nonprofit organization whose purpose is to create opportunities for independence for individuals who have significant physical disabilities. Through direct interaction with these individuals, the Foundation researches, develops and delivers appropriate innovative services and technology to meet their needs. Part of the research and development activities of the Foundation, in partnership with the Electrical and Computer Engineering Department at the University of British Columbia, has been to explore methods to realize a direct brain-computer interface (BCI) for individuals with severe motor-related impairments. The ultimate goal of this research is to create an advanced communication interface that will allow an individual with a high-level impairment to have effective and sophisticated control of devices such as wheelchairs, robotic assistive appliances, computers, and neural prostheses [11].
5. 2003: First BCI game exposed to the public; BrainGate developed. BrainGate, a brain implant system, was developed by the bio-tech company
  • 9. Cyberkinetics in conjunction with the Department of Neuroscience at Brown University [11].
6. 2008: First consumer off-the-shelf, mass-market game input device; a high-accuracy BCI wheelchair developed in Japan; Numenta founded to replicate the ability of the human neocortex [11].
7. 2009: Wireless BCI developed. A Spanish company, Starlab, developed a wireless 4-channel system called ENOBIO. Designed for research purposes, the system provides a platform for application development [11].
8. 2011 (January 02): The first thought-controlled social media network was demonstrated, using NeuroSky technology [11].
  • 10. CHAPTER 2 BACKGROUND
2.1 Brain Monitoring Methods
There are three main brain monitoring methods: invasive, partially invasive and non-invasive. Among non-invasive methods, several neural imaging or signal reading techniques are available, such as magnetoencephalography (MEG), magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Among these, EEG is of main interest due to its low cost, convenient operation and non-invasiveness. In present-day EEG-based BCIs, the following signals have received much attention: visual evoked potential (VEP), sensorimotor mu/beta rhythms, the P300 evoked potential, slow cortical potential (SCP), and movement-related cortical potential (MRCP). These systems offer some practical solutions (e.g., cursor movement and word processing) for patients with motor disabilities[1](page 137).
2.2 BCIs Based on the Modulation of Brain Rhythms
Most BCI systems are designed based on the modulation of brain rhythms. Among these, power modulation of mu/beta rhythms is used in BCI systems based on motor imagery. Phase modulation is another method, which has been employed in a steady-state visual evoked potential (SSVEP) based BCI. More generally, evoked potentials can be considered to result partially from a reorganization of the phases of the ongoing EEG rhythms. From the viewpoint of psychophysiology, EEG signals are divided into five rhythms in different frequency bands: delta (0.1-3.5 Hz), theta (4-7.5 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (>30 Hz)[1](page 137). Although the rhythmic character of the EEG has been studied for a long time, many new studies on the mechanisms of brain rhythms emerged after the 1980s. So far, the cellular bases of EEG rhythms are still under investigation.
  • 11. Knowledge of EEG rhythms is limited; however, numerous neurophysiological studies indicate that brain rhythms can reflect changes of brain state caused by stimuli from the environment or by cognitive activities. For example, EEG rhythms can indicate the working or idling state of functional areas of the cortex. The alpha rhythm recorded over the visual cortex is considered an indicator of activity in the visual cortex: a clear alpha wave while the eyes are closed indicates the idling state of the visual cortex, while blocking of the alpha rhythm when the eyes are open reflects its working state[1](page 137).
Another example is the mu rhythm, which can be recorded over the sensorimotor cortex. A significant mu rhythm exists only during the idling state of the sensorimotor cortex; blocking of the mu rhythm accompanies activation of the sensorimotor cortex[1](page 137).
Event-Related Desynchronization and Synchronization (ERD & ERS)
In the study of brain rhythms, ERD and ERS were found to be power modulations caused by the motor areas of the brain: ERD is a decrease of power in the mu band relative to the normal state, while ERS is an increase of power in that band.
2.3 Practical Considerations for Motor Imagery Based BCI
Among EEG-based BCIs, the system based on imagined movement is another active theme, due to its relatively robust performance for communication and its intrinsic neurophysiological significance for studying the mechanism of motor imagery. Moreover, a system based on imagined movement is a totally independent BCI system, which is likely to be more useful for completely paralyzed patients than the SSVEP-based BCI. Most current motor imagery based BCIs rely on the characteristic ERD/ERS spatial distributions corresponding to different motor imagery states, because these give good accuracy. Figure 2.1 displays characteristic mappings of ERD/ERS for one subject corresponding to three motor imagery states, i.e., imagining movements of the left/right hands and the foot. Due to the
  • 12. widespread distribution of ERD/ERS, spatial filtering techniques, e.g., common spatial patterns (CSP), are widely used to obtain a stable system performance. However, due to the limited number of electrodes in a practical system, the electrode layout has to be carefully considered. With only a small number of electrodes, searching for new features using new information processing methods will contribute significantly to classifying motor imagery states[1](page 147).
The above shows that, by using a proper algorithm to detect the ERD/ERS in the EEG signal, for example the CSP algorithm, we can classify a person's intended motor actions. This is the major tool that can be used to build practical BCI systems.
Figure 2.1 - Mappings of ERD/ERS of mu rhythms during motor imagery. ERD over the hand areas has a distribution with contralateral dominance during hand movement imagination. During foot movement imagination, an obvious ERS appears in the central and frontal areas.
  • 13. 2.4 Common Spatial Patterns (CSP) for EEG Feature Extraction
CSP was used by Ramoser et al. [2] to create features for classifying the event-related desynchronization (ERD) in EEG caused by imagined movements. The first and last few CSP components (the spatial filters that maximize the difference in variance) are used to classify the trials with high accuracy. This design uses spatial filters that lead to new time series whose variances are optimal for discriminating two populations of EEG related to left and right motor imagery. The method used to design such spatial filters is based on the simultaneous diagonalization of two covariance matrices[3]. The theory below follows [2] (classification of movement-related EEG).
For the analysis, the raw EEG data of a single trial is represented as an N x T matrix E, where
N - number of channels (i.e., recording electrodes)
T - number of samples per channel.
The normalized spatial covariance of the EEG is obtained from
C = E E^T / trace(E E^T),
where ^T denotes the transpose and trace(X) is the sum of the diagonal elements of X. For each of the two distributions (classes) to be separated (left and right motor imagery), the averaged normalized covariances R_R and R_L are calculated by averaging over the trials of each group. The composite spatial covariance is
Cc = R_R + R_L,
and Cc can be factored as Cc = Uc ε Uc^T (eigenvalue decomposition), where Uc is the matrix of eigenvectors and ε is the diagonal matrix of eigenvalues.
  • 14. Throughout this section the eigenvalues are assumed to be sorted in descending order. The whitening transformation
P = ε^(-1/2) Uc^T
equalizes the variances in the space spanned by Uc. The variance that is common to both movements (classes) carries no discriminative information and would only make the feature extraction harder, so whitening removes it:
P Cc P^T = ε^(-1/2) Uc^T (Uc ε Uc^T) Uc ε^(-1/2) = I,
i.e., all eigenvalues of P Cc P^T are equal to one. If R_R and R_L are transformed as
S_R = P R_R P^T and S_L = P R_L P^T,
then S_R and S_L share common eigenvectors: if S_R = B ε_R B^T and S_L = B ε_L B^T, then
S_R + S_L = B ε_R B^T + B ε_L B^T = B(ε_R + ε_L)B^T = P(R_R + R_L)P^T = P Cc P^T = I,
so ε_R + ε_L = I, where I is the identity matrix. Since the sum of two corresponding eigenvalues is always one, the eigenvector with the largest eigenvalue for S_R has the smallest eigenvalue for S_L, and vice versa. This property makes the eigenvectors useful for classifying the two distributions. The projection of the whitened EEG onto the first and last eigenvectors of B (i.e., those corresponding to the largest ε_R and the largest ε_L) gives feature vectors that are optimal for discriminating the two populations of EEG in the least-squares sense. With the projection matrix W = (B^T P)^T, the decomposition (mapping) of a trial E is given as Z = W E.
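For concreteness, a minimal MATLAB sketch of this computation is given below; E1 and E2 are hypothetical single-class EEG matrices (channels x samples) standing in for the averaged class data, and the sorting of B is omitted for brevity. The project's full implementation is the mycsp function in the Appendix.

R1 = (E1*E1') / trace(E1*E1');     % normalized spatial covariance, class 1
R2 = (E2*E2') / trace(E2*E2');     % class 2
[Uc, lam] = eig(R1 + R2);          % factor the composite covariance Cc
[lamS, idx] = sort(diag(lam), 'descend');
Uc = Uc(:, idx);                   % eigenvalues sorted in descending order
P  = sqrt(pinv(diag(lamS))) * Uc'; % whitening transformation
[B, D] = eig(P*R1*P', P*R2*P');    % common eigenvectors of S_R and S_L
W  = B' * P;                       % CSP projection matrix
Z  = W * E1;                       % decomposition (mapping) of a trial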
  • 15. 2.5 Linear Discriminant Analysis (LDA) for Classification
The original LDA formulation, known as Fisher linear discriminant analysis (FLDA) (Fisher, 1936), deals with binary classification. The key idea in FLDA is to look for a direction that separates the class means well (when projected onto that direction) while achieving a small variance around these means[4].
[8] Assume we have a set of D-dimensional samples {x1, x2, ..., xN}, N1 of which belong to class w1 and N2 to class w2. We seek to obtain a scalar y by projecting the samples x onto a line:
y = w^T x.
Among all possible lines, we select the one that maximizes the separability of the projected scalars, as shown in Fig 2.2.
Fig 2.2 Projection of the x samples onto a line: an arbitrary projection, and the projection that maximizes the separability of the scalars
In order to find a good projection vector, we need to define a measure of separation. The mean vector of each class in x-space and y-space is
m_i = (1/N_i) * sum of x over class i, and m~_i = (1/N_i) * sum of y over class i = w^T m_i.
We could then choose the distance between the projected means as the objective function:
J(w) = |m~_1 - m~_2| = |w^T (m_1 - m_2)|.
However, the distance between projected means is not a good measure, since it does not account for the standard deviation within the classes, so it does not yield a good projection, as shown in Fig 2.3.
  • 16. Fig 2.3 Projected class samples separated by considering only the difference between the projected means of the corresponding classes
Fisher suggested maximizing the difference between the means, normalized by a measure of the within-class scatter. For each class we define the scatter, an equivalent of the variance, as
s~_i^2 = sum over y in class i of (y - m~_i)^2.
The Fisher linear discriminant is then defined as the linear function w^T x that maximizes the criterion function
J(w) = (m~_1 - m~_2)^2 / (s~_1^2 + s~_2^2).
We are therefore looking for a projection where examples from the same class are projected very close to each other while, at the same time, the projected means are as far apart as possible, as shown in Fig 2.4.
Fig 2.4 Projection of class samples separated by the difference between projected means, normalized by the standard deviation within the classes
  • 17. First, we define a measure of the scatter in the original feature space:
S_i = sum over x in class i of (x - m_i)(x - m_i)^T, with S_W = S_1 + S_2,
where S_W is called the within-class scatter matrix. The scatter of the projection can then be expressed as a function of the scatter matrix in feature space:
s~_i^2 = sum (w^T x - w^T m_i)^2 = w^T S_i w, so s~_1^2 + s~_2^2 = w^T S_W w.
Similarly, the difference between the projected means can be expressed in terms of the means in the original feature space:
(m~_1 - m~_2)^2 = (w^T m_1 - w^T m_2)^2 = w^T (m_1 - m_2)(m_1 - m_2)^T w = w^T S_B w.
The matrix S_B = (m_1 - m_2)(m_1 - m_2)^T is called the between-class scatter. Note that, since S_B is the outer product of two vectors, its rank is at most one.
Finally, we can express the Fisher criterion in terms of S_W and S_B as
J(w) = (w^T S_B w) / (w^T S_W w).
To find the maximum of J(w) we differentiate with respect to w and equate to zero:
(w^T S_W w) S_B w - (w^T S_B w) S_W w = 0.
Dividing by w^T S_W w gives
S_B w - J(w) S_W w = 0.
  • 18. Solving the generalized eigenvalue problem (S_W^-1 S_B w = J w) yields
w* = argmax J(w) = S_W^-1 (m_1 - m_2).
This is known as Fisher's linear discriminant (1936), although it is not a discriminant as such but rather a specific choice of direction for the projection of the data down to one dimension.
In order to classify a test point we still need to divide the space into regions which belong to each class. The easiest possibility is to pick the cluster with the smallest Mahalanobis distance:
class(x) = argmin over c of (w^T x - µ_c)^2 / σ_c^2,
where µ_c and σ_c represent the class mean and standard deviation in the 1-D projected space, respectively[7].
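As a compact illustration, the following MATLAB sketch applies this rule; X1 and X2 are hypothetical feature matrices (trials x dimensions; implicit expansion needs R2016b or later), and the project's actual implementation is the ldaini/ldaout pair in the Appendix.

m1 = mean(X1);  m2 = mean(X2);       % class means (row vectors)
S1 = (X1 - m1)' * (X1 - m1);         % within-class scatter, class 1
S2 = (X2 - m2)' * (X2 - m2);         % within-class scatter, class 2
w  = (S1 + S2) \ (m1 - m2)';         % Fisher direction, w = Sw^-1 (m1 - m2)
y1 = X1 * w;  y2 = X2 * w;           % 1-D projections of the training data
x  = X2(1, :);                       % example query point
d1 = (x*w - mean(y1))^2 / var(y1);   % Mahalanobis distance to class 1
d2 = (x*w - mean(y2))^2 / var(y2);   % and to class 2
label = 1 + (d2 < d1);               % 1 -> class 1, 2 -> class 2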
  • 19. CHAPTER 3 METHODS
3.1 Model
The model measures the EEG signal from the scalp, extracts the intention of the person in response to an external stimulus, and produces a visual output, such as displaying the result on a computer screen. The online BCI consists of signal acquisition (with an experimental protocol for data collection), signal processing (feature extraction and a translation algorithm), and the resulting device commands, e.g., answering yes/no to a question such as "Is 4 > 3?".
Fig 3.1 - Model of the online BCI
3.2 Experimental Design & Data Acquisition
First, we need a proper place, because extraneous sounds and sights can interfere with the mental state of the subject who participates in the experiment; we used a silent room in our lab. Then the necessary electrode points have to be identified accurately; electrode points were selected under the international 10-20 system. After that we prepared the skin of the subject and connected all the electrodes. We used 6 channels under the international 10-20 system (a hardware limitation of the BioRadio),
  • 20. with the left mastoid as reference and FPz as ground; we also used two separate electrodes to measure EOG artifacts.
Electrode Locations
Electrode positions were selected under the international 10-20 system, and two separate electrodes were used to measure EOG artifacts in the new protocol, because we observed that most artifacts were created by eye movements and eye blinks.
Fig 3.2 - Electrode placement diagram
Instrument used for data acquisition: BioRadio
Fig 3.3 Left - BioRadio during data collection. Right - resources used for data collection
  • 21. 3.2.1 Preparation
Since we use wet electrodes, some standard procedures have to be followed:
1. Find the electrode positions on the subject's head as shown in Fig 3.2.
2. Remove some hair at that spot only, clean with "curity" (alcohol preparation) paper, and clean with "Nuroprep" to remove dead skin cells.
3. Apply "Ten20" conductive paste to improve contact with the surface of the head and reduce the resistance created by poor contact.
Fig 3.4 Skin preparation and electrode placement
3.2.2 EEG Recording
The recording was made with an 8-channel EEG amplifier from BioRadio (a hardware limitation of 8 channels, as shown in Fig 3.3), using the left mastoid as reference and FPz as ground; two electrodes were used to record EOG artifacts. The EEG was sampled at 256 Hz. The data of all runs was saved in comma-separated-values (CSV) format by the BioCapture software.
3.2.3 Paradigm
The subject sat in a relaxing chair with arm rests. The task was to perform left hand or right hand movements (the subject himself had to lift his hand and perform a specific action until the stimulus disappeared) according to an external cue. The order of presentation of the cues was random. The experiment consists of 300 trials. After a trial begins, the first 20 seconds are quiet; at t = 0 s an acoustic stimulus indicates the start of the session and a cross ("+") is displayed on the computer screen; then at t = 20 s the letter L or R is displayed for 3.5 s. At the same time the subject
  • 22. was asked to perform a left/right hand movement according to the cue and hold it until the cross reappeared at t = 23.5 s. This continued until 300 cues had been displayed.
Figure 3.5 Timing diagram of the data collection protocol
Figure 3.6 Left - new protocol chair setup. Right - event marker generator
3.2.4 EEG Signal Acquisition Using the BioRadio Amplifier
Signal acquisition for offline analysis
During calibration, data recorded by the BioRadio device is collected on the computer using the "BioCapture" software provided with the BioRadio product.
Figure 3.7 BioCapture GUI interface
  • 23. The data recorded by BioCapture (Figure 3.7) can be saved as comma-separated values (.CSV format) and then has to be imported into Matlab (a sketch of this import step follows at the end of this page).
Configuration of inputs in BioCapture
The input types and channels of the BioRadio are configured via the BioRadio configuration window (Figure 3.8).
Figure 3.8 BioCapture configuration window
Event marker generation
When recording calibration data, event markers have to be inserted to identify where each event actually happened in time. To do so, two keys are selected as markers and a mechanism is needed to press them; letting the subject perform that task himself, however, leads to serious artifacts. To avoid this problem we developed an arrangement that generates the event marker automatically while the subject lifts his hand; the setup is shown in Figure 3.6. We used a keyboard's internal circuit, wired to produce only the two key options ("A", "Z"), placed under the hands separately. When the stimulus is presented, the subject can press the button (a finger movement) without having to look at the keyboard.
Signal acquisition for online analysis
In the online (real-time classification) process the data needs to be streamed into Matlab; the BioRadio_Matlab_SDK software development kit was used to feed the data into the Matlab environment.
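As referenced above, a sketch of the offline CSV import step (the file name here is hypothetical):

data = readmatrix('sub2_dataset5.csv');   % time points x channels (plus marker columns)
% On older Matlab releases, csvread('sub2_dataset5.csv') reads the same file.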
  • 24. Figure 3.9 BioRadio Matlab SDK functions
The BioRadio_Matlab_SDK functions can be used to control the BioRadio system and to stream the data into the Matlab environment.
3.3 Data Format
Loading the matched data into BCILAB
BCILAB and EEGLAB are MATLAB toolboxes used for the design, prototyping, testing, experimentation, and evaluation of brain-computer interfaces (BCIs) and other systems in the same computational framework. The toolbox was developed by the Swartz Center for Computational Neuroscience, University of California San Diego[13].
When loading the data for calibration of the model, the data has to be in a format supported by BCILAB; the .MAT format can be used. Inside the file, the variables have to be arranged in a specific way so that details such as the data array, sampling frequency, target markers, task classes, and the time points of the target classes can be extracted.
Data content
The data has to contain three variables:
1. cnt
2. mrk
3. nfo
  • 25. cnt - a plain matrix of class int16, dim: time points x channels.
mrk - a structure (class "struct") with three fields:
1. pos - <1x280 double>
2. y - <1x280 double>
3. className - <1x2 cell>
How these three fields hold the information is shown in Figure 3.11:
pos - the time points (sample indices) at which each event occurs;
y - the class to which each of those time points belongs;
className - which number corresponds to which class.
Figure 3.10 Matlab data array for the signal
Figure 3.11 Structure of the variable mrk
  • 26. nfo - the variables contained in nfo are shown in Figure 3.13:
name - name of the file;
fs - sampling frequency;
clab - a cell array with the channel labels;
xpos, ypos - column vectors with the x and y coordinates of the channels, respectively.
Figure 3.12 Contents of the variable nfo
Figure 3.13 Structure of the variable nfo
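A quick sketch of loading and inspecting such a file (the file name is hypothetical; the variables match the layout described above and the creation script in the Appendix):

S = load('sub2_dataset5.mat');   % hypothetical file created as in the Appendix
size(S.cnt)                      % time points x channels, class int16
S.mrk.pos(1:5)                   % sample indices of the first five cues
S.mrk.className                  % {'Left','Right'}
S.nfo.fs                         % sampling frequency, e.g. 256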
  • 27. 3.4 SIGNAL PROCESSING
3.4.1 Data with artifacts
In electroencephalographic (EEG) recordings the presence of artifacts creates uncertainty about the underlying processes and makes analysis difficult. Large amounts of data must often be discarded because of contamination by these artifacts[5]. To increase the effectiveness of BCI systems it is necessary to find methods of increasing the signal-to-noise ratio (SNR) of the observed EEG signals. In the context of EEG-driven BCIs, the signal is endogenous brain activity measured as voltage changes at the scalp, while noise is any voltage change generated by other sources. These noise, or artifact, sources include: line noise from the power grid, eye blinks, eye movements, heartbeat, breathing, and other muscle activity. Some artifacts, such as eye blinks, produce voltage changes of much higher amplitude (often over 100 µV) than the endogenous brain activity. In this situation the data must be discarded unless the artifact can be removed[5].
Removal of artifacts created by eye movement
Often, regression in the time or frequency domain is performed on parallel EEG and electrooculography (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels[6]. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably subtracts relevant EEG signals from each record as well[6].
Further steps in pre-processing
To remove the 50 Hz line signal we used a stop-band filter, and the signal was further filtered using a band-pass filter with 7-30 Hz cut-offs. This frequency band was chosen because it encompasses the alpha and beta bands, which have been shown to be most important for movement classification[12].
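A sketch of how this filtering step is invoked, using the filtr function listed in the Appendix (raw_sig is a hypothetical time x channels recording):

Fs = 256;                         % sampling frequency of the BioRadio recording
filtrd_sig = filtr(Fs, raw_sig);  % 7-30 Hz band pass followed by a 45-55 Hz band stop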
  • 28. The application of the CSP algorithm in the feature extraction stage also removes some of the artifacts: through the whitening process, CSP removes components that are common to both classes, and if both classes contain the same signal it is treated as an artifact (for the purpose of separating the classes).
Feature extraction
The CSP algorithm finds the weight matrix (W) of the channels for the spatial spread of information. In this design, spatial filters lead to new time series whose variances are optimal for discriminating the two populations of EEG related to left and right motor imagery. This is a supervised method; designing such spatial filters is based on the simultaneous diagonalization of the two covariance matrices of the left and right motor imagery data. These data sets are collected according to the protocol described above, and the data includes the class information (the event markers indicate which class each trial belongs to). The data is multiplied by the spatial filters (the W matrix) to extract the features.
Classification
We use the LDA algorithm to project the feature space (using the projection matrix w) into another space in which each class is separated by the mean and variance of the provided feature classes. To decide which class a new data point belongs to, we map the data using the w matrix and then pick the cluster with the smallest Mahalanobis distance. A feature vector x is first projected as y = w x; the Mahalanobis distances are then
dx1 = (y - M1)^2 / V1 and dx2 = (y - M2)^2 / V2,
where dx1, dx2 are the Mahalanobis distances from class 1 and class 2 respectively,
M1 = mean of the projected class 1 data,
M2 = mean of the projected class 2 data,
  • 29. V1 = variance of the projected class 1 data,
V2 = variance of the projected class 2 data,
x = the data point whose class we want to determine.
Here we use 7 channels, so we have 7-dimensional feature data; we apply LDA to project it to one dimension and classify there.
Online processing
Creating a predictor
Before the data can be predicted online, a prediction model has to be created from calibration (training) data. Creating the model consists of taking the calibration data and filtering it according to the filter pipeline (raw signal -> 50 Hz stop filter -> 7-30 Hz band-pass filter -> filtered signal).
Figure 3.14 Filter pipeline
Following the pre-processing of the raw signal, epochs are extracted. The extracted classes are then fed through the function that calculates the spatial filter matrix W, which makes the variance ratio between the two classes high in the filtered data stream. A classifier parameter set is then calculated so that a data stream can be correctly assigned to its class. With these parameters we have a complete predictor model that can be used to classify the incoming data stream in real time (online classification).
  • 30. After the predictor parameters have been calculated, the predictor can be applied to the online data stream (recorded signal -> epoch extraction -> CSP feature extraction with the W matrix -> power calculation -> LDA classifier with parameters [w, V1, V2, M1, M2] -> result).
Figure 3.15 Predictor diagram
Figure 3.16 Predictor in online operation
These results can be improved by adding feedback to the system, with which higher accuracy can be reached; however, this introduces a time delay into the system.
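A hedged sketch of one online prediction step, assembled from the Appendix functions (filtr, pow_ex, ldaout); get_next_window is a hypothetical stand-in for the BioRadio_Matlab_SDK streaming call, and W, w, V1, V2, M1, M2 come from the calibration stage:

chunk = get_next_window();        % hypothetical: latest 4 s of data, time x channels
chunk = filtr(256, chunk);        % same filter pipeline as in calibration
feat  = pow_ex(W * chunk');       % CSP projection, then band power per component
cls   = ldaout(w, V1, V2, M1, M2, feat');   % 1 = Left, 2 = Right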
  • 31. CHAPTER 4 RESULTS
4.1 Frequency spectrum of the recorded signal
The 50 Hz line noise in the signal is so strong that the other signal powers appear negligible in comparison.
Figure 4.1 Frequency spectrum of the recorded signal, before any filters were applied
4.2 Frequency spectrum after the 7-30 Hz band pass with 50 Hz stop filter was applied
Figure 4.2 Frequency spectrum of the signal after applying the 7-30 Hz band-pass and 50 Hz stop-band filters; this is a magnified view of the red portion of Fig 4.1
  • 32. 4.3 Before applying the CSP algorithm
Figure 4.3 Scatter plot of the power in the 7-30 Hz band for pairs of channels (e.g., C4 vs C3) as coordinates, for the two classes (Left - blue, Right - red), without CSP applied
4.4 After applying the CSP algorithm
Figure 4.3 The same scatter plot of 7-30 Hz band power for the two classes (Left - blue, Right - red), with CSP applied
  • 33. 4.5 Spatial distribution of the CSP filter
Figure 4.4 Spatial distribution of the CSP filter for sub2_dataset5 (plotted using the BCILAB toolbox)
Figure 4.5 Spatial distribution of the CSP filter for sub2_dataset3 (plotted using the BCILAB toolbox)
  • 34. 4.6 LDA classification accuracy
Figure 4.6 Accuracy plotted against epoch number for subject 2, dataset 5, without CSP applied and with CSP applied
Figure 4.7 Accuracy for subject 2's datasets, without CSP applied and with CSP applied
In the plot above, the CSP weight matrix (W) was computed from each calibration data set in turn, then applied separately to every data set, and the resulting accuracies were calculated.
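A hedged sketch of the cross-dataset test behind these numbers, built from the Appendix functions; cnt_train/mrk_train and cnt_test/mrk_test are hypothetical pre-filtered calibration and evaluation sets:

[c1tr, c2tr] = Epoch_extract(cnt_train, mrk_train);   % calibration epochs
W = mycsp(c1tr, c2tr);                                % CSP weights from calibration
[w, V1, V2, M1, M2] = ldaini(pow_ex(W*c1tr), pow_ex(W*c2tr));
[c1te, c2te] = Epoch_extract(cnt_test, mrk_test);     % evaluation epochs
f1 = pow_ex(W*c1te);  f2 = pow_ex(W*c2te);            % features per evaluation trial
correct = 0;
for i = 1:size(f1,1)
    correct = correct + (ldaout(w,V1,V2,M1,M2, f1(i,:)') == 1);
end
for i = 1:size(f2,1)
    correct = correct + (ldaout(w,V1,V2,M1,M2, f2(i,:)') == 2);
end
acc = 100 * correct / (size(f1,1) + size(f2,1));      % accuracy in percent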
  • 35. Table 4.1 Accuracy (%) for subject 2's datasets, with and without CSP applied

Data set | Without CSP | W from set 1 | W from set 2 | W from set 3 | W from set 4 | W from set 5
1        | 46.01       | 71.48        | 62.93        | 49.00        | 46.50        | 57.80
2        | 50.00       | 44.11        | 71.09        | 56.67        | 47.50        | 54.91
3        | 47.33       | 58.17        | 52.38        | 73.33        | 54.50        | 68.50
4        | 46.00       | 57.41        | 60.54        | 55.33        | 72.00        | 74.86
5        | 51.16       | 58.56        | 61.56        | 54.00        | 62.00        | 80.06
  • 36. CHAPTER 5 DISCUSSION
5.1 Initial Filtering
The Fourier spectrum shows that most of the signal power sits at the 50 Hz line frequency; the power at other frequencies is very low in comparison. The frequency range of interest for classifying left and right hand real movement is 7-30 Hz, so we applied a 50 Hz band-stop filter to resolve this.
5.2 Artifacts and experimental protocol
The above analyses did not give the good results we expected, because we only considered line noise and ignored other artifacts (mainly eye movements and eye blinks), so the signal of interest was swamped by these artifacts. The experimental protocol was therefore modified to reduce the artifacts. In our earlier protocol, to insert the event markers, the subject had to look at the keyboard in order to press the correct key: each time he wanted to press a key he moved his eyes down to the keyboard and then back to the display, which produced voltage differences, and his hands always had to lie on the table, which was uncomfortable. To counter this, we used a keyboard circuit board with only two keys as switches, placed in the armchair where the subject sits; this avoids mispressed keys and means the subject no longer needs to look at the keyboard, which removed most of the eye movement artifacts. We also used EOG electrodes to capture the artifacts created by eye movements.
Following those changes to the data collection protocol, we could not observe any considerable improvement in accuracy. That may be because one channel fewer was recorded than in the previous recordings, due to a technical problem; furthermore, since we recorded the EOG separately, artifact removal based on it should lead to better accuracy.
  • 37. 5.3 On the results of the offline analysis
As shown in Table 4.1, for the five data sets discussed, the accuracies were 71.48%, 71.09%, 73.33%, 72.00% and 80.06% when the classifier model was created from the same data set it was applied to. When a classifier created from one data set was applied to another data set, however, the classification accuracies were not good enough. This discrepancy may be the reason the online system development was unsuccessful, and it is the major problem that has to be overcome to reach the final goal; to do that, we have to analyse how the signal varies between sessions. Considering the accuracies in Figure 4.7, the CSP weights calibrated on dataset 5 gave comparably good results when applied to the other data sets as well, so we took dataset 5 as the calibration data for subject 2.
5.4 Problems encountered
For the best analysis we need more data, so much time has to be spent on recordings in a noise-free environment, but room availability and a noise-free environment were constant problems. Our recording hardware (BioRadio) is limited to 8 channels for EEG, which makes the calibration data less effective (CSP works better with more EEG channels, at least 32). The BioRadio has to be placed in line of sight for proper data recording, but during some recordings the device could not be found by BioCapture (the software used for EEG recording with the BioRadio). In our early recordings we saw a "9999999" error caused by missing packets; we did not know about it at the time, which is why we had low classification accuracy. When we encountered it and asked the device providers, they reported the same problem in their own recordings. For online recording the data is collected through the Matlab SDK, so the device (BioRadio) has to be configured by software written in Matlab. In our case the device was configured correctly as documented, but the range of the EEG values changed
  • 38. from recording to recording. We checked with the test pack, which gave the correct value range, so there is no problem in the device itself, but we have not found the exact cause of this error; it left our online prediction system in an uncertain state. The BioRadio is a battery-powered device, and sometimes its power ran out during a recording, interrupting it. These recordings are human-based (we use subjects for recording), so after some time the subjects get tired, or their concentration drifts away from the screen, which also affects the recordings. The last recording was made with the new protocol, but we could not use all 6 channels (we used only 5), which made that recording less useful.
  • 39. CHAPTER 6 CONCLUSION AND FUTURE PROPOSAL
People who cannot make muscle movements (the severely paralyzed) need a tool if they want to interact with the environment, and a BCI provides this opportunity. To design such a device, we first tried to develop a system that can classify different hand movements (identifying Left or Right correctly) in healthy subjects; we plan to improve it further by using imagination of the movement in healthy subjects and then in the patient population. To do this we developed the data recording protocol and then analysed the data offline (not in real time); using our implementation of the CSP algorithm for feature extraction and the LDA algorithm for classification, we obtained accuracies of up to 80% (elaborated in Table 4.1). We then went further and built an online system using our signal processing algorithms to classify (Left or Right) online (in real time), but it did not give the expected results, due to some practical difficulties. This can be solved by eliminating the major practical issues, such as the packet loss error, and by analysing the performance of the signal processing algorithms on online data.
Future work to be done
Even though roughly 60% accuracy is not sufficient for an online BCI communication system, the model could perform as an online BCI system with the addition of feedback techniques; however, this introduces a time delay into the prediction. For the classifier, the k-NN algorithm or Kohonen's self-organizing map could be used, and the most suitable classification algorithm could then be found using an ROC map. In the k-NN algorithm, whenever we have a new point (the data to predict) to classify, we find its k nearest neighbours in the training data (classes); the distance is calculated using, e.g., the Euclidean or Mahalanobis distance, and the new point is assigned to the class most common among those neighbours (a sketch follows the next paragraph). Kohonen's self-organizing map uses a principle of the brain: in the brain,
  • 40. neurons tend to cluster in groups, and the connections within a group are much stronger than the connections with neurons outside the group. Kohonen's network tries to mimic this in a simple way.
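A minimal sketch of the k-NN rule referenced above, assuming hypothetical training features Xtr (trials x dimensions) with labels ytr and a query point x (row vector; implicit expansion needs R2016b or later):

k = 5;                              % number of neighbours to consult
d = sqrt(sum((Xtr - x).^2, 2));     % Euclidean distance to every training trial
[~, idx] = sort(d);                 % nearest trials first
label = mode(ytr(idx(1:k)));        % majority vote among the k nearest neighbours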
  • 41. References
[1] B. Graimann, B. Allison, and G. Pfurtscheller (Eds.), Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction (The Frontiers Collection), Springer, 2010.
[2] H. Ramoser, J. Muller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement," IEEE Trans. Rehabil. Eng., vol. 8, no. 4, pp. 441-446, 2000.
[3] M. Akcakaya, B. Peters, M. Moghadamfalahi, A. R. Mooney, U. Orhan, B. Oken, D. Erdogmus, and M. Fried-Oken, "Noninvasive brain-computer interfaces for augmentative and alternative communication," IEEE Rev. Biomed. Eng., vol. 7, pp. 31-49, 2014.
[4] J. Ye, "Least squares linear discriminant analysis," in Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 1087-1093.
[5] J. N. Knight, "Signal fraction analysis and artifact removal in EEG," Colorado State University, 2003.
[6] T.-P. Jung, S. Makeig, C. Humphries, T.-W. Lee, M. J. McKeown, V. Iragui, and T. J. Sejnowski, "Removing electroencephalographic artifacts by blind source separation," Psychophysiology, vol. 37, no. 2, pp. 163-178, 2000.
[7] Max Welling, "Fisher linear discriminant analysis," lecture note, welling@cs.toronto.edu.
[8] Ricardo Gutierrez-Osuna, "L10: Linear discriminant analysis," CSCE 666 Pattern Analysis, CSE@TAMU.
[9] S. C. Gandevia and J. C. Rothwell, "Knowledge of motor commands and the recruitment of human motoneurons," Brain, vol. 110, no. 5, pp. 1117-1130, 1987.
[10] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, "EEG-based discrimination between imagination of right and left hand movement," Electroencephalography and Clinical Neurophysiology, vol. 103, no. 5, pp. 1-10, 1997.
[11] Ibrahim Arafat, "Brain-Computer Interface: Past, Present & Future," http://www.academia.edu/1365518/Brain-Computer_Interface_Past_Pr
[12] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, "EEG-based discrimination between imagination of right and left hand movement," Electroencephalography and Clinical Neurophysiology, vol. 103, no. 5, pp. 1-10, 1997.
[13] Swartz Center for Computational Neuroscience, University of California San Diego, http://www.sccn.ucsd.edu (accessed 14-10-14).
  • 42. APPENDIX
Function for filtering

function [sig_fil2] = filtr(Fs, signal)
% FIR filtering: 7-30 Hz band pass followed by a 45-55 Hz band stop.
% signal is the raw signal to be pre-processed; it has to be in
% time x channels format. Fs is the sampling frequency in Hz.
figure
% 7-30 Hz band pass, cut-offs normalized by the Nyquist frequency
% (for Fs = 256: [0.0547 0.2344])
a1 = fir1(100, [7/(Fs/2) 30/(Fs/2)], 'band');
sig_fil = filter(a1, 1, signal);
plot(sig_fil)
figure
Y = abs(fft(sig_fil));
f = Fs/2 * linspace(0, 1, length(Y)/2);
plot(f, Y(1:length(Y)/2, :))
if Fs/2 > 60
    % FIR stop-band filter, 45-55 Hz (for Fs = 256: [0.3516 0.4297])
    a2 = fir1(100, [45/(Fs/2) 55/(Fs/2)], 'stop');
    sig_fil2 = filter(a2, 1, sig_fil);
    Y = abs(fft(sig_fil2));
    figure
    f = Fs/2 * linspace(0, 1, length(Y)/2);
    plot(f, Y(1:length(Y)/2, :))
else
    sig_fil2 = sig_fil;
end
end
  • 43. Script to create matched data

% Create a data set in the cnt/mrk/nfo format described in Section 3.3.
DataSet_name = 'sub2_p1_256_without_6_with_eog';  % name of the data set to create
cnt_new = data(:, 1:6);            % extract the signal data only (time x channels)
add_y = data(:,7) + data(:,8)*2;   % 2 is for 'Right', 1 is for 'Left'
pos = [];     % positions (time points) of the events, e.g. [233 244 2334 ...]
y_new = [];   % event vector, e.g. [1 2 1 2 1 1 ... 1]
for i = 1:1:length(add_y)
    if add_y(i) == 1
        pos(length(pos)+1) = i;
        y_new(length(y_new)+1) = 1;
    elseif add_y(i) == 2
        pos(length(pos)+1) = i;
        y_new(length(y_new)+1) = 2;
    end
end
cnt = cnt_new;
mrk = struct('pos', pos, 'y', y_new, 'className', {{'Left','Right'}});
nfo = struct('name', DataSet_name, 'fs', 256, ...
    'clab', {{'F3','C3','P3','F4','C4','P4'}}, ...
    'xpos', [-0.296077061122885; -0.384615384615385; -0.296077061122885; ...
              0.296077061122885;  0.384615384615385;  0.296077061122885], ...
    'ypos', [ 0.418716195347552; 0; -0.418716195347552; ...
              0.418716195347552; 0; -0.418716195347552]);
save sub2_p1_256_without_6_with_eog.mat cnt mrk nfo
  • 44. Function for epoch extraction

function [class1, class2, class0] = Epoch_extract(cnt, mrk)
% Inputs:
%   cnt - filtered signal, dim: time x chann
%   mrk - struct with fields pos, y, className
% Outputs:
%   class1 - Left epochs,  dim: chann x time (epochs concatenated)
%   class2 - Right epochs, dim: chann x time
%   class0 - Rest segments between epochs, dim: chann x time
if length(cnt) == size(cnt, 1)
    Ecnt = double(cnt);
else
    error('dimension error: cnt must be time x chann')
end
Epos = mrk.pos;   % row vector of event-marker time points (sample indices)
Ey   = mrk.y;     % row vector of event labels (1 = Left, 2 = Right)
cl1 = []; cl2 = []; cl0 = [];   % accumulators (time x chann), must start empty
for i = 1:1:length(Ey)
    % epoch: 128 samples (0.5 s) before the cue to 895 samples after it,
    % i.e. 1024 samples = 4 s at 256 Hz, matching the 1024-sample windows
    % used in pow_ex
    if Ey(i) == 1
        cl1 = [cl1; Ecnt(Epos(i)-128 : Epos(i)+895, :)];
    elseif Ey(i) == 2
        cl2 = [cl2; Ecnt(Epos(i)-128 : Epos(i)+895, :)];
    end
    if i+1 < length(Ey)
        % rest segment between the end of this epoch and the next cue
        cl0 = [cl0; Ecnt(Epos(i)+897 : Epos(i+1)-129, :)];
    end
end
% transpose so the outputs are chann x time, as expected by mycsp
class1 = cl1'; class2 = cl2'; class0 = cl0';
end
  • 45. Function for the CSP algorithm

function [W] = mycsp(dat1, dat2)
% Common spatial pattern (CSP) decomposition.
% Inputs:
%   dat1, dat2 - single-class EEG data, dim: chann x time
% Output:
%   W - spatial filter (weight) matrix, dim: chann x chann
% This implements Ramoser H., Muller-Gerking J., Pfurtscheller G.,
% "Optimal spatial filtering of single trial EEG during imagined hand
% movement," IEEE Trans. Rehab. Eng. 8 (2000), pp. 441-446.
R1 = dat1*dat1';
R1 = R1/trace(R1);
R2 = dat2*dat2';
R2 = R2/trace(R2);
% Ramoser equation (2)
Rsum = R1 + R2;
% find eigenvalues and eigenvectors of the composite covariance
[EVecsum, EValsum] = eig(Rsum);
% sort eigenvalues in descending order
[EValsum, ind] = sort(diag(EValsum), 'descend');
EVecsum = EVecsum(:, ind);
% whitening transformation matrix - Ramoser equation (3)
P = sqrt(pinv(diag(EValsum))) * EVecsum';
% whiten the data using the whitening transform - Ramoser equation (4)
S1 = P * R1 * P';
S2 = P * R2 * P';
% Ramoser equation (5): generalized eigenvectors/values
[B, D] = eig(S1, S2);
[D, ind] = sort(diag(D));
B = B(:, ind);
% resulting projection matrix - these are the spatial filter coefficients
W = B' * P;
end
  • 46. Power extraction function

function [class_pow] = pow_ex(class1)
% Input:  class1 - data whose power is to be extracted, dim: chann x points
% Output: class_pow - mean power per channel for each 1024-sample epoch,
%         dim: trials x chann
class_pow = [];
for epoch = 1:1:fix(size(class1, 2)/1024)
    l = epoch - 1;
    epoch_i = class1(:, l*1024+1 : 1024*epoch);   % one 4 s epoch at 256 Hz
    pwr = sum((epoch_i.^2)') / length(epoch_i);   % mean power of each channel
    class_pow = [class_pow; pwr];
end
end
  • 47. Functions for LDA

function [w, V1, V2, M1, M2] = ldaini(x1, x2)
% Inputs:
%   x1 - class 1 feature data, dim: trials x channel power
%   x2 - class 2 feature data
% Outputs:
%   w  - projection matrix, dim: 1 x channel
%   V1, V2 - variances of the projected class 1 / class 2 data
%   M1, M2 - means of the projected class 1 / class 2 data
m1 = mean(x1);
m2 = mean(x2);
% within-class scatter of class 1
s1 = [x1(1,:)-m1]' * [x1(1,:)-m1];
for i = 2:1:size(x1,1)
    s1 = s1 + [x1(i,:)-m1]' * [x1(i,:)-m1];
end
s1 = s1 / size(x1,1);
% within-class scatter of class 2
s2 = [x2(1,:)-m2]' * [x2(1,:)-m2];
for i = 2:1:size(x2,1)
    s2 = s2 + [x2(i,:)-m2]' * [x2(i,:)-m2];
end
s2 = s2 / size(x2,1);
Sb = [m1-m2]' * [m1-m2];   % between-class scatter (kept for reference)
Sw = s1 + s2;              % within-class scatter
w = inv(Sw) * [m1-m2]';    % Fisher direction, w = Sw^-1 (m1 - m2)
w = w';
% project the training data to characterise the two classes
px1 = w * x1';                 % projection of class 1
px2 = w * x2';                 % projection of class 2
M1 = sum(px1) / length(px1);   % mean of projected class 1
M2 = sum(px2) / length(px2);   % mean of projected class 2
V1 = var(px1);                 % variance of projected class 1
V2 = var(px2);                 % variance of projected class 2
end
  • 48. function [c] = ldaout(w, V1, V2, M1, M2, x)
% Inputs:
%   x - data point to classify, dim: channel x 1
%   w - projection matrix, dim: 1 x channel
%   V1, V2 - variances of the projected class 1 / class 2 data
%   M1, M2 - means of the projected class 1 / class 2 data
% Output:
%   c - class label: 1 for class 1, 2 for class 2, 3 if undecidable
% project the input into the classifier space
x = w * x;
% pick the cluster with the smallest Mahalanobis distance
dx1 = ((x - M1)^2) / V1;
dx2 = ((x - M2)^2) / V2;
if dx1 < dx2
    disp('class 1')
    c = 1;
elseif dx1 > dx2
    disp('class 2')
    c = 2;
else
    disp('cannot define class 1 or class 2')
    c = 3;
end
end
  • 49. Script to check the accuracy of the offline data analyser

clear; clc;
%% load the pre-processed (filtered) signal
cnt = filtrd_sig;
%% extract class1, class2 and the rest segments
[class1, class2, class0] = Epoch_extract(cnt, mrk);
% extract the W matrix of CSP
% (note: w is reused below, first as the CSP matrix, then as the LDA projection)
[w] = mycsp(class1, class2);
%% apply the W matrix to separate class1 and class2
class1_w12 = w * class1;
class2_w12 = w * class2;
%% find the band power to separate them
[class2_w12_pow] = pow_ex(class2_w12);
[class1_w12_pow] = pow_ex(class1_w12);
%% train the classifier on the two feature classes
x1 = class2_w12_pow;
x2 = class1_w12_pow;
[w, V1, V2, M1, M2] = ldaini(x1, x2);
%% for the case without CSP applied, we only need to take A
A = [1 1 1 1 1 1];