Contents lists available at ScienceDirect

Neuroscience Letters

journal homepage: www.elsevier.com/locate/neulet

Research article

EEG-based BCI system for decoding finger movements within the same hand

Rami Alazrai a,*, Hisham Alwanni b, Mohammad I. Daoud a

a Department of Computer Engineering, School of Electrical Engineering and Information Technology, German Jordanian University, Amman 11180, Jordan
b Faculty of Engineering, University of Freiburg, Freiburg 79098, Germany
ARTICLE INFO

Keywords:
Electroencephalography (EEG)
Brain–computer interfaces (BCIs)
Time-frequency distribution
Finger movements
Support vector machines
ABSTRACT

Decoding the movements of different fingers within the same hand can increase the number of control dimensions of electroencephalography (EEG)-based brain–computer interface (BCI) systems, which in turn enables users of assistive devices to better perform various dexterous tasks. However, decoding the movements of different fingers within the same hand from EEG signals is a challenging task. In this paper, we present a new EEG-based BCI system that decodes the movements of each finger within the same hand by analyzing the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi–Williams distribution (CWD). In particular, the CWD is employed to characterize the time-varying spectral components of the EEG signals and to extract features that capture movement-related information encapsulated within the EEG signals. The extracted CWD-based features are used to build a two-layer classification framework that decodes finger movements within the same hand. The performance of the proposed system is evaluated using EEG signals recorded from eighteen healthy subjects while performing twelve finger movements with their right hands. The results demonstrate the efficacy of the proposed system in decoding finger movements within the same hand of each subject.
1. Introduction

A brain–computer interface (BCI) is a system that decodes brain activities to provide users with alternative ways to control various computer-based applications and assistive devices. Among the various neuroimaging modalities, EEG is the most commonly used for designing BCI systems [1].

Over the past decade, researchers have developed EEG-based BCI systems to decode actual and imagined motor tasks of large body parts [2,3], including the hands, feet, and tongue, in an attempt to control various assistive devices, such as wheelchairs [4], computer-based applications [5], and prosthetic devices [6]. Nonetheless, the vast majority of existing EEG-based BCI systems produce a limited number of control signals, usually fewer than five, which reduces their ability to control more complicated assistive devices, such as prosthetic and robotic hands, that require a large number of control signals to perform various dexterous tasks [7].
Recently, a few researchers have started to investigate the possibility of decoding the movements performed by fine body parts, such as the movements of each finger within the same hand [7,8], wrist movements of the same hand [9,10], and grasp-related movements performed by the same hand [11,12], in order to increase the number of control dimensions of EEG-based BCI systems. In fact, decoding the movements of each finger within the same hand from EEG signals is more difficult than decoding the movements of different large body parts, discriminating the movements of a specific finger in the left hand from those of the matching finger in the right hand, or discriminating finger movements from wrist movements within the same hand [7,9,12]. This is because finger movements within the same hand activate relatively small and close regions in the sensorimotor cortex within the same hemisphere of the brain [7,9,13]. Therefore, using a neuroimaging modality with a relatively low spatial resolution, such as EEG, to decode the movements of each finger within the same hand is considered challenging, given the variety of brain regions that are activated during the movements of individual fingers [7]. In addition, the nonstationary nature of EEG signals implies that their spectral components vary as a function of time, so analyzing the EEG signals in the time domain or the frequency domain alone might not capture their spectral characteristics. The nonstationary nature of EEG signals therefore calls for representing them in a joint time-frequency domain that can describe the spectral variations of the signals over time [14].
https://doi.org/10.1016/j.neulet.2018.12.045
Received 29 September 2018; Received in revised form 28 December 2018; Accepted 29 December 2018
* Corresponding author. E-mail address: [email protected] (R. Alazrai).
Neuroscience Letters 698 (2019) 113–120. Available online 08 January 2019.
0304-3940/ © 2019 Elsevier B.V. All rights reserved.
In this paper, we hypothesize that analyzing the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi–Williams distribution (CWD), can enable accurate decoding of finger movements within the same hand. In particular, the CWD is employed to characterize the time-varying spectral components of the EEG signals and to extract features that capture movement-related information encapsulated within the EEG signals. The extracted CWD-based features are used to build a two-layer classification framework that can simultaneously identify the moving finger within the same hand and decode the movement performed by that finger.
2. Materials and methods

2.1. Subjects

Eighteen healthy subjects (6 females and 12 males, with an average ± standard deviation age of 21.2 ± 3.0 years) volunteered to participate in this study. EEG signals were recorded from each subject while performing twelve finger movements with her/his right hand: four thumb-related movements, namely the thumb adduction, thumb abduction, thumb flexion, and thumb extension movements, and the flexion and extension movements of the index, middle, ring, and little fingers. Before participating in the experiment, each subject received a thorough explanation of the experimental procedure and signed a consent form. The experimental procedure of this study was approved by the Research Ethics Committee at the German Jordanian University and was conducted in accordance with the Declaration of Helsinki.
2.2. Experimental protocol

At the beginning of the experiment, each subject was asked to sit on a chair and to rest her/his arms on a table located in front of her/him. A computer screen was placed on the table at a distance of approximately 60 cm from the subject and was used to display visual cues. Each visual cue instructed the subject to perform a full flexion movement followed by a full extension movement with a specific finger, or a full adduction movement followed by a full abduction movement with the thumb.

For each trial, a visual cue was displayed for three seconds, followed by a black screen that prompted the subject to start performing the sequence of flexion and extension movements with a specific finger or the adduction and abduction movements with the thumb. During the recording of each trial, the experimenter carefully followed the movements of the subject's fingers and pressed the button of an event marker to mark transitions from flexion to extension and from adduction to abduction. Five trials were recorded per finger movement for each subject. The average ± standard deviation durations of the flexion and extension movements, computed over the five fingers and all subjects, are 4.1 ± 0.4 s and 3.6 ± 0.1 s, respectively, while the average ± standard deviation durations of the thumb adduction and thumb abduction movements, computed over all subjects, are 4.7 ± 0.15 s and 3.9 ± 0.14 s, respectively.
2.3. Data acquisition and preprocessing

The BioSemi ActiveTwo EEG system (https://www.biosemi.com) was used to record the EEG signals with 11 Ag/AgCl electrodes at a sampling rate of 2048 Hz. The electrodes were arranged on the scalp according to the international 10–20 electrode placement system at the following locations: F3, F4, Fz, C3, C4, Cz, P3, P4, Pz, T7, and T8, referenced to the common mode sense (CMS)/driven right leg (DRL) electrodes at the C1 and C2 locations. The recorded EEG signals were downsampled to 256 Hz and filtered with a bandpass filter with a bandwidth of 0.5–35 Hz. Moreover, the automatic artifact removal (AAR) toolbox [15] was employed to reduce muscle and electrooculography (EOG) artifacts in the filtered EEG signals.
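The downsampling and band-pass filtering steps map directly onto standard signal-processing routines. The following Python sketch is a minimal illustration, assuming an (n_channels, n_samples) array layout and a zero-phase fourth-order Butterworth filter (our assumptions; the paper does not specify the filter implementation); it does not reproduce the AAR artifact-removal step.

```python
import numpy as np
from scipy import signal

def preprocess(eeg, fs_in=2048, fs_out=256, band=(0.5, 35.0)):
    """Downsample and band-pass filter raw EEG of shape (n_channels, n_samples)."""
    # Polyphase resampling from 2048 Hz to 256 Hz (integer factor 8).
    eeg = signal.resample_poly(eeg, up=1, down=fs_in // fs_out, axis=1)
    # Zero-phase Butterworth band-pass with a 0.5-35 Hz passband.
    sos = signal.butter(4, band, btype="bandpass", fs=fs_out, output="sos")
    return signal.sosfiltfilt(sos, eeg, axis=1)
```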
2.4. Time-frequency representation and feature extraction

In this study, we propose to analyze the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi–Williams distribution (CWD) [16]. The CWD can be viewed as a two-dimensional (2D) transformation that maps the original time-domain EEG signals into a joint time-frequency domain with excellent resolution in both time and frequency [14,16]. Hence, using the CWD to analyze the EEG signals enables the construction of a time-frequency representation (TFR) that quantifies the distribution of the energy encapsulated in the EEG signals over the time and frequency domains [14]. Specifically, to compute the CWD, we employed a sliding window that divides the EEG signal of each electrode into a set of overlapping segments, such that the size of each segment is 256 samples and the overlap between any two consecutive segments is 128 samples. The size and overlap of the sliding window were selected experimentally, as described in Section 3.3. The CWD of an EEG segment, denoted as s(t), is then computed as follows [14]:
1. Compute the analytic signal of s(t), denoted as x(t):

x(t) = s(t) + j\,\mathcal{H}\{s(t)\},   (1)

where \mathcal{H}\{\cdot\} denotes the Hilbert transform [17].
2. Compute the CWD of x(t), denoted as \rho_x(t, f), as follows [16,18]:

\rho_x(t, f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \chi_x(\mu, \nu)\, \kappa(\mu, \nu)\, e^{-j 2\pi (f\nu + t\mu)}\, d\nu\, d\mu,   (2)

where \chi_x(\mu, \nu) is the ambiguity function of x(t) and \kappa(\mu, \nu) is a time-frequency smoothing kernel. In particular, \chi_x(\mu, \nu) is the Fourier transform of the instantaneous auto-correlation function of x(t) and can be computed as follows [16,18]:

\chi_x(\mu, \nu) = \int_{-\infty}^{\infty} x\!\left(t + \frac{\nu}{2}\right) x^{*}\!\left(t - \frac{\nu}{2}\right) e^{j 2\pi \mu t}\, dt,   (3)
where x^{*}(\cdot) is the complex conjugate of x(\cdot). The time-frequency smoothing kernel, \kappa(\mu, \nu), can be expressed as follows [16]:

\kappa(\mu, \nu) = \exp\!\left(-\frac{\mu^2 \nu^2}{\alpha^2}\right),   (4)

where \alpha > 0 is a smoothing parameter that was experimentally selected to be 0.5.
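To make the procedure above concrete, the following Python sketch computes a discrete CWD for a single EEG segment. It is an illustrative implementation, not the authors' code: the integer-lag approximation of \nu/2, the FFT sign conventions, and the absence of normalization are our simplifications.

```python
import numpy as np
from scipy.signal import hilbert

def choi_williams(s, alpha=0.5):
    """Illustrative discrete Choi-Williams distribution of a 1-D segment s."""
    x = hilbert(s)                        # analytic signal, Eq. (1)
    N = len(x)
    lags = np.arange(-(N // 2), N // 2)   # lag axis (nu), doubled on the integer grid
    # Instantaneous autocorrelation K[t, nu] ~ x(t + nu/2) x*(t - nu/2).
    K = np.zeros((N, N), dtype=complex)
    for j, nu in enumerate(lags):
        for t in range(N):
            t1, t2 = t + nu, t - nu
            if 0 <= t1 < N and 0 <= t2 < N:
                K[t, j] = x[t1] * np.conj(x[t2])
    chi = np.fft.fft(K, axis=0)           # ambiguity function chi(mu, nu), Eq. (3)
    mu = np.fft.fftfreq(N)[:, None]
    nu = (lags / N)[None, :]
    kappa = np.exp(-(mu * nu) ** 2 / alpha ** 2)   # smoothing kernel, Eq. (4)
    # Transform the smoothed ambiguity function back to the (t, f) plane, Eq. (2).
    rho = np.fft.fft(np.fft.ifft(chi * kappa, axis=0), axis=1)
    return np.real(rho)
```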
The dimensionality of the CWD-based TFR computed for each EEG segment is H × L, where H = 256 is the number of time samples within an EEG segment and L = 512 is the number of frequency samples. Such high dimensionality can increase the complexity of the classification task. In order to reduce the dimensionality of the obtained CWD-based TFR, we propose to extend two frequency-domain features, namely the normalized Renyi entropy and the energy concentration, to the joint time-frequency domain [11,14,19]. These two extended features quantify the constructed CWD-based TFR of each EEG segment. In particular, the normalized Renyi entropy, F1, of the CWD quantifies the regularity of the distribution of the energy encapsulated within a specific EEG segment. The F1 feature can be computed as follows [11,14,19]:
F_1 = -\frac{1}{2} \log_2 \left( \sum_{t=1}^{H} \sum_{f=1}^{L} \left( \frac{\rho_x(t, f)}{\sum_{t=1}^{H} \sum_{f=1}^{L} \rho_x(t, f)} \right)^{3} \right).   (5)
On the other hand, the energy concentration, F2, of the CWD provides a measure of the spread of the energy encapsulated within a specific EEG segment. The F2 feature can be obtained as follows [11,14,19]:
F_2 = \left( \sum_{t=1}^{H} \sum_{f=1}^{L} \left| \rho_x(t, f) \right|^{1/2} \right)^{2}.   (6)
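Given a TFR matrix, both features reduce to a few array operations. The sketch below is a minimal illustration of Eqs. (5) and (6); normalizing by the absolute values of the CWD (which can be locally negative) is our assumption.

```python
import numpy as np

def cwd_features(rho):
    """F1 and F2 of a CWD-based TFR rho of shape (H, L); illustrative only."""
    p = np.abs(rho) / np.abs(rho).sum()   # normalized energy distribution
    f1 = -0.5 * np.log2(np.sum(p ** 3))   # normalized Renyi entropy, Eq. (5)
    f2 = np.sum(np.abs(rho) ** 0.5) ** 2  # energy concentration, Eq. (6)
    return f1, f2
```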
Fig. 1 provides a graphical illustration of the feature extraction process. At each position of the sliding window, the features F1 and F2 are computed from the constructed CWD-based TFR of each EEG segment. The total number of EEG segments at each window position is 11, where each segment represents the EEG signal of a particular electrode within the current window position. The features extracted from the EEG segments at a particular window position are grouped to form a feature vector. Therefore, each feature vector comprises 22 features.
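The segmentation and grouping described above can be sketched as follows, reusing the hypothetical choi_williams and cwd_features helpers from the previous sketches:

```python
import numpy as np

def feature_vectors(eeg, win=256, step=128):
    """Build one 22-D feature vector per window position from an
    (11, n_samples) EEG array: 11 electrodes x 2 CWD features."""
    n_ch, n = eeg.shape
    vecs = []
    for start in range(0, n - win + 1, step):
        feats = []
        for ch in range(n_ch):
            rho = choi_williams(eeg[ch, start:start + win])
            feats.extend(cwd_features(rho))
        vecs.append(feats)
    return np.asarray(vecs)
```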
2.5. Classification framework

In this work, we propose a two-layer classification framework (2LCF) to simultaneously identify the moving finger within the same hand and decode the movement performed by that finger. The proposed 2LCF converts the original complex classification task (i.e., classifying a feature vector into one of the twelve finger movements described in Section 2.1) into a sequence of two simpler classification tasks, one per layer. The first classification layer comprises one classifier, denoted as C1,1, that analyzes each input feature vector to identify the moving finger within the same hand, without specifying the movement performed by that finger. Explicitly, the C1,1 classifier assigns each input feature vector to one of five movement classes: thumb movement (M1), index movement (M2), middle movement (M3), ring movement (M4), and little movement (M5). In this study, we refer to the movements M1, M2, M3, M4, and M5 as movements of different fingers within the same hand. The C1,1 classifier is implemented using a multi-class support vector machine (SVM) classifier with a radial basis function (RBF) kernel [20].

After that, the input feature vector is passed to the second classification layer, which consists of five SVM classifiers with RBF kernels. Each classifier in the second layer is associated with a particular finger and is designed to decode the movements performed by that finger. Specifically, the first classifier, denoted as C2,1, is a multi-class SVM classifier that classifies an input feature vector identified at the first layer as class M1 into one of four thumb-related movements: thumb adduction (M1,1), thumb abduction (M1,2), thumb flexion (M1,3), and thumb extension (M1,4). The remaining four classifiers, denoted as C2,2, C2,3, C2,4, and C2,5, are binary SVM classifiers that classify an input feature vector identified at the first layer as class M2, M3, M4, or M5 into the flexion or extension movement of the corresponding finger: index flexion (M2,1) and index extension (M2,2) for C2,2; middle flexion (M3,1) and middle extension (M3,2) for C2,3; ring flexion (M4,1) and ring extension (M4,2) for C2,4; and little flexion (M5,1) and little extension (M5,2) for C2,5. In this study, we refer to the movements M1,1, M1,2, M1,3, M1,4, M2,1, M2,2, M3,1, M3,2, M4,1, M4,2, M5,1, and M5,2 as movements of the same finger. A sketch of the framework follows.
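The following sketch illustrates the two-layer structure using scikit-learn's SVC (whose multi-class mode is one-against-one, as in the paper's LIBSVM setup); the class API, label encoding, and hyperparameter defaults are our assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

class TwoLayerClassifier:
    """Minimal 2LCF sketch: C1,1 identifies the finger (M1..M5), then a
    per-finger classifier (C2,1..C2,5) decodes that finger's movement."""

    def fit(self, X, finger, movement):
        self.c11 = SVC(kernel="rbf").fit(X, finger)          # first layer
        self.c2 = {f: SVC(kernel="rbf").fit(X[finger == f],  # second layer
                                            movement[finger == f])
                   for f in np.unique(finger)}
        return self

    def predict(self, X):
        f_hat = self.c11.predict(X)                          # moving finger
        m_hat = np.array([self.c2[f].predict(x[None, :])[0]  # its movement
                          for f, x in zip(f_hat, X)])
        return f_hat, m_hat
```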
Fig. 1. Graphical illustration of the feature extraction procedure employed at each window position.
Fig. 2 provides a structure diagram of the proposed 2LCF.
2.6. Performance evaluation procedures and metrics

For each subject, we construct the 2LCF by utilizing a ten-fold cross-validation procedure to train and test the SVM classifiers within the first and second layers of the proposed 2LCF [11,19]. The ten-fold cross-validation procedure is repeated ten times, and the average classification performance for each subject is computed over the ten repetitions. The multi-class SVM classifiers in our proposed 2LCF, namely the C1,1 and C2,1 classifiers, are implemented using the one-against-one scheme [20]. In addition, the regularization parameter, C, and the RBF kernel parameter, γ, of each SVM classifier are tuned by performing a grid search to find the values of C and γ that minimize the classification error [21].
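As an illustration of this evaluation protocol, the sketch below runs ten repetitions of ten-fold cross-validation with a nested grid search over C and γ; the grid values and the placeholder data are our assumptions, since the paper does not report the search ranges.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 22))   # placeholder 22-D feature vectors
y = rng.integers(0, 5, 200)          # placeholder finger labels (M1..M5)

# Grid search for C and gamma inside each training fold (values assumed).
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]},
                    cv=5)

# Ten repetitions of ten-fold cross-validation with different shuffles.
scores = [cross_val_score(grid, X, y,
                          cv=StratifiedKFold(10, shuffle=True, random_state=r)).mean()
          for r in range(10)]
print(f"mean CA over repetitions: {100 * np.mean(scores):.1f}%")
```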
To quantify the classification performance of each classifier in the constructed 2LCF, we employed two standard evaluation metrics, namely the classification accuracy (CA) and the F1-score, which are computed as follows [22]:
CA = \frac{tp + tn}{tp + tn + fp + fn} \times 100\%,   (7)

F_1\text{-score} = 2 \times \frac{P \times R}{P + R} \times 100\%,   (8)
where tp, tn, fp, and fn represent the number of true positive, true negative, false positive, and false negative cases, respectively. In addition, P = tp/(tp + fp) and R = tp/(tp + fn) represent the precision and recall, respectively. The F1-score is the harmonic mean of precision and recall, and thus takes both the false positive and false negative rates into consideration.
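Both metrics follow directly from the confusion counts; a minimal sketch with a made-up worked example:

```python
def classification_accuracy(tp, tn, fp, fn):
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)   # Eq. (7)

def f1_score_pct(tp, fp, fn):
    p = tp / (tp + fp)                               # precision
    r = tp / (tp + fn)                               # recall
    return 100.0 * 2 * p * r / (p + r)               # Eq. (8)

# Example with made-up counts: tp=80, tn=90, fp=10, fn=20
# -> CA = 85.0%, F1-score ~ 84.2%.
print(classification_accuracy(80, 90, 10, 20), f1_score_pct(80, 10, 20))
```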
3. Experimental results

3.1. Results of the first classification layer

In this section, we present the classification results of the first classification layer, namely the C1,1 classifier. Table 1 shows the F1-scores obtained for the M1, M2, M3, M4, and M5 classes for each of the eighteen subjects (S1 to S18). The average F1-score values obtained for the M1, M2, M3, M4, and M5 classes are 86.7%, 87.1%, 84.1%, 79.0%, and 86.5%, respectively.
Fig. 3 shows the CA values and the corresponding standard deviations obtained by the first classification layer for each subject. The results presented in Fig. 3 show that the mean ± standard deviation CA of the first classification layer, which discriminates between the M1, M2, M3, M4, and M5 movement classes, computed over all eighteen subjects, is 85.85 ± 1.1%. Moreover, the mean CA values computed for the eighteen subjects range between 74.1%, obtained for subject 10 (S10), and 93.1%, obtained for subject 16 (S16).
In addition, we compare the CA values of the C1,1 classifier computed for each subject with the random classification rate (RCR), which is defined as the reciprocal of the number of classes and has a value of 20%, by performing t-tests at a significance level of 0.01. The p-values computed for all eighteen subjects were less than 0.01, which indicates that the CA obtained for the C1,1 classifier of each subject is significantly higher than the RCR (the red dashed line in Fig. 3).
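A one-sample t-test of per-repetition CA values against the RCR can be expressed as follows; the CA values here are made-up for illustration.

```python
from scipy import stats

ca_values = [85.1, 86.3, 84.7, 85.9, 86.0,   # hypothetical CAs from the
             85.2, 84.9, 86.5, 85.6, 85.3]   # ten cross-validation repetitions

# Test whether the mean CA differs from the 20% RCR at alpha = 0.01,
# and check that it lies above (not below) the RCR.
t, p = stats.ttest_1samp(ca_values, popmean=20.0)
significant = (p < 0.01) and (t > 0)
```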
3.2. Results of the second classification layer

In this section, we present the classification results of each classifier in the second classification layer, namely the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers, computed for each of the eighteen subjects. Table 2 presents the F1-scores computed for the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers for each subject, along with the mean F1-scores computed for each of the five classifiers over the eighteen subjects. For the C2,1 classifier, the average F1-score values obtained for decoding the four thumb movements, namely the M1,1, M1,2, M1,3, and M1,4 movements, are 67.3%, 53.5%, 67.4%, and 56.1%, respectively. For the C2,2 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the index finger, namely the M2,1 and M2,2 movements, are 72.1% and 67.5%, respectively. For the C2,3 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the middle finger, namely the M3,1 and M3,2 movements, are 76.7% and 67.2%, respectively. For the C2,4 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the ring finger, namely the M4,1 and M4,2 movements, are 74.9% and 59.5%, respectively. Finally, for the C2,5 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the little finger, namely the M5,1 and M5,2 movements, are 70.7% and 63.0%, respectively.

Fig. 2. Structure diagram of the proposed 2LCF.
Fig. 4 presents the CAs and corresponding standard deviations obtained for the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers for each subject, along with the average CA values computed for each of the five classifiers over the eighteen subjects. The results presented in Fig. 4 indicate that the C2,1 classifier was able to classify the M1,1, M1,2, M1,3, and M1,4 movements with an average ± standard deviation CA of 64.6 ± 3.6%. Moreover, the C2,2 classifier was able to classify the M2,1 and M2,2 movements with an average ± standard deviation CA of 70.4 ± 5.5%. For the C2,3 classifier, the average ± standard deviation CA obtained in discriminating between the M3,1 and M3,2 movements was 73.4 ± 5.5%. In addition, the C2,4 classifier was able to classify the M4,1 and M4,2 movements with an average ± standard deviation CA of 70.4 ± 5.3%. Finally, for the C2,5 classifier, the average ± standard deviation CA achieved in discriminating between the M5,1 and M5,2 movements was 70.2 ± 3.9%.
In addition, for each subject, we compare the CA values computed for each classifier in the second classification layer, namely the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers, with the RCR associated with each of these classifiers by performing t-tests at a significance level of 0.01. In particular, the RCR of the C2,1 classifier, which is shown as a blue dashed line in Fig. 4, is equal to 25%, while the RCR associated with each of the other four classifiers in the second classification layer, which is shown as a black dashed line in Fig. 4, is equal to 50%. For each classifier in the second classification layer, the p-values computed for all subjects were less than 0.01, which indicates that the CA values computed for each subject and each classifier are significantly higher than the corresponding RCRs.
3.3. Analysis of the effect of the sliding window size

Table 3 provides the average CA and F1-score values computed for the classifiers in our proposed 2LCF using different sizes and overlaps of the sliding window. The average CA and F1-score values presented in Table 3 are computed across all subjects using the cross-validation procedure described in Section 2.6. These results indicate that the best average CA and F1-score values were obtained when the size and overlap of the sliding window are 256 and 128 samples, respectively.
3.4. Comparison with the traditional multi-class SVM classifier (TMCC)

In this section, we compare the classification performance of our proposed 2LCF with that of the TMCC. The TMCC consists of one multi-class SVM classifier with an RBF kernel that classifies feature vectors into one of the twelve finger movements described in Section 2.1. The performance of the TMCC was evaluated using the ten-fold cross-validation procedure described in Section 2.6. Table 4 presents the average CA and F1-score values computed over the ten repetitions of the cross-validation procedure across the twelve finger movements for each subject. The average CA and F1-score values computed for the TMCC over all subjects are 40.5% and 39.1%, respectively. In contrast, the average CA and F1-score values computed for the second classification layer of our proposed 2LCF, which are presented in Fig. 4, are 69.8% and 67.4%, respectively. These results indicate that our proposed 2LCF significantly outperforms the TMCC.
Table 1
The F1-scores (%) of the C1,1 classifier computed for each subject.

Movement  S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12   S13   S14   S15   S16   S17   S18   Average
M1        83.7  76.5  92.1  91.2  84.3  90.2  88.6  87.6  92.0  81.5  86.5  83.0  78.0  90.7  91.4  90.5  86.4  86.5  86.7
M2        89.8  77.8  83.5  89.6  78.9  90.5  94.6  86.8  89.2  78.4  82.3  76.4  89.3  94.3  87.7  94.9  97.5  87.5  87.2
M3        81.2  85.3  87.8  83.5  77.8  69.8  91.2  88.9  88.2  72.6  80.3  81.9  79.6  92.9  77.5  93.5  91.1  91.2  84.1
M4        58.9  77.3  89.1  85.2  65.5  74.3  79.9  94.2  96.5  71.3  90.5  67.9  80.9  74.4  86.6  72.2  91.8  65.5  79.0
M5        84.7  80.1  88.0  91.6  91.2  79.7  91.9  89.3  88.9  68.1  90.4  84.2  83.5  94.6  85.3  96.7  85.7  83.4  86.5
Fig. 3. The CA values of the C1,1 classifier computed for each subject. The red vertical lines represent the standard deviations of the CA values, and the red dashed line represents the RCR. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)
4. Discussion

The main focus of the current study is to investigate the capability of using the CWD-based features along with the proposed 2LCF to analyze the EEG signals and decode finger movements within the same hand. The results obtained for the first and second classification layers demonstrate the capability of our proposed approach to successfully decode twelve finger movements within the same hand for eighteen able-bodied subjects.
4.1. Movements of different fingers within the same hand

The results obtained for the first classification layer, which are provided in Table 1 and Fig. 3, indicate that the extracted CWD-based features were able to achieve accurate identification of the moving fingers within the same hand. In particular, Table 1 indicates that the average F1-scores obtained for the identification of the moving fingers are above 79%, where the lowest F1-score of 79.0% was obtained for the ring finger. Moreover, the results of the first classification layer suggest that the EEG signals encapsulate sufficient information to decode the movements of fine body parts, such as finger movements, which complies with the findings reported in previous studies, such as [7,8].
Table 2
The F1-scores (%) of the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers within the second classification layer computed for each subject and across all subjects.

          C2,1                       C2,2          C2,3          C2,4          C2,5
Subject   M1,1  M1,2  M1,3  M1,4    M2,1  M2,2    M3,1  M3,2    M4,1  M4,2    M5,1  M5,2
S1        70.9  41.4  62.6  54.4    58.7  71.8    71.4  66.2    81.2  83.9    63.6  68.7
S2        79.0  66.7  63.5  48.6    69.8  58.9    72.1  66.5    67.9  41.1    74.2  52.4
S3        47.8  57.9  66.4  36.2    74.7  54.0    85.9  63.9    70.3  60.5    81.5  56.7
S4        79.8  55.4  69.5  46.5    76.3  62.3    69.2  62.7    82.3  59.6    63.0  57.4
S5        72.2  67.9  64.8  69.0    62.6  62.6    88.1  69.6    72.0  55.6    69.0  54.0
S6        65.1  51.1  67.4  59.4    80.1  76.5    83.5  67.0    80.8  75.8    80.1  55.6
S7        70.5  72.3  54.9  63.8    79.5  70.8    67.4  54.4    67.6  51.5    65.6  69.6
S8        47.2  40.3  76.0  52.6    75.2  64.5    87.4  62.4    81.1  43.3    71.5  60.4
S9        63.5  46.2  66.3  69.0    53.9  66.0    56.6  70.6    50.9  56.1    73.9  82.8
S10       64.3  62.6  60.0  46.1    73.5  77.1    78.5  65.3    75.1  56.5    78.0  53.3
S11       74.8  34.8  78.1  35.5    80.3  64.4    85.3  76.7    82.3  76.9    80.7  76.7
S12       54.7  33.3  61.9  53.2    84.9  70.7    80.9  69.7    76.1  57.9    64.5  68.5
S13       64.2  65.0  64.8  56.2    62.5  65.0    51.7  64.8    67.5  54.4    75.4  58.1
S14       74.5  41.1  77.6  65.9    71.4  55.2    81.2  58.9    73.1  50.6    78.9  65.0
S15       85.6  62.2  76.7  55.8    73.9  59.0    78.3  62.4    67.9  39.0    68.7  52.6
S16       67.8  61.2  58.9  84.1    61.3  79.4    81.2  85.3    84.1  89.6    51.7  79.2
S17       74.7  45.7  76.5  53.0    85.9  76.9    84.7  75.6    85.7  60.6    76.9  53.0
S18       55.0  58.8  67.1  59.5    72.3  79.7    77.0  68.0    82.8  58.4    56.4  69.4
Average   67.3  53.6  67.4  56.0    72.0  67.5    76.7  67.2    74.9  59.5    70.8  63.0
Fig. 4. The CA values of the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers computed for each subject. The red vertical lines represent the standard deviations of the CAs. The blue dashed line represents the RCR of the C2,1 classifier, while the black dashed line represents the RCR of the C2,2, C2,3, C2,4, and C2,5 classifiers. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of the article.)
Table 3
The average CA (%) and F1-scores (%) computed for the classifiers of our proposed 2LCF using different sizes of the sliding window. (CA = classification accuracy; F1 = F1-score.)

Window  Overlap  C1,1          C2,1          C2,2          C2,3          C2,4          C2,5
size    size     CA    F1      CA    F1      CA    F1      CA    F1      CA    F1      CA    F1
64      32       74.7  74.1    60.5  56.4    65.9  63.7    64.4  62.2    65.8  63.2    67.6  64.7
128     64       80.5  80.0    63.6  59.3    69.6  67.3    67.1  65.2    68.4  66.9    68.5  64.9
256     128      85.8  84.7    64.6  61.1    70.4  69.8    73.4  71.9    70.4  67.2    70.2  66.9
512     256      83.2  81.9    62.9  46.8    68.1  59.9    71.1  56.9    69.8  59.3    69.6  52.9
1024    512      70.3  66.4    49.0  26.2    63.7  45.7    59.9  44.4    73.1  43.3    67.2  37.6
4.2. Movements of the same finger within the same hand

The results obtained for the second classification layer, which are provided in Table 2 and Fig. 4, indicate that discriminating between the movements performed by a specific finger is more challenging than decoding the movements performed by different fingers within the same hand. For example, the first classification layer was able to identify that the moving finger is the thumb with an average F1-score of 86.7%, while the second classification layer was able to discriminate between the four thumb-related movements with an average F1-score of 61.1%. Moreover, the results presented in Fig. 4 indicate that increasing the number of movements performed by the same finger can drastically decrease the ability to differentiate between them. In particular, Fig. 4 shows that the mean CA of the C2,1 classifier, which discriminates between four thumb-related movements, is significantly lower than the CAs obtained for each of the other four classifiers in the second classification layer, which discriminate between only two movements of each finger.

This reduction in classification performance can be attributed to the following factors: (1) Finger movements within the same hand activate relatively close regions in the sensorimotor cortex within the same hemisphere of the brain [7,9,13]. (2) The limited spatial resolution and low signal-to-noise ratio (SNR) of the EEG modality reduce the capability of capturing brain activities within the activated regions of the brain during finger movements [7]. (3) As a consequence of the two factors mentioned above, increasing the number of different movements performed by each finger can significantly increase the difficulty of dividing the feature space into separable decision regions, where each region comprises the feature vectors that belong to a particular movement. In fact, several previous EEG-related studies have indicated that the CAs obtained for multi-class classification problems, which involve more than two classes, were significantly lower than the CAs obtained for binary classification problems [7,19,23].
4.3. Comparison with other approaches

Recently, a few studies have investigated the possibility of decoding finger movements within the same hand based on EEG signals. For example, Liao et al. [7] recorded EEG signals using 128 electrodes from eleven healthy subjects while they performed flexion and extension movements with each of the fingers of their right hands. The recorded EEG signals were analyzed using principal component analysis and a set of power spectrum-based features. For each subject, the extracted features were used to build ten binary SVM classifiers, where each classifier is associated with a pair of fingers. The average CA computed over all pairs of fingers across all subjects was 77.1%. In another study, Quandt et al. [8] recorded EEG signals using 32 electrodes from thirteen healthy subjects while they pressed a button with the thumb, index, middle, or little finger. For each subject, the recorded EEG signals were used to construct four binary SVM classifiers, one per finger, where the first, second, third, and fourth classifiers identify whether the moving finger is the thumb, index, middle, or little finger, respectively. The average CA computed over the four fingers across all subjects was 43.5%.
The current study provides five improvements over the approaches presented in [7,8]. First, the first classification layer of our proposed 2LCF utilizes a multi-class SVM classifier to discriminate between the movements of all fingers within the same hand, rather than binary SVM classifiers that discriminate between the movements of each pair of fingers as in [7,8]. In this regard, Liao et al. [7] indicated that discriminating between the movements of individual fingers within the same hand using a multi-class classifier is more difficult than discriminating between the movements of each pair of fingers. In fact, the use of multi-class classification approaches to decode finger movements within the same hand can facilitate the development of EEG-based BCI systems with more control dimensions, which can enhance the functionality of various dexterous assistive devices. Second, the current study considered both identifying the movements of different fingers within the same hand and decoding the movement of the moving finger, while the approaches presented in [7,8] considered only the identification of the moving finger. Hence, our proposed approach can increase the number of control dimensions of the developed EEG-based BCI systems. Third, the results reported in the current study are based on eleven EEG electrodes, compared with the 128 and 32 electrodes used in [7] and [8], respectively. This demonstrates the capability of the extracted CWD-based features to capture the movement-related information encapsulated within the EEG signals and achieve high CA values. Fourth, to the best of our knowledge, this is the first study that investigates the possibility of decoding twelve different movements performed by different fingers within the same hand. Such a large number of movements makes the classification task more challenging compared with the approaches in [7,8], which considered decoding only five and four movements, respectively, of individual fingers within the same hand. Fifth, the classification architectures employed in [7,8] are based on constructing a set of binary SVM classifiers, where each classifier produces one decision, and these decisions are not combined to produce one final decision that specifies the moving finger within the same hand. Such an architecture is highly susceptible to false positive errors, in which a feature vector can be misclassified by multiple binary classifiers and assigned to incorrect fingers. In contrast, our proposed 2LCF produces one decision at the first classification layer that identifies the moving finger, and one decision at the second classification layer that specifies the movement performed by the identified finger.
5. Conclusions

In the present study, we have demonstrated the possibility of decoding the movements performed by each finger within the same hand using EEG signals. The results presented in this study suggest the feasibility of using the CWD to analyze the EEG signals and extract quantitative features that are capable of discriminating between different finger movements. In the future, we plan to investigate the following research directions: (1) studying the problem of decoding simultaneous movements performed by multiple fingers within the same hand to improve the control mechanisms of prosthetic hands; (2) extending our experiments by including subjects with a wider age range and a balanced gender distribution; and (3) studying the potential of using a larger number of EEG channels that achieve high-resolution coverage of the motor cortex to enhance the accuracy of classifying the movements performed by the same finger.
Table 4
The average CA (%) and F1-score (%) values computed for each subject using the TMCC.

Metric    S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12   S13   S14   S15   S16   S17   S18   Average
CA        42.8  35.7  41.3  39.6  41.8  36.0  44.2  43.3  41.4  36.2  43.9  37.6  34.4  42.5  45.8  44.8  39.0  37.8  40.5
F1-score  44.3  36.0  39.2  39.6  41.8  36.4  44.4  43.4  37.4  35.3  36.6  39.2  35.5  40.2  44.4  38.0  35.3  37.0  39.1
Acknowledgements

This work is supported by the Scientific Research Support Fund of Jordan (grant no. ENG/1/9/2015).
References

[1] L.F. Nicolas-Alonso, J. Gomez-Gil, Brain–computer interfaces, a review, Sensors 12 (2) (2012) 1211–1279.
[2] G. Pfurtscheller, C. Neuper, Motor imagery and direct brain–computer communication, Proc. IEEE 89 (7) (2001) 1123–1134.
[3] A.S. Royer, A.J. Doud, M.L. Rose, B. He, EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies, IEEE Trans. Neural Syst. Rehabil. Eng. 18 (6) (2010) 581–589.
[4] E.W. Sellers, T.M. Vaughan, J.R. Wolpaw, A brain–computer interface for long-term independent home use, Amyotroph. Lateral Scler. 11 (5) (2010) 449–455.
[5] R. Scherer, G. Muller, C. Neuper, B. Graimann, G. Pfurtscheller, An asynchronously controlled EEG-based virtual keyboard: improvement of the spelling rate, IEEE Trans. Biomed. Eng. 51 (6) (2004) 979–984.
[6] G. Pfurtscheller, C. Guger, G. Müller, G. Krausz, C. Neuper, Brain oscillations control hand orthosis in a tetraplegic, Neurosci. Lett. 292 (3) (2000) 211–214.
[7] K. Liao, R. Xiao, J. Gonzalez, L. Ding, Decoding individual finger movements from one hand using human EEG signals, PLOS ONE 9 (1) (2014) e85192.
[8] F. Quandt, C. Reichert, H. Hinrichs, H.-J. Heinze, R. Knight, J.W. Rieger, Single trial discrimination of individual finger movements on one hand: a combined MEG and EEG study, Neuroimage 59 (4) (2012) 3316–3324.
[9] B.J. Edelman, B. Baxter, B. He, EEG source imaging enhances the decoding of complex right-hand motor imagery tasks, IEEE Trans. Biomed. Eng. 63 (1) (2016) 4–14.
[10] A. Vuckovic, F. Sepulveda, Delta band contribution in cue based single trial classification of real and imaginary wrist movements, Med. Biol. Eng. Comput. 46 (6) (2008) 529–539.
[11] R. Alazrai, H. Alwanni, Y. Baslan, N. Alnuman, M.I. Daoud, EEG-based brain–computer interface for decoding motor imagery tasks within the same hand using Choi–Williams time-frequency distribution, Sensors 17 (9) (2017) 1937.
[12] X. Yong, C. Menon, EEG classification of different imaginary movements within the same limb, PLOS ONE 10 (4) (2015) e0121896.
[13] G. Pfurtscheller, F.L. Da Silva, Event-related EEG/MEG synchronization and desynchronization: basic principles, Clin. Neurophysiol. 110 (11) (1999) 1842–1857.
[14] B. Boashash, Time-Frequency Signal Analysis and Processing: A Comprehensive Reference, Academic Press, 2015.
[15] G. Gómez-Herrero, W. De Clercq, H. Anwar, O. Kara, K. Egiazarian, S. Van Huffel, W. Van Paesschen, Automatic removal of ocular artifacts in the EEG without an EOG reference channel, Proceedings of the 7th IEEE Nordic Signal Processing Symposium (2006) 130–133.
[16] H.-I. Choi, W.J. Williams, Improved time-frequency representation of multicomponent signals using exponential kernels, IEEE Trans. Acoust. Speech Signal Process. 37 (6) (1989) 862–871.
[17] S.L. Hahn, Hilbert Transforms in Signal Processing, vol. 2, Artech House, Boston, 1996.
[18] L. Cohen, Time-frequency distributions – a review, Proc. IEEE 77 (7) (1989) 941–981.
[19] R. Alazrai, R. Homoud, H. Alwanni, M.I. Daoud, EEG-based emotion recognition using quadratic time-frequency distribution, Sensors 18 (8) (2018) 2739.
[20] C.-C. Chang, C.-J. Lin, LIBSVM: a library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (3) (2011) 1–27.
[21] C.-W. Hsu, C.-C. Chang, C.-J. Lin, A practical guide to support vector classification, Tech. rep., Department of Computer Science, National Taiwan University, Taipei, Taiwan, 2003.
[22] J. Han, J. Pei, M. Kamber, Data Mining: Concepts and Techniques, Elsevier, 2011.
[23] F. Shiman, E. López-Larraz, A. Sarasola-Sanz, N. Irastorza-Landa, M. Spüler, N. Birbaumer, A. Ramos-Murguialday, Classification of different reaching movements from the same limb using EEG, J. Neural Eng. 14 (4) (2017) 046018.
Discussion 11 instructions

Find a journal article that displays information using at least two of the following (the attached article shows two of the items):
· Graphical display: pie chart, bar graph, histogram, boxplot, etc.
· Frequency distribution
· Measures of central tendency and measures of dispersion

For your posting, include a working link to the journal article or attach a PDF of it. Comment on the choice of display(s) and the measures of central tendency and dispersion, and how well they allow the reader to clearly understand the information being presented. Suggest changes that could make the reader better understand the information provided. Your answer must include at least 2 paragraphs, with a minimum of 4 sentences per paragraph.
 
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docx
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docxAssignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docx
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docxbobbywlane695641
 
Assignment 2 DiscussionDuring the first year or two of its exis.docx
Assignment 2 DiscussionDuring the first year or two of its exis.docxAssignment 2 DiscussionDuring the first year or two of its exis.docx
Assignment 2 DiscussionDuring the first year or two of its exis.docxbobbywlane695641
 
Assignment 2 Discussion QuestionWorking in teams leads to complex.docx
Assignment 2 Discussion QuestionWorking in teams leads to complex.docxAssignment 2 Discussion QuestionWorking in teams leads to complex.docx
Assignment 2 Discussion QuestionWorking in teams leads to complex.docxbobbywlane695641
 
Assignment 2 Discussion Question Strong corporate cultures have.docx
Assignment 2 Discussion Question Strong corporate cultures have.docxAssignment 2 Discussion Question Strong corporate cultures have.docx
Assignment 2 Discussion Question Strong corporate cultures have.docxbobbywlane695641
 

More from bobbywlane695641 (20)

Assignment 2 FederalismThe system of federalism was instituted wi.docx
Assignment 2 FederalismThe system of federalism was instituted wi.docxAssignment 2 FederalismThe system of federalism was instituted wi.docx
Assignment 2 FederalismThe system of federalism was instituted wi.docx
 
Assignment 2 FederalismThe system of federalism was instituted .docx
Assignment 2 FederalismThe system of federalism was instituted .docxAssignment 2 FederalismThe system of federalism was instituted .docx
Assignment 2 FederalismThe system of federalism was instituted .docx
 
Assignment 2 Evidence Based Practice at Good Seed Drop-InAcco.docx
Assignment 2 Evidence Based Practice at Good Seed Drop-InAcco.docxAssignment 2 Evidence Based Practice at Good Seed Drop-InAcco.docx
Assignment 2 Evidence Based Practice at Good Seed Drop-InAcco.docx
 
Assignment 2 Evidence Based PracticeAccording to the Council .docx
Assignment 2 Evidence Based PracticeAccording to the Council .docxAssignment 2 Evidence Based PracticeAccording to the Council .docx
Assignment 2 Evidence Based PracticeAccording to the Council .docx
 
Assignment 2 Evidence Based PracticeAccording to the Council on.docx
Assignment 2 Evidence Based PracticeAccording to the Council on.docxAssignment 2 Evidence Based PracticeAccording to the Council on.docx
Assignment 2 Evidence Based PracticeAccording to the Council on.docx
 
Assignment 2 Examining DifferencesIn this module, we examined cri.docx
Assignment 2 Examining DifferencesIn this module, we examined cri.docxAssignment 2 Examining DifferencesIn this module, we examined cri.docx
Assignment 2 Examining DifferencesIn this module, we examined cri.docx
 
Assignment 2 Ethics and Emerging TechnologiesRead the following.docx
Assignment 2 Ethics and Emerging TechnologiesRead the following.docxAssignment 2 Ethics and Emerging TechnologiesRead the following.docx
Assignment 2 Ethics and Emerging TechnologiesRead the following.docx
 
Assignment 2 Ethical Issues and Foreign InvestmentsBy Friday, A.docx
Assignment 2 Ethical Issues and Foreign InvestmentsBy Friday, A.docxAssignment 2 Ethical Issues and Foreign InvestmentsBy Friday, A.docx
Assignment 2 Ethical Issues and Foreign InvestmentsBy Friday, A.docx
 
Assignment 2 Ethical BehaviorIdentify a case in the news that y.docx
Assignment 2 Ethical BehaviorIdentify a case in the news that y.docxAssignment 2 Ethical BehaviorIdentify a case in the news that y.docx
Assignment 2 Ethical BehaviorIdentify a case in the news that y.docx
 
Assignment 2 Ethical (Moral) RelativismIn America, many are comfo.docx
Assignment 2 Ethical (Moral) RelativismIn America, many are comfo.docxAssignment 2 Ethical (Moral) RelativismIn America, many are comfo.docx
Assignment 2 Ethical (Moral) RelativismIn America, many are comfo.docx
 
Assignment 2 Essay Power in Swift and Moliere Both Moliere and S.docx
Assignment 2 Essay Power in Swift and Moliere Both Moliere and S.docxAssignment 2 Essay Power in Swift and Moliere Both Moliere and S.docx
Assignment 2 Essay Power in Swift and Moliere Both Moliere and S.docx
 
Assignment 2 E taxonomy· Information TechnologyInformatio.docx
Assignment 2 E taxonomy· Information TechnologyInformatio.docxAssignment 2 E taxonomy· Information TechnologyInformatio.docx
Assignment 2 E taxonomy· Information TechnologyInformatio.docx
 
Assignment 2 Dropbox AssignmentCurrent Trends and Issues in Manag.docx
Assignment 2 Dropbox AssignmentCurrent Trends and Issues in Manag.docxAssignment 2 Dropbox AssignmentCurrent Trends and Issues in Manag.docx
Assignment 2 Dropbox AssignmentCurrent Trends and Issues in Manag.docx
 
Assignment 2 Discussion—The Impact of CommunicationRemember a tim.docx
Assignment 2 Discussion—The Impact of CommunicationRemember a tim.docxAssignment 2 Discussion—The Impact of CommunicationRemember a tim.docx
Assignment 2 Discussion—The Impact of CommunicationRemember a tim.docx
 
Assignment 2 Discussion—Technology and GlobalizationYour Module.docx
Assignment 2 Discussion—Technology and GlobalizationYour Module.docxAssignment 2 Discussion—Technology and GlobalizationYour Module.docx
Assignment 2 Discussion—Technology and GlobalizationYour Module.docx
 
Assignment 2 Discussion—Providing GuidanceThe Genesis team has re.docx
Assignment 2 Discussion—Providing GuidanceThe Genesis team has re.docxAssignment 2 Discussion—Providing GuidanceThe Genesis team has re.docx
Assignment 2 Discussion—Providing GuidanceThe Genesis team has re.docx
 
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docx
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docxAssignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docx
Assignment 2 Discussion—Munger’s Mental ModelsIn his article A L.docx
 
Assignment 2 DiscussionDuring the first year or two of its exis.docx
Assignment 2 DiscussionDuring the first year or two of its exis.docxAssignment 2 DiscussionDuring the first year or two of its exis.docx
Assignment 2 DiscussionDuring the first year or two of its exis.docx
 
Assignment 2 Discussion QuestionWorking in teams leads to complex.docx
Assignment 2 Discussion QuestionWorking in teams leads to complex.docxAssignment 2 Discussion QuestionWorking in teams leads to complex.docx
Assignment 2 Discussion QuestionWorking in teams leads to complex.docx
 
Assignment 2 Discussion Question Strong corporate cultures have.docx
Assignment 2 Discussion Question Strong corporate cultures have.docxAssignment 2 Discussion Question Strong corporate cultures have.docx
Assignment 2 Discussion Question Strong corporate cultures have.docx
 

Recently uploaded

How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon AUnboundStockton
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonJericReyAuditor
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application ) Sakshi Ghasle
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxUnboundStockton
 
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptx
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptxENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptx
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptxAnaBeatriceAblay2
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxAvyJaneVismanos
 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerunnathinaik
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 

Recently uploaded (20)

How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Crayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon ACrayon Activity Handout For the Crayon A
Crayon Activity Handout For the Crayon A
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
Science lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lessonScience lesson Moon for 4th quarter lesson
Science lesson Moon for 4th quarter lesson
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docx
 
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptx
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptxENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptx
ENGLISH5 QUARTER4 MODULE1 WEEK1-3 How Visual and Multimedia Elements.pptx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Final demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptxFinal demo Grade 9 for demo Plan dessert.pptx
Final demo Grade 9 for demo Plan dessert.pptx
 
internship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developerinternship ppt on smartinternz platform as salesforce developer
internship ppt on smartinternz platform as salesforce developer
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 

Contents lists available at ScienceDirectNeuroscience Lett.docx

Decoding the movements of each finger within the same hand based on analyzing the EEG signals is more difficult than decoding the movements performed by different large body-parts, decoding the movements performed by a specific finger in the left hand from the movements of the matching finger in the right hand, or decoding the movements of the fingers from the movements of the wrist within the same hand [7,9,12]. This is because finger movements within the same hand activate relatively small, closely spaced regions in the sensorimotor cortex within the same hemisphere of the brain [7,9,13]. Consequently, using a neuroimaging modality with relatively low spatial resolution, such as EEG, to decode the movements of each finger within the same hand is a challenging task [7]. In addition, the nonstationary nature of the EEG signals implies that their spectral components vary as a function of time, so analyzing the EEG signals in the time domain or the frequency domain alone might not capture their spectral characteristics. Instead, the EEG signals should be represented in a joint time-frequency domain that can describe the spectral variations of the signals over time [14].
In this paper, we hypothesize that analyzing the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi-Williams distribution (CWD), can enable accurate decoding of finger movements within the same hand. In particular, the CWD is employed to characterize the time-varying spectral components of the EEG signals and to extract features that capture the movement-related information encapsulated within the EEG signals. The extracted CWD-based features are used to build a two-layer classification framework that can simultaneously identify each moving finger within the same hand and decode the movements performed by each identified finger.
2. Materials and methods

2.1. Subjects

Eighteen healthy subjects (6 females and 12 males; average ± standard deviation age of 21.2 ± 3.0 years) volunteered to participate in this study. EEG signals were recorded for each subject while performing twelve finger movements using her/his right hand, comprising four thumb-related movements, namely the thumb adduction, abduction, flexion, and extension movements, and the flexion and extension movements of the index, middle, ring, and little fingers. Before participating in the experiment, each subject received a thorough explanation of the experimental procedure and signed a consent form. The experimental procedure of this study was approved by the Research Ethics Committee at the German Jordanian University and was conducted in accordance with the Declaration of Helsinki.
2.2. Experimental protocol

At the beginning of the experiment, each subject was asked to sit on a chair and relax her/his arms on a table located in front of her/him. A computer screen was placed on the table at a distance of approximately 60 cm from the subject and was used to display visual cues. Each visual cue instructed the subject to perform a full flexion movement followed by a full extension movement using a specific finger, or a full adduction movement followed by a full abduction movement using the thumb.

For each trial, a visual cue was displayed for three seconds, followed by a black screen that prompted the subject to start performing the sequence of flexion and extension movements using a specific finger, or the adduction and abduction movements using the thumb. During the recording of each trial, the experimenter carefully followed the movements of the subject's fingers and pressed the button of an event marker to mark the transitions from flexion to extension and from adduction to abduction. Five trials were recorded for each finger movement per subject. The average ± standard deviation durations of the flexion and extension movements, computed over the five fingers and all subjects, are 4.1 ± 0.4 s and 3.6 ± 0.1 s, respectively, while the average ± standard deviation durations of the thumb adduction and thumb abduction movements, computed over all subjects, are 4.7 ± 0.15 s and 3.9 ± 0.14 s, respectively.
2.3. Data acquisition and preprocessing

The BioSemi ActiveTwo EEG system (https://www.biosemi.com) was used to record the EEG signals with 11 Ag/AgCl electrodes at a sampling rate of 2048 Hz. The electrodes were arranged on the scalp according to the 10-20 international electrode placement system at the following locations: F3, F4, Fz, C3, C4, Cz, P3, P4, Pz, T7, and T8, referenced to the common mode sense (CMS)/driven right leg (DRL) electrodes at the C1 and C2 locations. The recorded EEG signals were downsampled to 256 Hz and filtered with a bandpass filter with a bandwidth of 0.5-35 Hz. Moreover, the automatic artifact removal (AAR) toolbox [15] was employed to reduce the muscle and electrooculography (EOG) artifacts in the filtered EEG signals.
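For readers who want to prototype a comparable pipeline, the sketch below shows one plausible realization of this preprocessing stage using NumPy/SciPy. The paper specifies only the target sampling rate (256 Hz) and the 0.5-35 Hz passband; the filter family and order, the zero-phase filtering, and the function name are our own assumptions, and the AAR-based artifact reduction step is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, decimate, filtfilt

def preprocess_eeg(raw, fs_in=2048, fs_out=256, band=(0.5, 35.0)):
    """Hypothetical preprocessing stage: downsample to 256 Hz and
    bandpass-filter each channel to 0.5-35 Hz.

    raw: array of shape (n_channels, n_samples) recorded at fs_in.
    """
    factor = fs_in // fs_out                 # 2048 Hz -> 256 Hz (factor 8)
    x = decimate(raw, factor, axis=-1, zero_phase=True)
    # 4th-order zero-phase Butterworth bandpass (filter design assumed;
    # the paper only states the 0.5-35 Hz bandwidth).
    b, a = butter(4, list(band), btype="bandpass", fs=fs_out)
    return filtfilt(b, a, x, axis=-1)
```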
2.4. Time-frequency representation and feature extraction

In this study, we analyze the EEG signals using a quadratic time-frequency distribution (QTFD), namely the Choi-Williams distribution (CWD) [16]. The CWD can be viewed as a two-dimensional (2D) transformation that maps the original time-domain EEG signals into a joint time-frequency domain with excellent resolution in both time and frequency [14,16]. Hence, the CWD enables the construction of a time-frequency representation (TFR) of the EEG signals that quantifies the distribution of the energy encapsulated in the EEG signals over the time and frequency domains [14]. Specifically, to compute the CWD, we employed a sliding window that divides the EEG signal of each electrode into a set of overlapping segments, such that the size of each segment is 256 samples and the overlap between any two consecutive segments is 128 samples. The size and overlap of the sliding window were selected experimentally, as described in Section 3.3. The CWD of an EEG segment, denoted s(t), is then computed as follows [14]:
1. Compute the analytic signal of s(t), denoted x(t):

x(t) = s(t) + j\,\mathcal{H}\{s(t)\},   (1)

where \mathcal{H}\{\cdot\} is the Hilbert transform [17].

2. Compute the CWD of x(t), denoted \rho_x(t, f) [16,18]:

\rho_x(t, f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \chi_x(\mu, \nu)\, \kappa(\mu, \nu)\, e^{-j 2\pi (f\nu + t\mu)}\, d\mu\, d\nu,   (2)

where \chi_x(\mu, \nu) is the ambiguity function of x(t) and \kappa(\mu, \nu) is a time-frequency smoothing kernel. In particular, \chi_x(\mu, \nu) represents the Fourier transform of the auto-correlation function of x(t) and can be computed as follows [16,18]:

\chi_x(\mu, \nu) = \int_{-\infty}^{\infty} x\!\left(t + \frac{\nu}{2}\right) x^{*}\!\left(t - \frac{\nu}{2}\right) e^{j 2\pi \mu t}\, dt,   (3)

where x^{*}(\cdot) is the complex conjugate of x(\cdot). The time-frequency smoothing kernel, \kappa(\mu, \nu), is given by [16]:

\kappa(\mu, \nu) = \exp\!\left(-\frac{\mu^{2} \nu^{2}}{\alpha^{2}}\right),   (4)

where \alpha > 0 is a smoothing parameter that was experimentally selected to be 0.5.
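As a rough numerical illustration of Eqs. (1)-(4), the sketch below computes a discrete CWD of a single EEG segment. It is a didactic discretization under simplifying assumptions (integer lags in place of the half-integer lags of Eq. (3), unit-normalized grids for the kernel variables, and no normalization of the output); in practice a dedicated time-frequency toolbox would normally be used.

```python
import numpy as np
from scipy.signal import hilbert

def choi_williams(s, alpha=0.5):
    """Illustrative discrete Choi-Williams distribution of a 1-D segment s."""
    x = hilbert(np.asarray(s, dtype=float))  # Eq. (1): x(t) = s(t) + j*H{s(t)}
    N = len(x)
    # Instantaneous autocorrelation R[n, m] = x[n+m] * conj(x[n-m]),
    # with negative lags m wrapped into the FFT layout.
    R = np.zeros((N, N), dtype=complex)
    for n in range(N):
        m_max = min(n, N - 1 - n)
        m = np.arange(-m_max, m_max + 1)
        R[n, m % N] = x[n + m] * np.conj(x[n - m])
    # Ambiguity function (Eq. (3)): Fourier transform over the time index.
    A = np.fft.fft(R, axis=0)
    # Choi-Williams smoothing kernel (Eq. (4)) on normalized (mu, nu) grids.
    mu = np.fft.fftfreq(N)[:, None]
    nu = np.fft.fftfreq(N)[None, :]
    A *= np.exp(-(mu ** 2) * (nu ** 2) / alpha ** 2)
    # Back to the time-frequency plane (Eq. (2)): inverse transform over mu,
    # forward transform over the lag index.
    rho = np.fft.fft(np.fft.ifft(A, axis=0), axis=1)
    return np.abs(rho)                       # energy-like TFR
```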
The dimensionality of the CWD-based TFR computed for each EEG segment is H × L, where H = 256 is the number of time samples within an EEG segment and L = 512 is the number of frequency samples. Such a high dimensionality can increase the complexity of the classification task. To reduce the dimensionality of the obtained CWD-based TFR, we extend two frequency-domain features, namely the normalized Renyi entropy and the energy concentration, to the joint time-frequency domain [11,19,14]. These two extended features quantify the constructed CWD-based TFR of each EEG segment. In particular, the normalized Renyi entropy, F_1, of the CWD quantifies the regularity of the distribution of the energy encapsulated within a specific EEG segment [11,19,14]:

F_1 = -\frac{1}{2} \log_{2} \left( \sum_{t=1}^{H} \sum_{f=1}^{L} \left( \frac{\rho_x(t, f)}{\sum_{t=1}^{H} \sum_{f=1}^{L} \rho_x(t, f)} \right)^{3} \right).   (5)

On the other hand, the energy concentration, F_2, of the CWD describes the spread of the energy encapsulated within a specific EEG segment [11,19,14]:

F_2 = \left( \sum_{t=1}^{H} \sum_{f=1}^{L} \left| \rho_x(t, f) \right|^{1/2} \right)^{2}.   (6)

Fig. 1 provides a graphical illustration of the feature extraction process. In particular, at each position of the sliding window, the features F_1 and F_2 are computed from the constructed CWD-based TFR of each EEG segment. The total number of EEG segments at each window position is 11, where each segment represents the EEG signal of a particular electrode within the current window position. The features extracted from the EEG segments at a particular window position are grouped to form a feature vector. Therefore, the total number of features comprised within each feature vector is 22.

Fig. 1. Graphical illustration of the feature extraction procedure employed at each window position.
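The two TFR features and the sliding-window procedure translate into a short routine. This is a minimal sketch: the order-3 Renyi entropy is assumed here because it matches the -(1/2) log2 prefactor of Eq. (5), the TFR is assumed non-negative, and choi_williams refers to the illustrative function sketched above.

```python
import numpy as np

def renyi_entropy(tfr, order=3):
    """Normalized Renyi entropy of a TFR (Eq. (5)); order 3 assumed."""
    p = tfr / tfr.sum()
    return (1.0 / (1 - order)) * np.log2((p ** order).sum())

def energy_concentration(tfr):
    """Energy concentration of a TFR (Eq. (6))."""
    return np.sqrt(np.abs(tfr)).sum() ** 2

def sliding_features(eeg, win=256, hop=128):
    """F1/F2 features per channel for each 256-sample window (50% overlap).

    eeg: (n_channels, n_samples) array; returns one row of
    2 * n_channels features per window position (22 for 11 electrodes).
    """
    rows = []
    for start in range(0, eeg.shape[1] - win + 1, hop):
        row = []
        for channel in eeg[:, start:start + win]:
            tfr = choi_williams(channel)
            row += [renyi_entropy(tfr), energy_concentration(tfr)]
        rows.append(row)
    return np.asarray(rows)
```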
2.5. Classification framework

In this work, we propose a two-layer classification framework (2LCF) to simultaneously identify each moving finger within the same hand and decode the movements performed by each identified finger. The proposed 2LCF converts the original complex classification task (i.e., classifying a feature vector into one of the twelve finger movements described in Section 2.1) into a sequence of two simpler classification tasks, one per layer. The first classification layer comprises one classifier, denoted C1,1, that analyzes each input feature vector to identify the moving finger within the same hand, without specifying the movement performed by that finger. Explicitly, the C1,1 classifier assigns each input feature vector to one of five movement classes: the thumb movement (M1), index movement (M2), middle movement (M3), ring movement (M4), and little movement (M5). In this study, we refer to the movements M1 to M5 as movements of different fingers within the same hand. The C1,1 classifier is implemented as a multi-class support vector machine (SVM) classifier with a radial basis function (RBF) kernel [20].

After that, the input feature vector is passed to the second classification layer, which consists of five SVM classifiers with RBF kernels. Each classifier in the second layer is associated with a particular finger and decodes the movements performed by that finger. Specifically, the first classifier, denoted C2,1, is a multi-class SVM classifier that assigns a feature vector identified as class M1 at the first layer to one of four thumb-related movements: thumb adduction (M1,1), thumb abduction (M1,2), thumb flexion (M1,3), and thumb extension (M1,4). The second classifier, C2,2, is a binary SVM classifier that assigns a feature vector identified as class M2 to the index flexion (M2,1) or index extension (M2,2) movement. Similarly, the binary classifiers C2,3, C2,4, and C2,5 assign feature vectors identified as classes M3, M4, and M5 to the middle flexion (M3,1) or extension (M3,2) movement, the ring flexion (M4,1) or extension (M4,2) movement, and the little flexion (M5,1) or extension (M5,2) movement, respectively. In this study, we refer to the movements M1,1 through M5,2 as movements of the same finger. Fig. 2 provides a structure diagram of the proposed 2LCF.
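A minimal sketch of this two-layer structure, using scikit-learn's SVC (which implements multi-class classification with the same one-against-one scheme mentioned in Section 2.6), is given below. The class name, label encoding, and default hyperparameters are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

class TwoLayerClassifier:
    """Hypothetical 2LCF sketch: layer 1 picks the moving finger (1..5),
    layer 2 decodes the movement of the identified finger."""

    def __init__(self, C=1.0, gamma="scale"):
        self.layer1 = SVC(kernel="rbf", C=C, gamma=gamma)      # C_1,1
        self.layer2 = {k: SVC(kernel="rbf", C=C, gamma=gamma)  # C_2,k
                       for k in range(1, 6)}

    def fit(self, X, finger, movement):
        """X: (n, 22) feature array; finger in {1..5}; movement encodes the
        within-finger class (4 classes for the thumb, 2 otherwise)."""
        self.layer1.fit(X, finger)
        for k, clf in self.layer2.items():
            mask = finger == k
            clf.fit(X[mask], movement[mask])
        return self

    def predict(self, X):
        """Return (finger, movement) pairs, one per feature vector."""
        fingers = self.layer1.predict(X)
        return [(int(f), int(self.layer2[int(f)].predict(x[None, :])[0]))
                for f, x in zip(fingers, np.asarray(X))]
```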
2.6. Performance evaluation procedures and metrics

For each subject, we construct the 2LCF by utilizing a ten-fold cross-validation procedure to train and test the SVM classifiers within the first and second layers of the proposed 2LCF [11,19]. The ten-fold cross-validation procedure is repeated ten times, and the average classification performance for each subject is computed over the ten repetitions. The multi-class SVM classifiers in our proposed 2LCF, namely the C1,1 and C2,1 classifiers, are implemented using the one-against-one scheme [20]. In addition, the regularization parameter, C, and the RBF kernel parameter, γ, of each SVM classifier are tuned by performing a grid search to find the values of C and γ that minimize the classification error [21].
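In scikit-learn terms, this tuning step might look as follows; the candidate grids are hypothetical, since the exact values searched are not listed in the paper.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical logarithmic grids for C and gamma.
param_grid = {"C": [2.0 ** k for k in range(-2, 9, 2)],
              "gamma": [2.0 ** k for k in range(-8, 1, 2)]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=10,
                      scoring="accuracy")
# search.fit(X_train, y_train) then exposes search.best_params_ and a
# refitted classifier in search.best_estimator_.
```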
To quantify the classification performance of each classifier in the constructed 2LCF, we employed two standard evaluation metrics, namely the classification accuracy (CA) and the F1-score, computed as follows [22]:

CA = \frac{tp + tn}{tp + tn + fp + fn} \times 100\%,   (7)

F_1\text{-score} = \frac{2 (P \times R)}{P + R} \times 100\%,   (8)

where tp, tn, fp, and fn represent the numbers of true positive, true negative, false positive, and false negative cases, respectively. In addition, P = tp/(tp + fp) and R = tp/(tp + fn) represent the precision and recall, respectively. The F1-score is the harmonic mean of the precision and recall, and thus takes the false positive and false negative rates into consideration.
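Both metrics follow directly from the confusion-matrix counts; the function names below are our own.

```python
def classification_accuracy(tp, tn, fp, fn):
    """Eq. (7): percentage of correctly classified cases."""
    return 100.0 * (tp + tn) / (tp + tn + fp + fn)

def f1_score_pct(tp, fp, fn):
    """Eq. (8): harmonic mean of precision and recall, in percent."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 100.0 * 2.0 * precision * recall / (precision + recall)
```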
3. Experimental results

3.1. Results of the first classification layer

In this section, we present the classification results of the first classification layer, namely the C1,1 classifier. Table 1 shows the F1-scores obtained for the M1, M2, M3, M4, and M5 classes for each of the eighteen subjects (S1 to S18). The average F1-score values obtained for the M1, M2, M3, M4, and M5 classes are 86.7%, 87.1%, 84.1%, 79.0%, and 86.5%, respectively.

Fig. 3 shows the CA values and the corresponding standard deviations obtained by the first classification layer for each subject. The results presented in Fig. 3 show that the mean ± standard deviation CA value of the first classification layer, which discriminates between the M1, M2, M3, M4, and M5 movement classes, computed over all eighteen subjects is 85.85 ± 1.1%. Moreover, the mean CA values computed for the eighteen subjects range between 74.1%, obtained for subject 10 (S10), and 93.1%, obtained for subject 16 (S16).

In addition, we compare the CA values of the C1,1 classifier computed for each subject with the random classification rate (RCR), which is defined as the reciprocal of the number of classes and has a value of 20%, by performing t-tests with a significance level of 0.01. The p values computed for all eighteen subjects were less than 0.01, which indicates that the CA obtained for the C1,1 classifier of each subject is significantly higher than the RCR (the red dashed line in Fig. 3).
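A one-sample t-test of the per-repetition CA values against the RCR, as sketched below, is one plausible reading of this comparison; the paper does not state whether the test was one- or two-sided, or what the unit of replication was.

```python
from scipy.stats import ttest_1samp

def significantly_above_rcr(ca_values, rcr=20.0, alpha=0.01):
    """One-sided one-sample t-test: are the CA values (in %) above the RCR?"""
    t, p = ttest_1samp(ca_values, popmean=rcr)
    return t > 0 and p / 2 < alpha
```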
3.2. Results of the second classification layer

In this section, we present the classification results of each classifier in the second classification layer, namely the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers, computed for each of the eighteen subjects. Table 2 presents the F1-scores computed for the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers per subject, along with the mean F1-scores computed for each of the five classifiers over the eighteen subjects. In particular, for the C2,1 classifier, the average F1-score values obtained for decoding the four thumb movements, namely the M1,1, M1,2, M1,3, and M1,4 movements, are 67.3%, 53.5%, 67.4%, and 56.1%, respectively. For the C2,2 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the index finger, namely the M2,1 and M2,2 movements, are 72.1% and 67.5%, respectively. Moreover, for the C2,3 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the middle finger, namely the M3,1 and M3,2 movements, are 76.7% and 67.2%, respectively. For the C2,4 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the ring finger, namely the M4,1 and M4,2 movements, are 74.9% and 59.5%, respectively. Finally, for the C2,5 classifier, the average F1-score values obtained for decoding the flexion and extension movements of the little finger, namely the M5,1 and M5,2 movements, are 70.7% and 63.0%, respectively.

Fig. 2. Structure diagram of the proposed 2LCF.

Fig. 4 presents the CAs and corresponding standard deviations obtained for the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers per subject, along with the average CA values computed for each of the five classifiers over the eighteen subjects. In particular, the results presented in Fig. 4 indicate that the C2,1 classifier classified the M1,1, M1,2, M1,3, and M1,4 movements with an average ± standard deviation CA of 64.6 ± 3.6%. Moreover, the C2,2 classifier classified the M2,1 and M2,2 movements with an average ± standard deviation CA of 70.4 ± 5.5%. For the C2,3 classifier, the average ± standard deviation CA obtained in discriminating between the M3,1 and M3,2 movements was 73.4 ± 5.5%. In addition, the C2,4 classifier classified the M4,1 and M4,2 movements with an average ± standard deviation CA of 70.4 ± 5.3%. Finally, for the C2,5 classifier, the average ± standard deviation CA achieved in discriminating between the M5,1 and M5,2 movements was 70.2 ± 3.9%.

In addition, for each subject, we compare the CA values computed for each classifier in the second classification layer with the RCR value associated with that classifier by performing t-tests with a significance level of 0.01. In particular, the RCR of the C2,1 classifier, shown as a blue dashed line in Fig. 4, is equal to 25%, while the RCR associated with each of the other four classifiers in the second classification layer, shown as a black dashed line in Fig. 4, is equal to 50%. For each classifier in the second classification layer, the p values computed for all subjects were less than 0.01, which indicates that the CA values computed for each subject per classifier are significantly higher than the corresponding RCRs.

3.3. Analysis of the effect of the sliding window size

Table 3 provides the average CA and F1-score values computed for the classifiers in our proposed 2LCF using different sizes and overlaps of the sliding window. In particular, the average CA and F1-score values presented in Table 3 are computed across all subjects using the cross-validation procedure described in Section 2.6. These results indicate that the best average CA and F1-score values were obtained when the size of the sliding window and the overlap are 256 and 128 samples, respectively.

3.4. Comparison with the traditional multi-class SVM classifier (TMCC)

In this section, we compare the classification performance of our proposed 2LCF with the TMCC. In particular, the TMCC consists of one multi-class SVM classifier with an RBF kernel that classifies feature vectors into one of the twelve finger movements described in Section 2.1. The performance of the TMCC was evaluated using the ten-fold cross-validation procedure described in Section 2.6. Table 4 presents the average CA and F1-score values computed over the ten repetitions of the cross-validation procedure across the twelve finger movements per subject. The average CA and F1-score values computed for the TMCC over all subjects are 40.5% and 39.1%, respectively. In contrast, the average CA and F1-score values computed for the second classification layer of our proposed 2LCF, which are presented in Fig. 4, are 69.8% and 67.4%, respectively. These results indicate that our proposed 2LCF significantly outperforms the TMCC.

Table 1. The F1-scores (%) of the C1,1 classifier computed for each subject.

Movement  S1    S2    S3    S4    S5    S6    S7    S8    S9    S10   S11   S12   S13   S14   S15   S16   S17   S18   Average
M1        83.7  76.5  92.1  91.2  84.3  90.2  88.6  87.6  92.0  81.5  86.5  83.0  78.0  90.7  91.4  90.5  86.4  86.5  86.7
M2        89.8  77.8  83.5  89.6  78.9  90.5  94.6  86.8  89.2  78.4  82.3  76.4  89.3  94.3  87.7  94.9  97.5  87.5  87.2
M3        81.2  85.3  87.8  83.5  77.8  69.8  91.2  88.9  88.2  72.6  80.3  81.9  79.6  92.9  77.5  93.5  91.1  91.2  84.1
M4        58.9  77.3  89.1  85.2  65.5  74.3  79.9  94.2  96.5  71.3  90.5  67.9  80.9  74.4  86.6  72.2  91.8  65.5  79.0
M5        84.7  80.1  88.0  91.6  91.2  79.7  91.9  89.3  88.9  68.1  90.4  84.2  83.5  94.6  85.3  96.7  85.7  83.4  86.5

Fig. 3. The CA values of the C1,1 classifier computed for each subject. The red vertical lines represent the standard deviations in the CA values, and the red dashed line represents the RCR.
  • 29. S17 74.7 45.7 76.5 53.0 85.9 76.9 84.7 75.6 85.7 60.6 76.9 53.0 S18 55.0 58.8 67.1 59.5 72.3 79.7 77.0 68.0 82.8 58.4 56.4 69.4 Average 67.3 53.6 67.4 56.0 72.0 67.5 76.7 67.2 74.9 59.5 70.8 63.0 Fig. 4. The CA values of the C2,1, C2,2, C2,3, C2,4, and C2,5 classifiers computed for each subject. The red vertical lines represent the standard deviations in the CAs. The blue da- shed line represents the RCR of the C2,2 clas- sifier, while the black dashed line represents the RCR of the C2,2, C2,3, C2,4, and C2,5 classi- fiers. (For interpretation of the references to color in text/this figure legend, the reader is referred to the web version of the article.) Table 3 The average CA (%) and F1-scores (%) computed for the classifiers of our proposed 2LCF using different sizes of the sliding window. Sliding window size Overlap size First classification layer Second classification layer C1,1 C2,1 C2,2 C2,3 C2,4 C2,5 CA F1-score CA F1-score CA F1-score CA F1-score CA F1- score CA F1-score 64 32 74.7 74.1 60.5 56.4 65.9 63.7 64.4 62.2 65.8 63.2 67.6 64.7 128 64 80.5 80.0 63.6 59.3 69.6 67.3 67.1 65.2 68.4 66.9 68.5 64.9 256 128 85.8 84.7 64.6 61.1 70.4 69.8 73.4 71.9 70.4 67.2 70.2 66.9
  • 30. 512 256 83.2 81.9 62.9 46.8 68.1 59.9 71.1 56.9 69.8 59.3 69.6 52.9 1024 512 70.3 66.4 49.0 26.2 63.7 45.7 59.9 44.4 73.1 43.3 67.2 37.6 R. Alazrai et al. Neuroscience Letters 698 (2019) 113–120 118 features were able to obtain accurate identification of the moving fin- gers within the same hand. In particular, Table 1 indicates that the average F1-scores obtained for the identification of the moving fingers are above 79%, where the lowest F1-score of 79.0% was computed for the ring finger. Moreover, the results of the first classification layer suggest that the EEG signals encapsulate sufficient information to de- code the movements of fine body-parts, such as the finger movements, which complies with the findings reported in previous studies, such as [7,8]. 4.2. Movements of the same finger within the same hand The results obtained for the second classification layer, which are provided in Table 2 and Fig. 4, indicate that the discrimination between the movements performed by a specific finger is more
  • 31. challenging than decoding the movements performed by different fingers within the same hand. For example, the first classification layer was able to identify that the moving finger is the thumb finger with an average F1- score of 86.7%, while the second classification layer was able to dis- criminate between the four thumb-related movements with an average F1-score of 61.1%. Moreover, the results presented in Fig. 4 indicate that increasing the number of movements performed by the same finger can drastically decrease the ability to differentiate between the move- ments performed by the same finger. In particular, Fig. 4 shows that the mean CA value of the C2,1 classifier, which discriminates between four thumb-related movements, is significantly lower than the CAs obtained for each of the other four classifiers in the second classification layer that discriminates between only two movements of each finger. This reduction in the classification performance can be attributed to the following factors: (1) Finger movements within the same hand activate relatively close regions in the sensorimotor cortex area within the same hemisphere of the brain [7,9,13]. (2) The limited spatial resolution and low signal-to-noise ratio (SNR) of the EEG modality reduces
  • 32. the cap- ability of capturing brain activities within the activated regions of the brain during finger movements [7]. (3) As a consequence of the two factors mentioned above, increasing the number of different move- ments that are performed by each finger can significantly increase the difficulty of dividing the feature space into separable decision regions, where each region comprises feature vectors that belong to a particular movement. In fact, several previous EEG-related studies have indicated that the CAs obtained for multi-class classification problems, which involve more than two classes, were significantly lower than the CAs obtained for binary classification problems that involve two classes [7,19,23]. 4.3. Comparison with other approaches Recently, few studies have investigated the possibility of decoding finger movements within the same hand based on EEG signals. For example, Liao et al. [7] recorded EEG signals using 128 electrodes for eleven healthy subjects while performing flexion and extension move- ments using each of the fingers in their right hands. The recorded EEG signals were analyzed using principal component analysis and a
  • 33. set of power spectrum-based features. For each subject, the extracted features were used to build ten binary SVM classifiers, where each classifier is associated with a pair of fingers. The average CA computed for all pairs of fingers across all subjects was 77.1%. In another study, Quandt et al. [8] recorded the EEG signals using 32 electrodes for thirteen healthy subjects while pressing a button using the thumb, index, middle, and little fingers. For each subject, the recorded EEG signals were utilized to construct four binary SVM classifiers, where each classifier is associated with one of the four fingers. In particular, the first, second, third, and fourth classifiers aim to identify whether the moving finger is the thumb, index, middle, and little, respectively. The average CA value computed over the four fingers across all subjects was 43.5%. The current study provides five improvements over the approaches presented in [7,8]: Firstly, the first classification layer of our proposed 2LCF utilizes a multi-class SVM classifier to discriminate between the movement of all fingers within the same hand rather than using binary SVM classifiers to discriminate between the movements of each pair of
  • 34. fingers as in [7,8]. In this regard, Liao et al. [7] indicated that dis- criminating between the movements of individual fingers within the same hand using a multi-class classifier is more difficult than dis- criminating between the movements of each pair of fingers within the same hand. In fact, the use of multi-class classification approaches to decode finger movements within the same hand can facilitate the de- velopment of EEG-based BCI systems with higher control's dimensions, which can enhance the functionality of various dexterous assistive de- vices. Secondly, the current study considered both identifying move- ments of different fingers within the same hand and decoding the movements of the moving finger, while the approaches presented in [7,8] considered only the identification of the moving finger without decoding the movement performed by the moving finger. Hence, our proposed approach can increase the control's dimensions of the devel- oped EEG-based BCI systems. Thirdly, the results reported in the cur- rent study are based on utilizing eleven EEG electrodes compared with the results reported in the studies [7,8] which are based on utilizing 128 and 32 electrodes, respectively. This demonstrates the capability of
  • 35. the extracted CWD-based features to capture movement-related in- formation that are encapsulated within the EEG signals to achieve high CA values. Fourthly, to the best of our knowledge, this is the first study that investigates the possibility of decoding twelve different movements performed by different fingers within the same hand. Such a large number of movements makes the classification task more challenging compared with the approaches [7,8] that considered decoding only five and four movements, respectively, of individual fingers within the same hand. Fifthly, the classification architectures employed in the ap- proaches [7,8] are based on constructing a set of binary SVM classifiers, where each classifier produces one decision. These decisions were not combined to produce one final decision that specifies the moving finger within the same hand. Such classification architecture can highly suffer from the false positive error, in which a feature vector can be collec- tively misclassified by multiple binary classifiers and assigned to in- correct fingers. In contrast, our proposed 2LCF produces one decision at the first classification layer that identifies the moving finger, and one decision at the second classification layer that specifies the movement
  • 36. performed by the identified moving finger. 5. Conclusions In the present study, we have demonstrated the possibility of de- coding movements performed by each finger within the same hand Table 4 The average CA (%) and F1-score (%) values computed for each subject using the TMCC. Evaluation metric Subject Average across all subjects S1 S2 S3 S4 S5 S6 S7 S8 S9 S10 S11 S12 S13 S14 S15 S16 S17 S18 CA 42.8 35.7 41.3 39.6 41.8 36.0 44.2 43.3 41.4 36.2 43.9 37.6 34.4 42.5 45.8 44.8 39.0 37.8 40.5 F1-score 44.3 36.0 39.2 39.6 41.8 36.4 44.4 43.4 37.4 35.3 36.6 39.2 35.5 40.2 44.4 38.0 35.3 37.0 39.1 R. Alazrai et al. Neuroscience Letters 698 (2019) 113–120 119 using EEG signals. The results presented in this study suggest the fea- sibility of using the CWD to analyze the EEG signals and extract quantitative features that are capable of discriminating between dif- ferent finger movements. In the future, we plan to investigate
5. Conclusions

In the present study, we have demonstrated the possibility of decoding movements performed by each finger within the same hand using EEG signals. The results presented in this study suggest the feasibility of using the CWD to analyze the EEG signals and extract quantitative features that are capable of discriminating between different finger movements. In the future, we plan to investigate the following research directions: (1) studying the problem of decoding simultaneous movements performed by multiple fingers within the same hand to improve the control mechanisms of prosthetic hands; (2) extending our experiments to include subjects with a wider age range and a balanced gender distribution; and (3) studying the potential of using a larger number of EEG channels, providing high-resolution coverage of the motor cortex, to enhance the accuracy of classifying the movements performed by the same finger.

Acknowledgements

This work is supported by the Scientific Research Support Fund of Jordan (grant no. ENG/1/9/2015).

References

[1] L.F. Nicolas-Alonso, J. Gomez-Gil, Brain–computer interfaces, a review, Sensors 12 (2) (2012) 1211–1279.
[2] G. Pfurtscheller, C. Neuper, Motor imagery and direct brain–computer communication, Proc. IEEE 89 (7) (2001) 1123–1134.
[3] A.S. Royer, A.J. Doud, M.L. Rose, B. He, EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies, IEEE Trans. Neural Syst. Rehabil. Eng. 18 (6) (2010) 581–589.
[4] E.W. Sellers, T.M. Vaughan, J.R. Wolpaw, A brain–computer interface for long-term independent home use, Amyotroph. Lateral Scler. 11 (5) (2010) 449–455.
[5] R. Scherer, G. Muller, C. Neuper, B. Graimann, G. Pfurtscheller, An asynchronously controlled EEG-based virtual keyboard: improvement of the spelling rate, IEEE Trans. Biomed. Eng. 51 (6) (2004) 979–984.
[6] G. Pfurtscheller, C. Guger, G. Müller, G. Krausz, C. Neuper, Brain oscillations control hand orthosis in a tetraplegic, Neurosci. Lett. 292 (3) (2000) 211–214.
[7] K. Liao, R. Xiao, J. Gonzalez, L. Ding, Decoding individual finger movements from one hand using human EEG signals, PLOS ONE 9 (1) (2014) e85192.
[8] F. Quandt, C. Reichert, H. Hinrichs, H.-J. Heinze, R. Knight, J.W. Rieger, Single trial discrimination of individual finger movements on one hand: a combined MEG and EEG study, Neuroimage 59 (4) (2012) 3316–3324.
[9] B.J. Edelman, B. Baxter, B. He, EEG source imaging enhances the decoding of complex right-hand motor imagery tasks, IEEE Trans. Biomed. Eng. 63 (1) (2016) 4–14.
[10] A. Vuckovic, F. Sepulveda, Delta band contribution in cue based single trial classification of real and imaginary wrist movements, Med. Biol. Eng. Comput. 46 (6) (2008) 529–539.
[11] R. Alazrai, H. Alwanni, Y. Baslan, N. Alnuman, M.I. Daoud, EEG-based brain–computer interface for decoding motor imagery tasks within the same hand using Choi–Williams time-frequency distribution, Sensors 17 (9) (2017) 1937.
[12] X. Yong, C. Menon, EEG classification of different imaginary movements within the same limb, PLOS ONE 10 (4) (2015) e0121896.
[13] G. Pfurtscheller, F.L. Da Silva, Event-related EEG/MEG synchronization and desynchronization: basic principles, Clin. Neurophysiol. 110 (11) (1999) 1842–1857.
[14] B. Boashash, Time-frequency Signal Analysis and Processing: A Comprehensive Reference, Academic Press, 2015.
[15] G. Gómez-Herrero, W. De Clercq, H. Anwar, O. Kara, K. Egiazarian, S. Van Huffel, W. Van Paesschen, Automatic removal of ocular artifacts in the EEG without an EOG reference channel, Proceedings of the 7th IEEE Nordic Signal Processing Symposium (2006) 130–133.
[16] H.-I. Choi, W.J. Williams, Improved time-frequency representation of multicomponent signals using exponential kernels, IEEE Trans. Acoust. Speech Signal Process. 37 (6) (1989) 862–871.
[17] S.L. Hahn, Hilbert Transforms in Signal Processing, vol. 2, Artech House, Boston, 1996.
[18] L. Cohen, Time-frequency distributions – a review, Proc. IEEE 77 (7) (1989) 941–981.
[19] R. Alazrai, R. Homoud, H. Alwanni, M.I. Daoud, EEG-based emotion recognition using quadratic time-frequency distribution, Sensors 18 (8) (2018) 2739.
[20] C.-C. Chang, C.-J. Lin, LIBSVM: a library for support vector machines, ACM Trans. Intell. Syst. Technol. 2 (3) (2011) 1–27.
[21] C.-W. Hsu, C.-C. Chang, C.-J. Lin, et al., A practical guide to support vector classification, Tech. rep., Department of Computer Science, National Taiwan University, Taipei, Taiwan, 2003.
[22] J. Han, J. Pei, M. Kamber, Data Mining: Concepts and Techniques, Elsevier, 2011.
[23] F. Shiman, E. López-Larraz, A. Sarasola-Sanz, N. Irastorza-Landa, M. Spüler, N. Birbaumer, A. Ramos-Murguialday, Classification of different reaching movements from the same limb using EEG, J. Neural Eng. 14 (4) (2017) 046018.
Discussion 11 instructions

Find a journal article that displays information using at least two of the following (the attached article shows two of these items):
· Graphical display: pie chart, bar graph, histogram, boxplot, etc.
· Frequency distribution
· Measures of central tendency and measures of dispersion

For your posting, provide a working link to the journal article or attach a PDF of it. Comment on the choice of display(s) and the measures of central tendency and dispersion, and how well they allow the reader to clearly understand the information being presented. Suggest changes that could help the reader better understand the information provided, and answer the question(s) in the instructions. Your answer must include at least two paragraphs, with a minimum of four sentences per paragraph.