MASTER’S PROJECT, THE COLLEGE OF WILLIAM AND MARY, SPRING 2015 1
An Exploration on the Potential of an
Electroencephalographic Headset for Human
Computer Interaction
Johnathan Savino, Peter Kemper
Department of Computer Science
The College of William and Mary
Williamsburg, VA, 23185, USA
{jesavino, kemper}@cs.wm.edu
Abstract—As computers become more and more integrated
into our daily life, so does the need for improved ways of
interacting with them. We look into the possibility of using an
EEG Brain Sensing Headset as a means of better interfacing with
a computer. Through a user study we analyze human brain
wave patterns when responding to simple yes - no questions, in
addition to looking at the accuracy of an Emotiv EEG Headset
in recognizing different facial movement patterns. We implement
our findings into a brain controlled music player, capable of
recognizing head movements and certain facial movements to
turn the player on and off and to rate played songs hands free. We provide data to conclude that both brow motion and head motion yield accurate and reliably recognizable signals for deployment in a variety of brain computer applications.
Index Terms—EEG, Brain Computer Interface, Human Com-
puter Interaction, User Study
1 INTRODUCTION
INTERACTING with computers has become so
common in our daily lives that many people
will not go more than one day without spending
some time in front of one of their devices. This inter-
action has been developed carefully over the last
fifty years, moving from keyboard only systems,
towards more advanced graphical user interfaces.
It only seems natural that if we can improve the
way that humans interact with their computers, we
can drastically improve one of the more frequent
interactions we make throughout the day. In doing so, we not only reduce the amount of time required to complete mundane tasks, but also allow more time to be spent on the problem at hand instead of on interfacing with the computer. We are also able to broaden the group of users able to use computers by decreasing the learning curve required to use a range of computing devices.

This project was approved by the College of William and Mary Protection of Human Subjects Committee (Phone 757-221-3966) on 2015-03-03 and expires on 2016-03-03.
We would like to utilize Electroencephalo-
graphic (EEG) sensing equipment in order to ex-
ploit common brain signal patterns which occur in
tandem with our daily interactions. By harnessing
these patterns in the different regions of the brain, it
is possible to track different emotions and recognize
evoked signals based on physical or visual stimuli.
EEG sensors work by measuring the electrical sig-
nals in different regions of the brain, which, when
combined, allow features such as mood to be ex-
tracted in real time. The benefit to EEG based brain
monitoring systems is that EEG is non-invasive; all
signal collection is done with sensors placed on top
of the head. This allows for easy integration into
daily life.
In order to provide a fully immersive experience
for the everyday user, there must be a way to accurately separate the signal we get from the EEG sensors from the noise of background brain activity. While a significant amount of work has been
done mapping different cortical regions of the brain
to specific emotions, less has been done looking at
the signals the brain produces throughout everyday
interaction with a computer. More of this will be
discussed in section 2 with other related works.
This study is an attempt to see how the brain
responds throughout interaction with computers.
More specifically, we will look at how users' affective and expressive responses change as they are
presented a series of yes and no questions. These
binary questions can be assumed similar to confir-
mation dialogs often encountered with a computer.
In addition, we will look to determine which fa-
cial expressions are best utilized as a state control
trigger. The main contributions of this paper can be
summarized as follows:
• We present a user study which looks to
analyze how users' brains respond through-
out interaction with a computer. This infor-
mation becomes incredibly powerful when
attempting to interconnect brain and com-
puter. From this study we learn typical brain
patterns experienced by average users when
they interact with a computer on an every-
day basis.
• We implement a simple binary controller
into an EEG based brain music player. This
system allows users to make choices in a program, such as accepting a dialog popup, using the gyroscope in the headset.
• We implement an on - off controller to pause
and unpause the music player. Users can
raise their brow twice to achieve this, which
adds the ability to operate the player hands
free.
The rest of this paper is structured as follows. We
present related work and our motivation in section
(2), information on the user study in section (3),
an analysis of the data in section (4), and finally
the background of our implementation in the music
player in section (5). We then conclude and present
our future work.
2 MOTIVATION
While electroencephalography has been around for
a number of years, the work done in regards to
human computer interaction has not been explored
to its full potential.
2.1 EEG Background
While many people have heard of electroen-
cephalography, it certainly is not an everyday term.
In order to grasp the limitations of EEG as a brain
sensing solution, we will first discuss two different
approaches of sensing.
Before we do this, we will define what an Event
Related Potential (ERP) is. In short, an ERP measures the brain's specific response to a cognitive, sensory, or motor event. We go
further into how ERPs are used in BCI.
2.1.1 SSVEP
The first approach is the analysis of Steady-State
Visual Evoked Potentials (SSVEP). SSVEPs are natural responses to visual stimulation at specific frequencies
[1]. These visually evoked potentials are elicited
by sudden visual stimuli and the repetitive stimuli
lead to stable oscillations in EEG. These voltage
oscillation patterns are called SSVEP [2].
SSVEP is evoked at the frequency of the stim-
ulus. When the retina is excited by a visual cue in
range of 3.5 Hz to 75 Hz, the brain generates electri-
cal activity mimicking this frequency [2]. This activ-
ity can be further broken down into low, medium,
and high frequency bands. Because the response is locked to the stimulus frequency, SSVEP is a good indicator of visual disease in a variety of patients.
In relation to BCI, SSVEP functions well in ap-
plications that send a large number of commands
which require a high reliability. A typical setup for
an SSVEP-based system involves using one or mul-
tiple LED lights to flicker at varying frequencies.
SSVEP is ideal for users where small eye movement
is allowed, users that are capable of sustained atten-
tion effort, and applications where small command
delays are allowed.
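To make the frequency-locked nature of SSVEP concrete, the sketch below (our own illustration, not code from any system described here) compares the spectral magnitude of a single-channel EEG buffer at a few candidate flicker frequencies using a naive discrete Fourier transform; the sampling rate and candidate frequencies are assumed values.

// Minimal SSVEP sketch: pick the candidate flicker frequency with the
// largest spectral magnitude in a single-channel EEG buffer.
// The sampling rate and candidate frequencies are illustrative assumptions.
public class SsvepSketch {
    // Magnitude of the DFT of 'samples' at frequency 'hz' (naive, O(n) per frequency).
    static double magnitudeAt(double[] samples, double hz, double samplingRate) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < samples.length; n++) {
            double angle = 2.0 * Math.PI * hz * n / samplingRate;
            re += samples[n] * Math.cos(angle);
            im -= samples[n] * Math.sin(angle);
        }
        return Math.sqrt(re * re + im * im);
    }

    // Return the candidate frequency whose spectral magnitude is largest.
    static double detectStimulusFrequency(double[] samples, double[] candidatesHz,
                                          double samplingRate) {
        double best = candidatesHz[0], bestMag = -1.0;
        for (double hz : candidatesHz) {
            double mag = magnitudeAt(samples, hz, samplingRate);
            if (mag > bestMag) { bestMag = mag; best = hz; }
        }
        return best;
    }

    public static void main(String[] args) {
        double fs = 128.0;                       // assumed sampling rate (Hz)
        double[] candidates = {8.0, 10.0, 12.0}; // assumed LED flicker frequencies
        double[] eeg = new double[256];
        for (int n = 0; n < eeg.length; n++)     // synthetic 10 Hz oscillation for the demo
            eeg[n] = Math.sin(2.0 * Math.PI * 10.0 * n / fs);
        System.out.println("Detected stimulus: "
                + detectStimulusFrequency(eeg, candidates, fs) + " Hz");
    }
}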
SSVEP could be applied in our setting, but we chose to perform our study with the Emotiv headset because it is commercially available and because we are interested in a user's emotional and physical responses during ordinary computer interaction rather than in responses to flickering stimuli.
2.1.2 P300
The second approach is called P300 Evoked Po-
tential. This wave is a component of an Event Related Potential that can be elicited by auditory, visual, or somatosensory stimuli [1]. P300 is one of the major
peaks in the ERP wave response. The presentation
of stimulus in an oddball paradigm can produce a
positive peak in the EEG, approximately 300ms af-
ter onset of the stimulus [2]. The triggered response
is called the P300 component of ERP.
P300 sensing nodes are placed along the center
of the skull and the back of the head. The wave
captured by the P300 component ranges from 2 to 5 µV in amplitude, and only lasts 150 to 200 ms [2].

Fig. 1. An example of a P300 system

Due to the
small nature of these measurements, one can imag-
ine that a significant amount of signal processing
must be done in order to get access to any sort of
meaningful data.
We show in Figure 1 a simple setup for clas-
sifying P300 data to implement a spelling system.
EEG Data is first acquired, and then sent for Pre-
Processing. In this step, noise is removed from the
gathered signal. After that, a Principal Component
Analysis is run in order to highlight the signals
that contribute the most, which are then fed into
a classifier.
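The pipeline in Figure 1 can be sketched roughly as follows. This is a minimal illustration assuming the principal components and classifier weights were computed offline during training; it is not the actual implementation behind the figure.

// Sketch of the Figure 1 pipeline: preprocess -> project onto precomputed
// principal components -> linear classifier. Components and weights are
// assumed to come from an offline training step; values are placeholders.
public class P300PipelineSketch {
    // Crude "pre-processing": remove the mean of one EEG epoch.
    static double[] removeMean(double[] epoch) {
        double mean = 0;
        for (double v : epoch) mean += v;
        mean /= epoch.length;
        double[] out = new double[epoch.length];
        for (int i = 0; i < epoch.length; i++) out[i] = epoch[i] - mean;
        return out;
    }

    // PCA step: project the epoch onto precomputed principal components.
    static double[] project(double[] epoch, double[][] components) {
        double[] features = new double[components.length];
        for (int c = 0; c < components.length; c++)
            for (int i = 0; i < epoch.length; i++)
                features[c] += components[c][i] * epoch[i];
        return features;
    }

    // Classifier step: simple linear decision rule on the projected features.
    static boolean isP300(double[] features, double[] weights, double bias) {
        double score = bias;
        for (int i = 0; i < features.length; i++) score += weights[i] * features[i];
        return score > 0;
    }
}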
In order to understand the basis for P300-based
BCI, we will look at the speller in the system shown
in Figure 1. The user is presented with a six by
six grid, and instructed to focus on the letter that
they would like to choose. The rows of the table
are then randomly flashed, which evokes a P300-
response when the row the user is focusing on lights
up. This process is then repeated for the columns,
which allows the system to narrow down the letter
the user is interested in.
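The selection step itself reduces to picking the row and column whose flashes produced the strongest P300 scores, as in this hypothetical sketch (the grid contents and the source of the scores are illustrative).

// Hypothetical sketch of the six-by-six P300 speller selection logic:
// the row and column whose flashes evoke the strongest P300 scores
// determine the chosen letter. Scores would come from the EEG classifier.
public class SpellerSketch {
    static final char[][] GRID = {
        {'A','B','C','D','E','F'},
        {'G','H','I','J','K','L'},
        {'M','N','O','P','Q','R'},
        {'S','T','U','V','W','X'},
        {'Y','Z','1','2','3','4'},
        {'5','6','7','8','9','0'}
    };

    // rowScores[i] / colScores[j]: P300 classifier score for the i-th row / j-th column flash.
    static char selectLetter(double[] rowScores, double[] colScores) {
        return GRID[argMax(rowScores)][argMax(colScores)];
    }

    static int argMax(double[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++)
            if (scores[i] > scores[best]) best = i;
        return best;
    }
}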
From this simple example, we can see that P300
is a very robust system for BCI, but unfortunately is quite slow. Even so, more recent spelling systems still utilize some variation on this simple spelling paradigm.
For our purposes, P300-based sensing could be applied to any of the decision-based tasks, but it does not give us the emotional responses we need to determine how a user is feeling at a given time. As with SSVEP, the commercial availability of the Emotiv headset further motivated our decision to use it.
2.2 Related Work
Many applications have been developed for use
with EEG headsets. These applications extend into
the realm of web browsers, gaming systems, and
even mobility control systems [3]. In fact, there has
been some work done to connect these brain sensing
methods to mobile phones, in order to interact
with the smaller devices [4]. Such a wide array
of applications highlights the desire for a deeper
understanding of the way our brains interact with
computers.
Using the Emotiv EPOC headset specifically, the
authors in [5] utilized the gyroscope in the headset
in order to control the movement of a wheelchair.
The system was developed to move the chair using
either one head motion or four head motions. This
system shows that the headset is powerful enough
to control larger scale applications, and can be effec-
tive enough for day to day use.
In [6], the authors develop a pointing device
similar in functionality to a mouse for use by
quadriplegic users. They were able to emulate stan-
dard mouse movement, and easily able to teach
this new system to quadriplegic individuals. This
shows us that a gyroscope-based system can have practical application both for average day-to-day users and as an aid for disabled users.
Another area of interest in BCI research is the
need to know when a user is attempting to access
the system. Because of the inherent always-active
nature of the human brain, there needs to be a way
to turn the system on and off. Researchers have attempted to solve this problem with Gaussian probability models [7], but the approach is mathematically involved. Instead, we attempt to show that
there may be better ways of doing this, based on
the natural responses of the human brain during
normal interaction with a computer. Specifically, we
attempt to find a facial movement to accurately and
reliably turn the EEG listening system on and off.
2.3 Equipment
We use the Emotiv EPOC (seen in Figure 2) Headset
for our testing. The EPOC comes with fourteen sen-
sors and two reference points, and transmits data
wirelessly. The headset uses the sequential sampling
method, which for our purposes entails using the
data available at every time-step to extract features
for that time-step. The EPOC headset connects wirelessly to transmit the data back to an aggregator. There, signal processing is done to reduce
the noise, a simple Principal Component Analysis is
done, and the classification of the features is passed
to the user either via the Emotiv Control Panel, or
through their open API.
The Emotiv headset is capable of measuring a
number of things.

Fig. 2. The Emotiv EPOC Headset

The built-in two-axis gyroscope provides accurate sensing of any head movement
from the user. The Emotiv API allows access to
a number of other classifications in the emotional
and physical ranges, termed Affectiv and Expressiv
respectively. The headset also measures the user's
intent to perform specific trained tasks through the
Cognitiv Suite.
The Expressiv suite offers access to physical
events. These events are limited to Blink, Right
Wink, Left Wink, Look Right, Look Left, Look
Up, Look Down, Raise Brow, Furrow Brow, Smile,
Clench, Right Smirk, and Left Smirk. For each
of these actions, when the headset classifies one as
occurring, a signal is sent to the application, which
we then log. All events operate on a binary scale,
either they happen or they do not, except for the
brow events, which give a measure of extent. The
sensitivity of the system to each of these events can
be changed, but for our study we use the default
configuration in order to give the best representa-
tion of a non-trained user.
The Affectiv suite reports real time changes in
the subjective emotions experienced by the user.
This suite looks for universal brainwave charac-
teristics, and stores this data so the results can be
rescaled over time. We select the new user option
for every study participant so as not to bias our results
with Emotiv’s learning system.
The Affectiv suite offers analysis of five different emotions: Short-Term Excitement, Long-Term Excitement, Frustration, Engagement, and Meditation. Short-Term Excitement is a measure of posi-
tive physiological arousal. Related emotions to this
include titillation, nervousness and agitation. Long-
Term Excitement is similar to short-term, but the
detection is fine tuned to changes in excitement over
a longer period of time.

Fig. 3. Gyroscope data logged for positive / negative head motions (gyroscope reading over time in ms, X and Y axes).

The definition for the Frustration measurement parallels the emotion experienced in everyday life. Engagement is considered
to be experienced as alertness and the conscious
direction of attention towards task related stim-
uli. Related emotions include alertness, vigilance,
concentration, stimulation and interest. Meditation
is defined as the measurement of how relaxed a
person is.
In the Cognitiv Suite, the headset allows for
training of thirteen different actions, not limited to
push, pull, rotate and disappear. This suite works
by training a classifier with how a user’s brain
responds when thinking about these specific ac-
tions. Then, the classifier listens for these patterns
to classify up to four of these actions at the same
time. We do not use the Cognitiv Suite in our study,
but note that for actions like clicking a mouse or
turning the volume down in the music player, this
suite would be very useful.
While the Emotiv headset is very good, it is a consumer-grade device at a consumer price point. We wanted to ensure that
our results were simple enough to be achievable
without requiring users to own elaborate, bulky
EEG systems. As a result, some of the recognition
promised by the Emotiv headset is not up to the
standards we could have hoped for. Our user study
looks to determine which actions are best recog-
nized and therefore are best suited for integration
into a reliable application.
2.4 Key Features
In our testing with the Emotiv headset, we came
across a few key observations. The Emotiv headset
does a few things very well, and other things not
quite as well.

Action               Percentage of Successes
Raise Eyebrows             80.00%
Blink                      20.00%
Left Wink                  10.00%
Right Wink                 10.00%
Look in Direction           6.00%

Fig. 4. True positive accuracy of different Expressiv motions using the Emotiv EPOC headset.

Fig. 5. The main user interface for control over the EEG logging application.

We tested the Expressiv suite for one
person while learning how to use the headset for
our user study. As you can see in Figure 4, the true
positive rates for blinking and eye direction are well
below the accuracy one might require in a real-time
application. That being said, the eyebrow motion
detection is much stronger. We still record data for
all these features with our users, as it is possible that
other users could have better results.
In addition to the fourteen sensors, the Emotiv EPOC headset contains a built-in two-axis gyroscope. In researching the ways that humans respond to questions in everyday conversation, we noticed that head motion played a large role in determining a user's response from phys-
ical cues alone. We tested the Emotiv gyroscope
and extracted data for someone nodding positively,
negatively, and vigorously nodding positively and
negatively. As Figure 3 clearly shows, it is not
challenging to discern how a user is responding to
a question based on the gyroscope data alone. As
a result, a significant portion of our study will be
based on utilizing the gyroscope to extract human
response information.
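As a simple illustration of how such gyroscope data could be turned into a yes / no decision, the sketch below compares accumulated absolute deltas on the two axes; the window contents and the minimum-motion threshold are assumed values, not the exact rule used later in our implementation.

// Sketch: decide whether a head-motion window looks like a nod (yes, Y axis)
// or a shake (no, X axis) by comparing accumulated absolute gyroscope deltas.
// Window length and the minimum-motion threshold are illustrative assumptions.
public class HeadMotionSketch {
    enum Response { YES, NO, NONE }

    static Response classify(int[] deltaX, int[] deltaY, double minMotion) {
        double sumX = 0, sumY = 0;
        for (int dx : deltaX) sumX += Math.abs(dx);
        for (int dy : deltaY) sumY += Math.abs(dy);
        if (Math.max(sumX, sumY) < minMotion) return Response.NONE; // no clear head motion
        return (sumY > sumX) ? Response.YES : Response.NO;          // nod vs. shake
    }
}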
3 USER STUDY
3.1 Study Background
We implement a testing application in Java with two
parts. The first is a logging based system, which
extracts the classification of the EEG data and logs
this data.

Fig. 6. A sample question a user would see throughout the study. In total the user is asked 10 questions.

We log emotional data for Engagement, Frustration, Short Term Excitement, Long Term Excitement, and Meditation. We also log the delta
of the motion in the X and Y directions of the
gyroscope. Finally, we log the Expressiv responses
of the participant, which include Eyebrow motion,
blinking, winking with each eye, and directional
eye motion. Of all the Expressiv responses, only
the Eyebrow measurement is a measure of extent,
ranging from 0 to 1, while the rest are binary, either
they happened or they did not at each time-frame.
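For reference, one logged time frame in such a system might be shaped as follows; the field names are our own illustration, not the actual schema of the logging application.

// Illustrative shape of one logged time frame (field names are ours, not the
// application's actual schema). Expressiv events are booleans except the brow
// extent, which ranges from 0 to 1; Affectiv values are normalized to [0,1].
public class LogFrame {
    long timestampMs;
    // Affectiv
    double engagement, frustration, shortTermExcitement, longTermExcitement, meditation;
    // Gyroscope deltas since the previous frame
    int gyroDeltaX, gyroDeltaY;
    // Expressiv
    double browExtent;          // 0..1, measure of extent
    boolean blink, leftWink, rightWink, lookLeft, lookRight, lookUp, lookDown;
}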
The second part is a simple survey, run at the
same time as the logging application. Participants
are asked a series of yes - no questions (presented
in full in Appendix A), and they are asked to re-
spond using head motions and audible cues. Upon
answering ten yes - no questions, they are asked to
specifically perform each of the Expressiv physical actions.
This is to both test the accuracy of the headset for
multiple people and get a baseline for analyzing
if any of these physical responses are visible in
searching for positive or negative responses from
our participants.
Yes - no questions appear on screen for ten
seconds, and participants have five seconds be-
tween questions. Minimal instruction is given to the
participants, and the questions are not seen until
the session begins. The questions are specifically
written so that there can be no gray area in regards
to their answer. After the session, the responses of
the participant are recorded so that we have a ground truth label for each question.
Nine users participated in our study. All par-
ticipants were willing, and signed a consent form
before participating. The pool was made up of four
males and five females, ranging from undergradu-
ates, graduates, and faculty of The College. Most
users were very excited to have their brain signals
looked at, which we do note when looking at the
collected data.
4 RESULTS AND ANALYSIS
Of the nine participants, eight provided quality re-
sults. Only one participant’s data had to be thrown
out due to insufficient signal quality throughout the
experiment. The participant had long, thick hair,
which may have been the reason for the poor signal
quality. While that participant's data was lost, we did learn that the Emotiv headset has limitations at the hardware level. Also, in each of our figures here, we
show the data from one participant instead of all
eight. This is because the trends we point out are
easier to observe in one participant, and are seen
across all participants.
4.1 Collected Data
We plot the gyroscope data for the entire survey in
Figure 7 and the Affectiv data in Figure 9.
4.1.1 Gyroscope Data
From the Gyroscope figure, we can see that it is
easy to discern what the user is answering. This
is because nodding one's head in affirmation and shaking it in negation are actions that occur on separate axes. So from an affirmative / negative perspective, recognizing a user's answer is, as expected, quite easy. The interesting result comes
from a closer analysis of the gyroscope data. When
a user is more emphatic about their response, they
shake their head more vigorously, which shows
up in the gyroscope delta. We zoom in on two
different yes responses for one participant in Figure
8. The two questions asked here were questions
three and four, or ”Have you ever been to the state
of Virginia?” and ”Do you like chocolate?”. Clearly
the first response is less emphatic than the second,
which aligns with how we expected participants to
answer. Because the study was entirely conducted
in the state of Virginia, it makes sense that a participant (assuming they like chocolate) would affirm their taste for chocolate more emphatically than the fact that they have been to Virginia.
This relation also manifests itself when looking
at the average gyroscope delta for both responses.
When looking at the two questions, the first had
an average Y magnitude of 32.74, while the second
had an average Y magnitude of 56.54. Because we
can find such a difference, we can further divide
responses into strong no, no, yes, and strong yes.
We will use this information further in the imple-
mentation section.
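The per-answer statistic quoted above can be computed as in the following minimal sketch, assuming the samples belonging to one answer window have already been isolated.

// Average absolute Y-axis gyroscope delta over one answer window; larger
// values correspond to more emphatic nods (e.g. the 32.74 vs. 56.54 quoted above).
public class AnswerMagnitudeSketch {
    static double averageYMagnitude(int[] deltaY) {
        double sum = 0;
        for (int d : deltaY) sum += Math.abs(d);
        return sum / deltaY.length;
    }
}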
Fig. 7. Graph of collected gyroscope data over time for one study
participant. The blue lines are the readings for the X axis, and
the orange for the Y axis. It is easy to tell that when the user is answering yes to a question, the Y axis reports more motion, and that the X axis reports more motion when the user is answering no.
Fig. 8. Graph of one participant’s answers to two individual
questions. The first answer is to the question ”Have you ever
been to the state of Virginia?” and the second is to the question
”Do you like chocolate?”.
4.1.2 Affectiv Data
Looking at the Affectiv Data for one participant,
we can see the various emotions throughout the
entire survey. As we would expect, the Long-Term
Excitement and Short-Term Excitement scores both
drop as the survey goes on. Initially, users are excited to be using EEG sensing equipment, likely
for the first time. But once the novelty wears off,
the mundane task of answering simple questions
and moving features of their face around becomes
boring, and the excitement scores drop off.

Fig. 9. Graph of the collected Affectiv data (Frustration, Short-Term Excitement, Long-Term Excitement) over the course of the study. The emotional data is normalized to the [0,1] range, with 1 being a strong feeling and 0 being no feeling.
Frustration rises and falls throughout the survey,
but peaks the most near the end for all participants.
Because the frustration score can be paralleled with
boredom, once the user hits the end of the study,
all of the novelty has worn off, and their brain is
resetting to a more natural pattern. It is expected
that answering ten simple questions and repeating
eight actions would wear on most people over time.
We did mention meditation and engagement
when discussing the Emotiv Affectiv Suite. Every participant recorded constant data throughout the survey for these two
emotions. At this time we are not sure if this was
due to a hardware malfunction with the specific
headset we tested on, a weak signal strength, or
some other unforeseen error.
4.1.3 Expressiv Data
In addition to logging the gyroscope data and emo-
tional data, we looked to verify our hypothesis
about the facial motions recognized by the Emotiv
Headset. We plot the recording of blinking and eye-
brow events in Figure 11 and Figure 10 respectively.
From these we look to see if either motion could
be used to accurately and consistently turn a music
player on and off.
First looking at Figure 11, we can see that there
is no periodic pattern. We also note that there is
no significant difference between the number of
blinks recorded at a given time compared to the
number recorded in the ten second period where the
participant was consciously, continuously, blinking.
Fig. 10. Graph of recorded brow raise magnitudes throughout
the study. This magnitude is on a zero to one scale, and is a
measure of extent. The red outline is where we asked the user to
repetitively raise their brow. We can see that there is a significant
increase in brow motion in this time period.
Fig. 11. Graph of recorded blink events over the course of the
survey. The blink event is recorded on a binary scale; either it
happens or it does not. The red outline is where we asked the
user to repetitively blink their eyes. We can see that there are
only 5 or 6 blinks recorded in this time frame, far fewer than the
number of times the user blinked.
This further supports our original hypothesis that
blinking would not be an accurate facial motion to
be used with any sort of consistency.
On the other hand, Figure 10 shows the extent
to which the Emotiv headset recorded the brow
being raised. By looking at the graph we can see
the exact period when the participant was asked
to repeatedly raise their eyebrows. Outside of this period, the recorded eyebrow extent remains relatively controlled. This supports our hypothesis that eyebrow motion could be used as
an accurate state transition trigger.
We also looked at the recording of winking
with individual eyes in addition to the recording
of which way a person is looking. Neither action
returned significant data so we do not include the
figures in this report. We determined that neither
motion would provide consistent or accurate results
as a trigger for state changes.
4.2 Statistical Analysis
As part of our analysis on emotions, we looked at
the statistical correlation between positive answers and short-term excitement, in addition to the cor-
relation between negative answers and frustration.
We would expect both to have positive relation-
ships.
We ran a Pearson’s Correlation test on Frustra-
tion, Short-Term Excitement, yes answers and no
answers. We took the absolute value of the gyro-
scope data to end up with a function which is larger
when the participant was answering. We can see
the results of our analysis in Figure 12. When the
Pearson Correlation Coefficient is close to one it
signifies a positive, linearly correlated relationship,
and a value close to negative one implies the opposite. As we can see, the only notable relationship is a moderate positive correlation between short-term excitement and yes answers. While this correlation is not strong enough to be used in an implementation on its own, it is worth noting for future work.
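For completeness, the Pearson coefficient used for the pairs reported in Figure 12 can be computed from two equally long logged series as follows.

// Pearson correlation coefficient between two equally long series, as used
// for the pairs reported in Figure 12 (e.g. short-term excitement vs. the
// absolute Y-axis gyroscope data).
public class PearsonSketch {
    static double pearson(double[] a, double[] b) {
        int n = a.length;
        double meanA = 0, meanB = 0;
        for (int i = 0; i < n; i++) { meanA += a[i]; meanB += b[i]; }
        meanA /= n; meanB /= n;
        double cov = 0, varA = 0, varB = 0;
        for (int i = 0; i < n; i++) {
            double da = a[i] - meanA, db = b[i] - meanB;
            cov += da * db; varA += da * da; varB += db * db;
        }
        return cov / Math.sqrt(varA * varB);
    }
}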
We also look at the average magnitude of the
motion collected by the gyroscope. In Figure 13 we
plot the average magnitudes of the motion we col-
lect for all users. We only look at data greater than
40 inertial units because that allows us to see on
average how much a user moves their head while
answering a question specifically. As we can see,
yes answers, on average, require more motion than
no answers. This makes sense when we consider
how the human body has a larger range of motion
looking up and down compared to left and right.
We use this information in our implementation of
the results classification system to set thresholds for
each rating. We also see that in developing a system,
it would make sense to train a classifier for each
individual. Because people move in different ways,
a classifier would allow the ratings to be tailored to
the individual. We leave this for future work.
              Frustration   Excitement     No        Yes
Frustration      1.00000
Excitement      -0.16022      1.00000
X = No           0.04362      0.07417    1.00000
Y = Yes         -0.05421      0.31528    0.10533   1.00000
Fig. 12. The Pearson Coefficient’s of Frustration and Short-Term
Excitement compared with X and Y axis gyroscope data.
Fig. 13. The average magnitude of yes answers and no answers
for each user. We can see that yes answers require more motion
in the head on average.
5 IMPLEMENTATION
We implement our findings in a Brain-Controlled
music player.
5.1 Existing Application
As a result of the study, we implement our response
classification system and state transition classifier
on top of a music player. The existing architec-
ture is shown in Figure 14. The base structure is
built on top of the .NET Windows Media Player,
which is shown in Figure 15. The system connects
to the Emotiv EEG headset using the Emotiv API. Once we have access to the headset, we extract both the raw EEG data and the classifications that the Emotiv signal processing provides; both data sets are stored in a database. This information,
in addition to the rating system we discuss next, is
compiled to drive a song recommendation engine.
When the media player loads the first and subse-
quent songs to play, the recommendation engine
drives this decision process.
The user has the option to select between Work,
Study, and Relax modes, and music is played which
matches their mood for each mode.

Fig. 14. The existing architecture of the BCI controlled music player. Input is collected through the Emotiv headset, stored in the database, and used to recommend songs for the user to listen to.

Fig. 15. The user interface of the Brain Controlled Music Player.

In addition, the
player starts in a mode which matches the time
of day, so the initial music played is more likely
to match what the user would like to listen to.
The information that drives the selection engine is
extracted from a series of ratings the user gives each
song once it is done playing. The rating is done on
a 1 to 5 scale, which ranges from ”I hated this song”
to ”I loved this song”. In addition to taking the
pure rating, the user's Arousal and Valence levels
are recorded using the Emotiv headset, and all of
this is compiled to rate the song for the given mode.
This information is in turn used to select songs for
the user as the player is used more.
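A rough sketch of how such a combined score could be formed is shown below; the weighting is our own assumption for illustration, not the player's actual formula.

// Illustrative sketch (weights are our own assumptions, not the player's actual
// formula): combine the explicit 1-5 rating with arousal and valence recorded
// while the song played into a single per-mode score for the recommender.
public class SongRatingSketch {
    static double combinedScore(int explicitRating, double arousal, double valence) {
        double normalizedRating = (explicitRating - 1) / 4.0;  // map 1..5 to 0..1
        return 0.6 * normalizedRating + 0.2 * arousal + 0.2 * valence;
    }
}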
5.2 On - Off Switch
As we discussed, one of the things we were looking
to evaluate in the user study was which facial event
would provide reliable and accurate recognition for
use turning the Emotiv system on and off. We deter-
mined from the study that the Emotiv system best
recognizes the motion of the brow, and this is what
we add to the existing music player application.
While the original goal was a switch to turn the
system on and off, there is no reason at the moment
to stop recording EEG data while the player is
running. Instead, we allow the user to pause and
unpause the music player using two raises of their
eyebrows. We have found this works with reliable
consistency while using the application. In addition,
we have shown that this can easily be extended to
an application which relies more heavily on brain
signals as input, and could quite easily be added to
tell the EEG headset to start and stop listening.
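A minimal sketch of this double brow-raise toggle, assuming brow-extent samples arrive as a stream, is shown below; the extent threshold and the time window allowed between the two raises are assumed values.

// Sketch of the pause/unpause trigger: two brow raises within a short window
// toggle playback. The extent threshold and window length are assumed values.
public class BrowToggleSketch {
    private final double extentThreshold = 0.5;   // brow extent counted as a "raise"
    private final long windowMs = 2000;           // both raises must fall in this window
    private boolean aboveThreshold = false;       // are we currently inside a raise?
    private long firstRaiseMs = -1;
    private boolean playing = true;

    // Feed one brow-extent sample (0..1); returns true when playback toggles.
    boolean onBrowSample(double extent, long nowMs) {
        boolean raised = extent >= extentThreshold;
        boolean newRaise = raised && !aboveThreshold;   // rising edge of a raise
        aboveThreshold = raised;
        if (!newRaise) return false;
        if (firstRaiseMs >= 0 && nowMs - firstRaiseMs <= windowMs) {
            firstRaiseMs = -1;
            playing = !playing;                         // second raise: toggle the player
            return true;
        }
        firstRaiseMs = nowMs;                           // first raise of a possible pair
        return false;
    }
}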
5.3 Gyroscope Based Rating Dialog
As we learned in the user study, head motions such
as nodding and shaking the head are both quite
easy to detect and easy to extract more information
than yes and no out of. Using this information, we
implement a classification scheme for the user to
answer how much they enjoyed listening to a song.
In the existing implementation this dialog appears
when the user stops listening to a song, either by
pressing the ”next” button or when the song ends.
We extend this usage scenario by asking the user
simply if they liked the song that was playing. This
question can be seen in Figure 16.
Structurally, the main thread spawns off two
children, one to show the prompt and one to listen
for the gyroscope. Access to the EEG headset is
passed through the BCIEngine which is responsible
for recording the BCI data about the song playing.
Once we have access to the headset, it is easy to
extract the gyroscope delta and average the absolute
value of this over time. Once enough data has been
collected (we select 300 as a constant number of
points) we classify the response into either a strong
no, no, neutral, yes, or a strong yes. This is repre-
sented to the user in the dialog box shown in Figure
17. The user is able to simply accept the rating by
waiting five seconds or clicking the yes button, or,
if the system did not correctly classify how they felt
about the song, the user can change the rating.
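A sketch of this classification step is shown below; the sample count matches the 300 points mentioned above, while the thresholds are illustrative placeholders (as noted below, static thresholds are not optimal across users).

// Sketch of the rating classifier: average the absolute gyroscope deltas over a
// fixed number of samples (300 in our implementation) and bucket the result into
// strong no / no / neutral / yes / strong yes. The thresholds are illustrative.
public class GyroRatingSketch {
    enum Rating { STRONG_NO, NO, NEUTRAL, YES, STRONG_YES }

    static Rating classify(int[] deltaX, int[] deltaY,
                           double neutralThreshold, double strongThreshold) {
        double avgX = averageAbs(deltaX), avgY = averageAbs(deltaY);
        if (Math.max(avgX, avgY) < neutralThreshold) return Rating.NEUTRAL;
        if (avgY > avgX) return (avgY >= strongThreshold) ? Rating.STRONG_YES : Rating.YES;
        return (avgX >= strongThreshold) ? Rating.STRONG_NO : Rating.NO;
    }

    static double averageAbs(int[] values) {
        double sum = 0;
        for (int v : values) sum += Math.abs(v);
        return sum / values.length;
    }
}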
The overall rating structure works very well. For users with enough variance between a standard answer and a strong answer, the system accurately classifies their rating of the song to within one point every time. However, we do notice from our data
in the user study that setting static thresholds for
the different classifications is not optimal. We leave
training the system for individual use as future work.

Fig. 16. The dialog that appears when the user finishes playing a song. The system listens for a response from the user or waits five seconds before resolving to the default neutral value.

Fig. 17. The verification dialog that appears once the system has classified the user's response. The user is allowed to change the rating if they do not believe the system correctly rated the song.
6 CONCLUSION
We have explored the use of the Emotiv EEG head-
set as a means for interacting with a computer.
While we were unable to find significant correla-
tions between reported brain activity and the an-
swers to simple yes and no questions, we were able
to determine that one’s head motions can provide a
reasonable scale of agreement and disagreement. In
addition, we learned that the Emotiv headset does
best when listening for motion in the brow. Utilizing
these two facts we implement an on-off switch and
rating classification system on top of an existing BCI
music player. Both of these contributions can easily
be extended to a myriad of different applications,
and have been shown to work. We are hopeful that
our research will be utilized to improve future brain
computer interfaces as both hardware capabilities
and consumer demands increase in the years to
come.
6.1 Future Work
There are a few areas in which we could extend our
current research. Because the user study was done
using one Emotiv EEG headset, we would like to
test these concepts using other commercial headsets
to see if we can find more significant correlations
between mood and how a user answers a question.
As EEG research becomes a bigger field in com-
puter science, more data will need to be collected for
a variety of different headsets to fully understand
what parts of the brain fire for specific computer
interactions.
We also would like to use the emotional data and
gyroscope data to improve our rating classification
system in the music player. By analyzing how a
user's head is moving while they listen to a song, it might be possible to adjust our rating system to an even finer-grained scale to truly understand when
a user liked a certain song. Moreover, if we can
combine these results with the fluctuations in mood,
we might be able to come up with an even stronger
guess of how the user felt about a particular song.
APPENDIX A
SURVEY GIVEN TO PARTICIPANTS
The survey as presented is shown here. Each ques-
tion / action was displayed to the participant for ten
seconds before disappearing. Another five seconds
elapsed between questions.
1) Have you had a meal in the last twenty four
hours?
2) Have you left the country in the last twenty
four hours?
3) Have you ever been to the state of Virginia?
4) Do you like chocolate?
5) Have you ever been to Havana?
6) Can you run a mile in under 5 minutes?
7) Can you ride a bike?
8) Can you whistle?
9) Have you ever purchased a television?
10) Did you have coffee this morning?
11) Please move your eyebrows up and down
for ten seconds.
12) Please blink slowly for ten seconds.
13) Please wink with your left eye only for ten
seconds.
14) Please wink with your right eye only for ten
seconds.
15) Please use your eyes to look left and back to
center. Repeat this for ten seconds.
16) Please use your eyes to look right and back
to center. Repeat this for ten seconds.
17) Please use your eyes to look up and back to
center. Repeat this for ten seconds.
18) Please use your eyes to look down and back
to center. Repeat this for ten seconds.
REFERENCES
[1] F. Beverina, G. Palmas, S. Silvoni, F. Piccione, and S. Giove,
“User adaptive bcis: SSVEP and P300 based interfaces,”
PsychNology Journal, 2003.
[2] S. Amiri, A. Rabbi, L. Azinfar, and R. Fazel-Rezai, “A
review of P300, SSVEP and hybrid P300/SSVEP brain-
computer interface systems,” Biomedical Image and Signal
Processing Laboratory, Department of Electrical Engineering,
University of North Dakota, 2013.
[3] M. Jackson and M. Rudolph, “Applications for brain-
computer interfaces,” IEEE, 2010.
[4] A. Stopczynski, J. E. Larsen, C. Stahlhut, M. K. Petersen,
and L. K. Hansen, “A smartphone interface for a wireless
EEG headset with real-time 3d reconstruction.”
[5] E. J. Rechy-Ramirez, H. Hu, and K. McDonald-Maier,
“Head movements based control of an intelligent
wheelchair in an indoor environment,” IEEE, 2012.
[6] C. Pereira, R. Neto, A. Reynaldo, M. Canndida de Mi-
randa Luza, and R. Oliveira, “Development and evalua-
tion of a head-controlled human-computer interface with
mouse-like functions for physically disabled users,” Clinics,
2009.
[7] S. Fazli, M. Danoczy, F. Popescu, B. Blankertz, and K.-R.
Muller, “Using rest class and control paradigms for brain
computer interfacing,” IWANN, 2009.

More Related Content

What's hot

Brain fingerprint technology presentation
Brain fingerprint technology presentationBrain fingerprint technology presentation
Brain fingerprint technology presentation
Harsha Gundapaneni
 
TMS_Basics_Presentation_at_BangorUniversity
TMS_Basics_Presentation_at_BangorUniversityTMS_Basics_Presentation_at_BangorUniversity
TMS_Basics_Presentation_at_BangorUniversity
Marco Gandolfo
 
Brain fingerprinting techology by madhavi rao
Brain fingerprinting techology by madhavi raoBrain fingerprinting techology by madhavi rao
Brain fingerprinting techology by madhavi rao
smadhabi
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprintingSai Mahesh
 
Brain fingerprinting by ankit 2017............
Brain fingerprinting by ankit 2017............Brain fingerprinting by ankit 2017............
Brain fingerprinting by ankit 2017............
ankitg29
 
BRAIN FINGERPRINTING
BRAIN FINGERPRINTINGBRAIN FINGERPRINTING
BRAIN FINGERPRINTING
Aurobindo Nayak
 
Bft Abstract
Bft  AbstractBft  Abstract
Bft Abstract
sargam2010
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprinting
pgrr
 
Brain fingerprinting technology
Brain fingerprinting technologyBrain fingerprinting technology
Brain fingerprinting technologyMahan Senthil
 
Brain frequency based handicap wheelchair
Brain frequency based handicap wheelchairBrain frequency based handicap wheelchair
Brain frequency based handicap wheelchair
Dhanuaravinth K
 
Brain Finger Printing Technology
Brain Finger Printing TechnologyBrain Finger Printing Technology
Brain Finger Printing Technology
Yashu Cutepal
 
F1102024349
F1102024349F1102024349
F1102024349
IOSR Journals
 
BRAIN FINGERPRINTING_NIRMAL
BRAIN FINGERPRINTING_NIRMALBRAIN FINGERPRINTING_NIRMAL
BRAIN FINGERPRINTING_NIRMALNirmal Yadav
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprinting
Kommineni Pullarao
 
Brain finger printing
Brain finger printingBrain finger printing
Brain finger printing
Mohit Arora
 

What's hot (20)

Brain finger printing
Brain finger printingBrain finger printing
Brain finger printing
 
Brain fingerprint technology presentation
Brain fingerprint technology presentationBrain fingerprint technology presentation
Brain fingerprint technology presentation
 
TMS_Basics_Presentation_at_BangorUniversity
TMS_Basics_Presentation_at_BangorUniversityTMS_Basics_Presentation_at_BangorUniversity
TMS_Basics_Presentation_at_BangorUniversity
 
poster
posterposter
poster
 
Tech.ppt2
Tech.ppt2Tech.ppt2
Tech.ppt2
 
Bfp final presentation
Bfp final presentationBfp final presentation
Bfp final presentation
 
Brain fingerprinting techology by madhavi rao
Brain fingerprinting techology by madhavi raoBrain fingerprinting techology by madhavi rao
Brain fingerprinting techology by madhavi rao
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprinting
 
Report
ReportReport
Report
 
Brain fingerprinting by ankit 2017............
Brain fingerprinting by ankit 2017............Brain fingerprinting by ankit 2017............
Brain fingerprinting by ankit 2017............
 
BRAIN FINGERPRINTING
BRAIN FINGERPRINTINGBRAIN FINGERPRINTING
BRAIN FINGERPRINTING
 
Bft Abstract
Bft  AbstractBft  Abstract
Bft Abstract
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprinting
 
Brain fingerprinting technology
Brain fingerprinting technologyBrain fingerprinting technology
Brain fingerprinting technology
 
Brain frequency based handicap wheelchair
Brain frequency based handicap wheelchairBrain frequency based handicap wheelchair
Brain frequency based handicap wheelchair
 
Brain Finger Printing Technology
Brain Finger Printing TechnologyBrain Finger Printing Technology
Brain Finger Printing Technology
 
F1102024349
F1102024349F1102024349
F1102024349
 
BRAIN FINGERPRINTING_NIRMAL
BRAIN FINGERPRINTING_NIRMALBRAIN FINGERPRINTING_NIRMAL
BRAIN FINGERPRINTING_NIRMAL
 
Brain fingerprinting
Brain fingerprintingBrain fingerprinting
Brain fingerprinting
 
Brain finger printing
Brain finger printingBrain finger printing
Brain finger printing
 

Viewers also liked

Surgical extraction of suprnumeraray tooth.
Surgical extraction of suprnumeraray tooth.Surgical extraction of suprnumeraray tooth.
Surgical extraction of suprnumeraray tooth.
Dr. Roshni Maurya
 
Places of remembrance in spain
Places of remembrance in spainPlaces of remembrance in spain
Places of remembrance in spainjose angel
 
Social and Technical Evolution of the Ruby on Rails Software Ecosystem
Social and Technical Evolution of the Ruby on Rails Software EcosystemSocial and Technical Evolution of the Ruby on Rails Software Ecosystem
Social and Technical Evolution of the Ruby on Rails Software Ecosystem
econst
 
Rotational control in orthodontics
Rotational control in orthodonticsRotational control in orthodontics
Rotational control in orthodontics
Indian dental academy
 
Surgical orthodontics / oral surgery courses
Surgical orthodontics / oral surgery courses  Surgical orthodontics / oral surgery courses
Surgical orthodontics / oral surgery courses
Indian dental academy
 
Frankel functional appliance
Frankel functional applianceFrankel functional appliance
Frankel functional appliance
Indian dental academy
 
Orthodontic correction of rotated tooth
Orthodontic correction of rotated toothOrthodontic correction of rotated tooth
Orthodontic correction of rotated tooth
Dr. Roshni Maurya
 
Palatal crib
Palatal cribPalatal crib
Palatal crib
Dr. Roshni Maurya
 
Down's analysis/certified fixed orthodontic courses by Indian dental academy
Down's analysis/certified fixed orthodontic courses by Indian dental academy Down's analysis/certified fixed orthodontic courses by Indian dental academy
Down's analysis/certified fixed orthodontic courses by Indian dental academy
Indian dental academy
 
Diagnosis & treatment planning
Diagnosis & treatment planningDiagnosis & treatment planning
Diagnosis & treatment planning
Indian dental academy
 
Facebow / dental courses
Facebow / dental coursesFacebow / dental courses
Facebow / dental courses
Indian dental academy
 
Removable partial denture theory and practice 2011
Removable partial denture  theory and practice 2011Removable partial denture  theory and practice 2011
Removable partial denture theory and practice 2011
Mostafa Fayad
 
Comprehensive orthodontics
Comprehensive orthodonticsComprehensive orthodontics
Comprehensive orthodontics
Indian dental academy
 
Analisis mecanico del anclaje
Analisis mecanico del anclajeAnalisis mecanico del anclaje
Analisis mecanico del anclajeLeonardo Gualán
 
Accelerated orthodontic tooth movement
Accelerated orthodontic tooth movementAccelerated orthodontic tooth movement
Accelerated orthodontic tooth movement
Dr.Aisha Khoja
 
Videoimaging /certified fixed orthodontic courses by Indian dental academy
Videoimaging /certified fixed orthodontic courses by Indian dental academy  Videoimaging /certified fixed orthodontic courses by Indian dental academy
Videoimaging /certified fixed orthodontic courses by Indian dental academy
Indian dental academy
 
Rotation of teeth & its management
Rotation of teeth & its managementRotation of teeth & its management
Rotation of teeth & its management
manas mokashi
 
Anatomia en radiografías panorámicas
Anatomia en radiografías panorámicasAnatomia en radiografías panorámicas
Anatomia en radiografías panorámicas
ortodiagnosticodigital
 
파이썬 Collections 모듈 이해하기
파이썬 Collections 모듈 이해하기파이썬 Collections 모듈 이해하기
파이썬 Collections 모듈 이해하기
Yong Joon Moon
 

Viewers also liked (20)

Surgical extraction of suprnumeraray tooth.
Surgical extraction of suprnumeraray tooth.Surgical extraction of suprnumeraray tooth.
Surgical extraction of suprnumeraray tooth.
 
Places of remembrance in spain
Places of remembrance in spainPlaces of remembrance in spain
Places of remembrance in spain
 
Social and Technical Evolution of the Ruby on Rails Software Ecosystem
Social and Technical Evolution of the Ruby on Rails Software EcosystemSocial and Technical Evolution of the Ruby on Rails Software Ecosystem
Social and Technical Evolution of the Ruby on Rails Software Ecosystem
 
Rotational control in orthodontics
Rotational control in orthodonticsRotational control in orthodontics
Rotational control in orthodontics
 
Surgical orthodontics / oral surgery courses
Surgical orthodontics / oral surgery courses  Surgical orthodontics / oral surgery courses
Surgical orthodontics / oral surgery courses
 
Frankel functional appliance
Frankel functional applianceFrankel functional appliance
Frankel functional appliance
 
Orthodontic correction of rotated tooth
Orthodontic correction of rotated toothOrthodontic correction of rotated tooth
Orthodontic correction of rotated tooth
 
Palatal crib
Palatal cribPalatal crib
Palatal crib
 
Down's analysis/certified fixed orthodontic courses by Indian dental academy
Down's analysis/certified fixed orthodontic courses by Indian dental academy Down's analysis/certified fixed orthodontic courses by Indian dental academy
Down's analysis/certified fixed orthodontic courses by Indian dental academy
 
Diagnosis & treatment planning
Diagnosis & treatment planningDiagnosis & treatment planning
Diagnosis & treatment planning
 
Facebow / dental courses
Facebow / dental coursesFacebow / dental courses
Facebow / dental courses
 
Removable partial denture theory and practice 2011
Removable partial denture  theory and practice 2011Removable partial denture  theory and practice 2011
Removable partial denture theory and practice 2011
 
Comprehensive orthodontics
Comprehensive orthodonticsComprehensive orthodontics
Comprehensive orthodontics
 
Analisis mecanico del anclaje
Analisis mecanico del anclajeAnalisis mecanico del anclaje
Analisis mecanico del anclaje
 
Accelerated orthodontic tooth movement
Accelerated orthodontic tooth movementAccelerated orthodontic tooth movement
Accelerated orthodontic tooth movement
 
Videoimaging /certified fixed orthodontic courses by Indian dental academy
Videoimaging /certified fixed orthodontic courses by Indian dental academy  Videoimaging /certified fixed orthodontic courses by Indian dental academy
Videoimaging /certified fixed orthodontic courses by Indian dental academy
 
Cementum
Cementum Cementum
Cementum
 
Rotation of teeth & its management
Rotation of teeth & its managementRotation of teeth & its management
Rotation of teeth & its management
 
Anatomia en radiografías panorámicas
Anatomia en radiografías panorámicasAnatomia en radiografías panorámicas
Anatomia en radiografías panorámicas
 
파이썬 Collections 모듈 이해하기
파이썬 Collections 모듈 이해하기파이썬 Collections 모듈 이해하기
파이썬 Collections 모듈 이해하기
 

Similar to An Exploration on the Potential of an Electroencephalographic Headset for Human Computer Interaction

Mind control device
Mind control deviceMind control device
Mind control device
Soumik Sinha
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
ijistjournal
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
ijistjournal
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
ijistjournal
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
ijistjournal
 
IRJET- Depression Prediction System using Different Methods
IRJET- Depression Prediction System using Different MethodsIRJET- Depression Prediction System using Different Methods
IRJET- Depression Prediction System using Different Methods
IRJET Journal
 
Ab044195198
Ab044195198Ab044195198
Ab044195198
IJERA Editor
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
ijistjournal
 
IRJET- Review on Depression Prediction using Different Methods
IRJET- Review on Depression Prediction using Different MethodsIRJET- Review on Depression Prediction using Different Methods
IRJET- Review on Depression Prediction using Different Methods
IRJET Journal
 
Brain Finger Printing
Brain Finger PrintingBrain Finger Printing
Brain Finger PrintingGarima Singh
 
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
Editor IJCATR
 
Braingate
BraingateBraingate
Braingate
Karthik
 
Brain fingerprinting report
Brain fingerprinting reportBrain fingerprinting report
Brain fingerprinting reportSamyuktha Rani
 
BRAIN GATE
BRAIN GATEBRAIN GATE
BRAIN GATE
Namrata Koley
 
Braingate
BraingateBraingate
Braingate
sayalipatil528
 
Prediction Model for Emotion Recognition Using EEG
Prediction Model for Emotion Recognition Using EEGPrediction Model for Emotion Recognition Using EEG
Prediction Model for Emotion Recognition Using EEG
IRJET Journal
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
Final Project Report
Final Project ReportFinal Project Report
Final Project ReportAvinash Pawar
 
Password system that are mind control
Password system that are mind controlPassword system that are mind control
Password system that are mind control
vivatechijri
 

Similar to An Exploration on the Potential of an Electroencephalographic Headset for Human Computer Interaction (20)

Mind control device
Mind control deviceMind control device
Mind control device
 
Car Accident Avoider Using Brain Wave Sensor
Car Accident Avoider Using Brain Wave SensorCar Accident Avoider Using Brain Wave Sensor
Car Accident Avoider Using Brain Wave Sensor
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
 
IRJET- Depression Prediction System using Different Methods
IRJET- Depression Prediction System using Different MethodsIRJET- Depression Prediction System using Different Methods
IRJET- Depression Prediction System using Different Methods
 
Ab044195198
Ab044195198Ab044195198
Ab044195198
 
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGSMETHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
METHODS OF COMMAND RECOGNITION USING SINGLE-CHANNEL EEGS
 
IRJET- Review on Depression Prediction using Different Methods
IRJET- Review on Depression Prediction using Different MethodsIRJET- Review on Depression Prediction using Different Methods
IRJET- Review on Depression Prediction using Different Methods
 
Brain Finger Printing
Brain Finger PrintingBrain Finger Printing
Brain Finger Printing
 
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
Feature Extraction Techniques and Classification Algorithms for EEG Signals t...
 
Braingate
BraingateBraingate
Braingate
 
Brain fingerprinting report
Brain fingerprinting reportBrain fingerprinting report
Brain fingerprinting report
 
BRAIN GATE
BRAIN GATEBRAIN GATE
BRAIN GATE
 
Braingate
BraingateBraingate
Braingate
 
Prediction Model for Emotion Recognition Using EEG
Prediction Model for Emotion Recognition Using EEGPrediction Model for Emotion Recognition Using EEG
Prediction Model for Emotion Recognition Using EEG
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
Final Project Report
Final Project ReportFinal Project Report
Final Project Report
 
Password system that are mind control
Password system that are mind controlPassword system that are mind control
Password system that are mind control
 

This study is an attempt to see how the brain responds throughout interaction with computers. More specifically, we will look at how users' affective and expressive responses change as they are presented a series of yes and no questions. These binary questions can be considered similar to the confirmation dialogs often encountered on a computer. In addition, we will look to determine which facial expressions are best utilized as a state control trigger. The main contributions of this paper can be summarized as follows:

• We present a user study which analyzes how users' brains respond throughout interaction with a computer. This information becomes incredibly powerful when attempting to interconnect brain and computer. From this study we learn the typical brain patterns experienced by average users when they interact with a computer on an everyday basis.

• We implement a simple binary controller into an EEG based brain music player. This system allows users to make in-program choices, such as accepting a dialog popup, using the gyroscope in the headset they wear on their head.

• We implement an on - off controller to pause and unpause the music player. Users can raise their brow twice to achieve this, which adds the ability to operate the player hands free.

The rest of this paper is structured as follows. We present related work and our motivation in section (2), information on the user study in section (3), an analysis of the data in section (4), and finally the background of our implementation in the music player in section (5). We then conclude and present our future work.

2 MOTIVATION

While electroencephalography has been around for a number of years, the work done with regard to human computer interaction has not been explored to its full potential.

2.1 EEG Background

While many people have heard of electroencephalography, it certainly is not an everyday term. In order to grasp the limitations of EEG as a brain sensing solution, we will first discuss two different approaches to sensing.

Before we do this, we define what an Event Related Potential (ERP) is. In short, an ERP measures the brain's specific response to some cognitive, sensory, or motor event. We go further into how ERPs are used in BCI below.

2.1.1 SSVEP

The first approach is the analysis of Steady-State Visual Evoked Potentials (SSVEP). SSVEPs are natural responses to stimuli at specific frequencies [1]. These visually evoked potentials are elicited by sudden visual stimuli, and the repetitive stimuli lead to stable oscillations in the EEG. These voltage oscillation patterns are called SSVEP [2].

SSVEP is evoked at the frequency of the stimulus. When the retina is excited by a visual cue in the range of 3.5 Hz to 75 Hz, the brain generates electrical activity mimicking this frequency [2]. This activity can be further broken down into low, medium, and high frequency bands. Because the response is directly related to the stimulus frequency, SSVEP is a good indicator of visual disease in a variety of patients.

In relation to BCI, SSVEP functions well in applications that send a large number of commands which require high reliability. A typical setup for an SSVEP-based system uses one or multiple LED lights flickering at varying frequencies. SSVEP is suited to users for whom small eye movements are possible, users capable of sustained attention, and applications where small command delays are acceptable. A minimal sketch of this detection idea is shown below.
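To make the frequency-following idea concrete, the sketch below shows one minimal way a system could decide which flickering target a user is attending to, by comparing the signal power at each candidate stimulus frequency (computed here with the Goertzel algorithm). This is an illustration only, not the setup used in this project; the sampling rate, candidate frequencies, and synthetic test signal are assumptions.

    // Illustrative sketch only: a minimal SSVEP-style frequency detector. The
    // candidate flicker frequencies, sampling rate, and synthetic test signal
    // are assumptions for the example, not values from the study.
    public class SsvepDetector {

        // Goertzel algorithm: power of a single frequency component in a signal.
        static double goertzelPower(double[] samples, double freqHz, double sampleRateHz) {
            double omega = 2.0 * Math.PI * freqHz / sampleRateHz;
            double coeff = 2.0 * Math.cos(omega);
            double sPrev = 0.0, sPrev2 = 0.0;
            for (double x : samples) {
                double s = x + coeff * sPrev - sPrev2;
                sPrev2 = sPrev;
                sPrev = s;
            }
            return sPrev2 * sPrev2 + sPrev * sPrev - coeff * sPrev * sPrev2;
        }

        // Pick the candidate stimulus frequency with the largest power in the EEG window.
        static double detectStimulus(double[] eegWindow, double[] candidatesHz, double sampleRateHz) {
            double best = candidatesHz[0], bestPower = Double.NEGATIVE_INFINITY;
            for (double f : candidatesHz) {
                double p = goertzelPower(eegWindow, f, sampleRateHz);
                if (p > bestPower) { bestPower = p; best = f; }
            }
            return best;
        }

        public static void main(String[] args) {
            double fs = 128.0;                       // assumed sampling rate
            double[] candidates = {8.0, 10.0, 12.0}; // assumed LED flicker frequencies
            double[] window = new double[256];
            for (int n = 0; n < window.length; n++) {
                // synthetic signal: a 10 Hz oscillation plus noise stands in for real EEG
                window[n] = Math.sin(2 * Math.PI * 10.0 * n / fs) + 0.3 * Math.random();
            }
            System.out.println("Attended frequency (Hz): " + detectStimulus(window, candidates, fs));
        }
    }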
SSVEP could be applied to our application, but we chose to perform our study with the Emotiv headset because it is a commercial headset. In addition, because we are interested in a user's emotional and physical responses during ordinary computer interaction, we chose the Emotiv headset over one using SSVEP.

2.1.2 P300

The second approach is called the P300 Evoked Potential. This wave is a component of an Event Related Potential, and it is not limited to auditory, visual, or somatosensory stimuli [1]. P300 is one of the major peaks in the ERP wave response. The presentation of a stimulus in an oddball paradigm can produce a positive peak in the EEG approximately 300 ms after the onset of the stimulus [2]. The triggered response is called the P300 component of the ERP.

P300 sensing nodes are placed along the center of the skull and the back of the head. The wave captured by the P300 component has an amplitude of roughly 2 to 5 µV and lasts only 150 to 200 ms [2]. Due to the small size of these measurements, one can imagine that a significant amount of signal processing must be done in order to get access to any sort of meaningful data.

Fig. 1. An example of a P300 system.

We show in Figure 1 a simple setup for classifying P300 data to implement a spelling system. EEG data is first acquired and then sent for pre-processing. In this step, noise is removed from the gathered signal. After that, a Principal Component Analysis is run in order to highlight the signals that contribute the most, which are then fed into a classifier.

In order to understand the basis for P300-based BCI, we will look at the speller in the system shown in Figure 1. The user is presented with a six by six grid and instructed to focus on the letter that they would like to choose. The rows of the table are then flashed in random order, which evokes a P300 response when the row the user is focusing on lights up. This process is then repeated for the columns, which allows the system to narrow down the letter the user is interested in. A minimal sketch of this row / column selection step follows.
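The sketch below illustrates only the final selection step of such a speller: given a "P300-likeness" score for each row flash and each column flash (produced by a classifier and averaged over repetitions), the attended letter is the intersection of the strongest row and column. The grid layout and the example scores are assumptions for illustration, not part of any system described in this report.

    // Illustrative sketch only: how a P300 speller narrows down a letter once a
    // classifier has produced a "P300-likeness" score for each row flash and each
    // column flash. The 6x6 grid and the example scores are assumptions; a real
    // system would average scores over many flash repetitions.
    public class P300Speller {

        static final char[][] GRID = {
            {'A','B','C','D','E','F'},
            {'G','H','I','J','K','L'},
            {'M','N','O','P','Q','R'},
            {'S','T','U','V','W','X'},
            {'Y','Z','1','2','3','4'},
            {'5','6','7','8','9','0'}
        };

        // Index of the largest score, i.e. the row or column most likely attended to.
        static int argMax(double[] scores) {
            int best = 0;
            for (int i = 1; i < scores.length; i++) {
                if (scores[i] > scores[best]) best = i;
            }
            return best;
        }

        static char selectLetter(double[] rowScores, double[] colScores) {
            return GRID[argMax(rowScores)][argMax(colScores)];
        }

        public static void main(String[] args) {
            // Hypothetical averaged classifier outputs: row 2 and column 3 stand out,
            // so the speller should select the letter 'P'.
            double[] rowScores = {0.10, 0.12, 0.81, 0.15, 0.09, 0.11};
            double[] colScores = {0.08, 0.14, 0.10, 0.77, 0.12, 0.10};
            System.out.println("Selected letter: " + selectLetter(rowScores, colScores));
        }
    }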
From this simple example, we can see that P300 is a very strong paradigm for BCI, but unfortunately a slow one. Even so, more recent spelling systems still utilize variations on this simple spelling paradigm.

For our purposes, P300 based sensing could be applied to any of the decision based sensing, but it does not give us the emotional responses we are looking for to determine how a user is feeling at a given time. As with SSVEP, the fact that the Emotiv headset is commercially available further plays into our decision to use it.

2.2 Related Work

Many applications have been developed for use with EEG headsets. These applications extend into the realm of web browsers, gaming systems, and even mobility control systems [3]. In fact, some work has been done to connect these brain sensing methods to mobile phones, in order to interact with smaller devices [4]. Such a wide array of applications highlights the desire for a deeper understanding of the way our brains interact with computers.

Using the Emotiv EPOC headset specifically, the authors in [5] utilized the gyroscope in the headset to control the movement of a wheelchair. The system was developed to move the chair using either one head motion or four head motions. This system shows that the headset is powerful enough to control larger scale applications, and can be effective enough for day to day use.

In [6], the authors develop a pointing device similar in functionality to a mouse for use by quadriplegic users. They were able to emulate standard mouse movement, and were easily able to teach this new system to quadriplegic individuals. This shows us that a gyroscope based system can have practical application both for day-to-day or average users, and can possibly help aid disabled users.

Another area of interest in BCI research is the need to know when a user is attempting to access the system. Because of the inherent always-active nature of the human brain, there needs to be a way to turn the system on and off. Researchers have attempted to use complicated Gaussian probability models [7] to solve this problem, but the math required is quite advanced. Instead, we attempt to show that there may be better ways of doing this, based on the natural responses of the human brain during normal interaction with a computer. Specifically, we attempt to find a facial movement that can accurately and reliably turn the EEG listening system on and off.
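For context, the sketch below is a deliberately simplified illustration of the probabilistic gating idea referenced above: model a scalar feature during "rest" and during "control" periods with one Gaussian each, then label new windows by whichever model explains them better. It is not the method of [7], and all values in it are made up.

    // Toy illustration only (not the method of [7]): gating a BCI by modelling a
    // scalar feature (e.g. band power) with one Gaussian for "rest" periods and
    // one for "control" periods, then labelling new windows by the better fit.
    public class RestControlGate {

        static double mean(double[] xs) {
            double s = 0;
            for (double x : xs) s += x;
            return s / xs.length;
        }

        static double variance(double[] xs, double mu) {
            double s = 0;
            for (double x : xs) s += (x - mu) * (x - mu);
            return s / xs.length;
        }

        // Log-density of a univariate Gaussian, enough to compare the two classes.
        static double logGaussian(double x, double mu, double var) {
            return -0.5 * Math.log(2 * Math.PI * var) - (x - mu) * (x - mu) / (2 * var);
        }

        public static void main(String[] args) {
            // Hypothetical training features from labelled rest and control windows.
            double[] rest = {0.9, 1.1, 1.0, 0.8, 1.2};
            double[] control = {2.1, 1.9, 2.3, 2.0, 1.8};
            double muR = mean(rest), varR = variance(rest, muR);
            double muC = mean(control), varC = variance(control, muC);

            double newWindow = 1.95; // feature from an incoming window
            boolean userIsControlling =
                logGaussian(newWindow, muC, varC) > logGaussian(newWindow, muR, varR);
            System.out.println(userIsControlling ? "system ON" : "system OFF");
        }
    }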
2.3 Equipment

We use the Emotiv EPOC headset (seen in Figure 2) for our testing. The EPOC comes with fourteen sensors and two reference points, and transmits data wirelessly. The headset uses the sequential sampling method, which for our purposes means using the data available at every time-step to extract features for that time-step. The EPOC headset connects wirelessly in order to transmit the data back to an aggregator. There, signal processing is done to reduce the noise, a simple Principal Component Analysis is done, and the classification of the features is passed to the user either via the Emotiv Control Panel or through their open API.

Fig. 2. The Emotiv EPOC Headset.

The Emotiv headset is capable of measuring a number of things.
The built-in two-axis gyroscope provides accurate sensing of any head movement from the user. The Emotiv API allows access to a number of other classifications in the emotional and physical ranges, termed Affectiv and Expressiv respectively. The headset also measures the user's intent to perform specific trained tasks through the Cognitiv suite.

The Expressiv suite offers access to physical events. These events are limited to Blink, Right Wink, Left Wink, Look Right, Look Left, Look Up, Look Down, Raise Brow, Furrow Brow, Smile, Clench, Right Smirk, and Left Smirk. For each of these actions, when the headset classifies one as occurring, a signal is sent to the application, which we then log. All events operate on a binary scale, either they happen or they do not, except for the brow events, which give a measure of extent. The sensitivity of the system to each of these events can be changed, but for our study we use the default configuration in order to give the best representation of a non-trained user.

The Affectiv suite reports real time changes in the subjective emotions experienced by the user. This suite looks for universal brainwave characteristics, and stores this data so the results can be rescaled over time. We select the new user option for every study participant so as not to bias our results with Emotiv's learning system.

The Affectiv suite offers analysis of five different emotions: Short-Term Excitement, Long-Term Excitement, Frustration, Engagement, and Meditation. Short-Term Excitement is a measure of positive physiological arousal; related emotions include titillation, nervousness, and agitation. Long-Term Excitement is similar to Short-Term Excitement, but the detection is fine tuned to changes in excitement over a longer period of time. The definition of the Frustration measurement parallels the emotion experienced in everyday life. Engagement is considered to be experienced as alertness and the conscious direction of attention towards task related stimuli; related emotions include alertness, vigilance, concentration, stimulation, and interest. Meditation is defined as a measurement of how relaxed a person is.

Fig. 3. Gyroscope data logged for positive / negative head motions (X and Y gyroscope readings versus time in ms).

In the Cognitiv suite, the headset allows for the training of thirteen different actions, including push, pull, rotate, and disappear. This suite works by training a classifier on how a user's brain responds when thinking about these specific actions. The classifier then listens for these patterns to recognize up to four of these actions at the same time. We do not use the Cognitiv suite in our study, but note that for actions like clicking a mouse or turning the volume down in the music player, this suite would be very useful.

While the Emotiv headset performs well, it is a consumer-grade device. We wanted to ensure that our results were achievable without requiring users to own elaborate, bulky EEG systems. As a result, some of the recognition promised by the Emotiv headset is not up to the standards we might have hoped for. Our user study looks to determine which actions are best recognized and therefore best suited for integration into a reliable application.
2.4 Key Features

In our testing with the Emotiv headset, we came across a few key observations. The Emotiv headset does a few things very well, and other things not quite as well.
    Action              Percentage of Successes
    Raise Eyebrows      80.00%
    Blink               20.00%
    Left Wink           10.00%
    Right Wink          10.00%
    Look in Direction    6.00%

Fig. 4. True positive accuracy of different Expressiv motions using the Emotiv EPOC headset.

Fig. 5. The main user interface for control over the EEG logging application.

We tested the Expressiv suite for one person while learning how to use the headset for our user study. As can be seen in Figure 4, the true positive rates for blinking and eye direction are well below the accuracy one might require in a real-time application. That being said, the eyebrow motion detection is much stronger. We still record data for all of these features with our users, as it is possible that other users could have better results.

In addition to the fourteen sensors, the Emotiv EPOC headset contains a two-axis gyroscope built into the headset. In researching the ways that humans respond to questions in everyday conversation, we noticed that head motion plays a large factor in determining a user's response from physical cues alone. We tested the Emotiv gyroscope and extracted data for someone nodding positively, negatively, and vigorously nodding positively and negatively. As Figure 3 clearly shows, it is not challenging to discern how a user is responding to a question based on the gyroscope data alone. As a result, a significant portion of our study is based on utilizing the gyroscope to extract human response information.

3 USER STUDY

3.1 Study Background

We implement a testing application in Java with two parts. The first is a logging based system, which extracts the classification of the EEG data and logs it. We log emotional data for Engagement, Frustration, Short-Term Excitement, Long-Term Excitement, and Meditation. We also log the delta of the motion in the X and Y directions of the gyroscope. Finally, we log the Expressiv responses of the participant, which include eyebrow motion, blinking, winking with either eye, and directional eye motion. Of all the Expressiv responses, only the eyebrow measurement is a measure of extent, ranging from 0 to 1, while the rest are binary: either they happened or they did not at each time-frame. A sketch of the kind of record logged at each time-step appears below.

Fig. 6. A sample question a user would see throughout the study. In total the user is asked 10 questions.

The second part is a simple survey, run at the same time as the logging application. Participants are asked a series of yes - no questions (presented in full in Appendix A), and they are asked to respond using head motions and audible cues. After answering ten yes - no questions, they are asked to specifically perform each of the Expressiv physical actions. This both tests the accuracy of the headset across multiple people and gives a baseline for analyzing whether any of these physical responses are visible when searching for positive or negative responses from our participants.

Yes - no questions appear on screen for ten seconds, and participants have five seconds between questions. Minimal instruction is given to the participants, and the questions are not seen until the session begins. The questions are specifically written so that there can be no gray area in regards to their answer. After the session, the responses of the participant are recorded so we do have a ground truth label for each question.
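A minimal sketch of the per-time-step record the logger could write is shown below. The exact fields and file format of the study's logging application are not given in this report, so the layout here is an assumption based on the channels listed above.

    import java.io.FileWriter;
    import java.io.IOException;

    // Illustrative sketch only: one way to lay out the per-time-step record the
    // logging application described above could write. The exact fields and file
    // format used in the study are not specified, so this layout is an assumption.
    public class SessionLogger {

        static final String HEADER =
            "timestampMs,engagement,frustration,shortTermExcitement,longTermExcitement,"
          + "meditation,gyroDeltaX,gyroDeltaY,browExtent,blink,leftWink,rightWink,lookDirection";

        private final FileWriter out;

        public SessionLogger(String path) throws IOException {
            out = new FileWriter(path);
            out.write(HEADER + "\n");
        }

        // Affectiv values and brow extent are in [0,1]; the remaining Expressiv
        // events are binary flags; gyro deltas are raw inertial readings.
        public void logSample(long timestampMs,
                              double engagement, double frustration,
                              double shortExcite, double longExcite, double meditation,
                              int gyroDeltaX, int gyroDeltaY,
                              double browExtent, boolean blink,
                              boolean leftWink, boolean rightWink,
                              String lookDirection) throws IOException {
            out.write(String.format("%d,%.3f,%.3f,%.3f,%.3f,%.3f,%d,%d,%.3f,%d,%d,%d,%s%n",
                    timestampMs, engagement, frustration, shortExcite, longExcite, meditation,
                    gyroDeltaX, gyroDeltaY, browExtent,
                    blink ? 1 : 0, leftWink ? 1 : 0, rightWink ? 1 : 0, lookDirection));
        }

        public void close() throws IOException {
            out.close();
        }

        public static void main(String[] args) throws IOException {
            SessionLogger logger = new SessionLogger("session.csv");
            // A single made-up sample: neutral emotions, a small nod, no facial events.
            logger.logSample(System.currentTimeMillis(),
                    0.42, 0.18, 0.55, 0.50, 0.33, 3, 41, 0.05, false, false, false, "none");
            logger.close();
        }
    }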
Nine users participated in our study. All participants were willing, and signed a consent form before participating. The pool was made up of four males and five females, and included undergraduates, graduate students, and faculty of The College. Most users were very excited to have their brain signals looked at, which we note when looking at the collected data.
4 RESULTS AND ANALYSIS

Of the nine participants, eight provided quality results. Only one participant's data had to be discarded, due to insufficient signal quality throughout the experiment. That participant had long, thick hair, which may have been the reason for the poor signal quality. While the data was lost, we did learn that the Emotiv headset does not overcome every limitation at the hardware level. Also, in each of the figures here we show the data from one participant instead of all eight. This is because the trends we point out are easier to observe in one participant, and they are seen across all participants.

4.1 Collected Data

We plot the gyroscope data for the entire survey in Figure 7 and the Affectiv data in Figure 9.

4.1.1 Gyroscope Data

Fig. 7. Collected gyroscope data over time for one study participant (gyroscope delta in inertial units versus time in seconds). The blue lines are the readings for the X axis, and the orange for the Y axis. When the user answers yes to a question, the Y axis reports more motion; when the user answers no, the X axis reports more motion.

From the gyroscope figure, we can see that it is easy to discern what the user is answering. This is because nodding one's head in affirmation and shaking it in negation are actions that occur on separate axes. So from an affirmative / negative perspective, recognizing a user's answers is, as expected, quite easy.

The more interesting result comes from a closer analysis of the gyroscope data. When a user is more emphatic about their response, they shake their head more vigorously, which shows up in the gyroscope delta. We zoom in on two different yes responses for one participant in Figure 8. The two questions asked here were questions three and four, "Have you ever been to the state of Virginia?" and "Do you like chocolate?". Clearly the first response is less emphatic than the second, which aligns with how we expected participants to answer. Because the study was conducted entirely in the state of Virginia, it makes sense that a participant (assuming they like chocolate) would affirm their taste for chocolate more emphatically than the fact that they have been to Virginia.

Fig. 8. One participant's answers to two individual questions (Y axis gyroscope delta over time). The first answer is to the question "Have you ever been to the state of Virginia?" and the second is to the question "Do you like chocolate?".

This relation also manifests itself when looking at the average gyroscope delta for both responses. The first question had an average Y magnitude of 32.74, while the second had an average Y magnitude of 56.54. Because we can find such a difference, we can further divide responses into strong no, no, yes, and strong yes. We will use this information further in the implementation section; a minimal sketch of the idea follows.
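The sketch below shows the simplest version of this idea: decide yes versus no from which axis moved more over an answer window, and grade emphasis from the average magnitude of the dominant axis. The emphasis threshold is a made-up value placed between the two averages reported above; it is not a constant from the study.

    // Illustrative sketch only: deciding yes / no from which gyroscope axis moved
    // more during an answer window, and grading emphasis from the average
    // magnitude. The emphasis threshold (45.0) is an assumed value, not a
    // constant taken from the study.
    public class AnswerClassifier {

        static double meanAbs(int[] deltas) {
            double sum = 0;
            for (int d : deltas) sum += Math.abs(d);
            return sum / deltas.length;
        }

        // Returns one of "strong no", "no", "yes", "strong yes".
        static String classify(int[] gyroDeltaX, int[] gyroDeltaY) {
            double xMag = meanAbs(gyroDeltaX);   // shaking the head: mostly X motion
            double yMag = meanAbs(gyroDeltaY);   // nodding the head: mostly Y motion
            double emphasisThreshold = 45.0;     // assumed boundary between calm and emphatic
            if (yMag >= xMag) {
                return yMag > emphasisThreshold ? "strong yes" : "yes";
            } else {
                return xMag > emphasisThreshold ? "strong no" : "no";
            }
        }

        public static void main(String[] args) {
            // Made-up answer window dominated by Y motion, i.e. an emphatic nod.
            int[] dx = {2, -3, 4, -2, 3, -1, 2, -4};
            int[] dy = {60, -75, 80, -55, 70, -65, 50, -72};
            System.out.println(classify(dx, dy)); // prints "strong yes"
        }
    }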
4.1.2 Affectiv Data

Looking at the Affectiv data for one participant, we can see the various emotions throughout the entire survey. As we would expect, the Long-Term Excitement and Short-Term Excitement scores both drop as the survey goes on. Initially, users are excited to begin using EEG sensing equipment, likely for the first time. But once the novelty wears off, the mundane task of answering simple questions and moving features of their face around becomes boring, and the excitement scores drop off.
Fig. 9. Collected Affectiv data over the course of the study. The emotional data is normalized to the [0,1] range, with 1 being a strong feeling and 0 being no feeling. The plotted series are Frustration, Short-Term Excitement, and Long-Term Excitement.

Frustration rises and falls throughout the survey, but peaks near the end for all participants. Because the frustration score can be paralleled with boredom, once the user hits the end of the study, all of the novelty has worn off and their brain is resetting to a more natural pattern. It is expected that answering ten simple questions and repeating eight actions would wear on most people over time.

We did mention Meditation and Engagement when discussing the Emotiv Affectiv suite. Every participant in the study recorded constant data for these two emotions throughout the survey. At this time we are not sure whether this was due to a hardware malfunction with the specific headset we tested on, a weak signal strength, or some other unforeseen error.

4.1.3 Expressiv Data

In addition to logging the gyroscope data and emotional data, we looked to verify our hypothesis about the facial motions recognized by the Emotiv headset. We plot the recording of blinking and eyebrow events in Figure 11 and Figure 10 respectively. From these we look to see whether either motion could be used to accurately and consistently turn a music player on and off.

Fig. 10. Recorded brow raise magnitudes throughout the study. The magnitude is on a zero to one scale and is a measure of extent. The red outline marks where we asked the user to repetitively raise their brow; there is a significant increase in brow motion in this period.

Fig. 11. Recorded blink events over the course of the survey. The blink event is recorded on a binary scale: either it happens or it does not. The red outline marks where we asked the user to repetitively blink; only 5 or 6 blinks are recorded in this time frame, far fewer than the number of times the user blinked.

First looking at Figure 11, we can see that there is no periodic pattern. We also note that there is no significant difference between the number of blinks recorded at any given time and the number recorded in the ten second period in which the participant was consciously and continuously blinking. This further supports our original hypothesis that blinking would not be an accurate facial motion to use with any sort of consistency.

On the other hand, Figure 10 shows the extent to which the Emotiv headset recorded the brow being raised. By looking at the graph we can see the exact period when the participant was asked to repeatedly raise their eyebrows. Outside of this one period, the recorded eyebrow extent is relatively controlled. From this we back up our hypothesis that eyebrow motion could be used as an accurate state transition trigger.
We also looked at the recording of winking with individual eyes, in addition to the recording of which way a person is looking. Neither action returned significant data, so we do not include the figures in this report. We determined that neither motion would provide consistent or accurate results as a trigger for state changes.

4.2 Statistical Analysis

As part of our analysis on emotions, we looked at the statistical correlation between positive answers and short-term excitement, in addition to the correlation between negative answers and frustration. We would expect both to have positive relationships.

We ran a Pearson's correlation test on Frustration, Short-Term Excitement, yes answers, and no answers. We took the absolute value of the gyroscope data to end up with a signal which is larger when the participant was answering. The results of our analysis are shown in Figure 12. A Pearson correlation coefficient close to one signifies a positive, linearly correlated relationship, and a coefficient close to negative one implies the opposite. As we can see, the only notable relationship is a moderately strong positive correlation between short-term excitement and yes answers. While this result is not strong enough to be used in an implementation, it is worth noting.

                   Frustration   Excitement   No        Yes
    Frustration       1.00000
    Excitement       -0.16022      1.00000
    X = No            0.04362      0.07417    1.00000
    Y = Yes          -0.05421      0.31528    0.10533   1.00000

Fig. 12. Pearson correlation coefficients of Frustration and Short-Term Excitement compared with the X (no) and Y (yes) axis gyroscope data.

We also look at the average magnitude of the motion collected by the gyroscope. In Figure 13 we plot the average magnitudes of the motion we collect for all users. We only look at data greater than 40 inertial units, because that allows us to see on average how much a user moves their head specifically while answering a question. As we can see, yes answers on average require more motion than no answers. This makes sense when we consider that the head has a larger range of motion looking up and down compared to left and right. We use this information in our implementation of the results classification system to set thresholds for each rating.

Fig. 13. The average magnitude of yes (Y axis) and no (X axis) answers for each of the eight participants. Yes answers require more head motion on average.

We also see that in developing a system, it would make sense to train a classifier for each individual. Because people move in different ways, a per-user classifier would allow the ratings to be tailored to the individual. We leave this for future work.
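For reference, the Pearson correlation used in this analysis can be computed directly as in the sketch below. The two series are made-up stand-ins for the absolute Y-axis gyroscope signal and the Short-Term Excitement score; they are not data from the study.

    // Illustrative sketch only: the Pearson correlation computation used in the
    // analysis above, applied to two made-up series standing in for the absolute
    // Y-axis gyroscope signal and the Short-Term Excitement score.
    public class PearsonCorrelation {

        static double pearson(double[] a, double[] b) {
            int n = a.length;
            double meanA = 0, meanB = 0;
            for (int i = 0; i < n; i++) { meanA += a[i]; meanB += b[i]; }
            meanA /= n;
            meanB /= n;
            double cov = 0, varA = 0, varB = 0;
            for (int i = 0; i < n; i++) {
                double da = a[i] - meanA, db = b[i] - meanB;
                cov += da * db;
                varA += da * da;
                varB += db * db;
            }
            return cov / Math.sqrt(varA * varB);
        }

        public static void main(String[] args) {
            // Hypothetical aligned samples: |gyro Y delta| and short-term excitement.
            double[] absGyroY = {5, 10, 80, 75, 8, 60, 4, 70};
            double[] excitement = {0.30, 0.32, 0.55, 0.60, 0.31, 0.50, 0.28, 0.58};
            System.out.printf("Pearson r = %.3f%n", pearson(absGyroY, excitement));
        }
    }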
5 IMPLEMENTATION

We implement our findings in a brain-controlled music player.

5.1 Existing Application

As a result of the study, we implement our response classification system and state transition classifier on top of a music player. The existing architecture is shown in Figure 14. The base structure is built on top of the .NET Windows Media Player, which is shown in Figure 15.

Fig. 14. The existing architecture of the BCI controlled music player. Input is done through the Emotiv headset, and the data is then stored in the database and used to recommend songs for the user to listen to.

Fig. 15. The user interface of the Brain Controlled Music Player.

The system connects to the Emotiv EEG headset using the Emotiv API. Once we have access to the headset, we extract both the raw EEG data and the classifications that the Emotiv signal processing provides; both of these data sets are stored in a database. This information, in addition to the rating system we discuss next, is compiled to drive a song recommendation engine. When the media player loads the first and subsequent songs to play, the recommendation engine drives this decision process. The user has the option to select between Work, Study, and Relax modes, and music is played which matches their mood for each mode.
In addition, the player starts in a mode which matches the time of day, so the initial music played is more likely to match what the user would like to listen to.

The information that drives the selection engine is extracted from a series of ratings the user gives each song once it is done playing. The rating is done on a 1 to 5 scale, ranging from "I hated this song" to "I loved this song". In addition to taking the pure rating, the user's arousal and valence levels are recorded using the Emotiv headset, and all of this is compiled to rate the song for the given mode. This information is in turn used to select songs for the user as the player is used more.

5.2 On - Off Switch

As we discussed, one of the things we were looking to evaluate in the user study was which facial event would provide reliable and accurate recognition for turning the Emotiv system on and off. We determined from the study that the Emotiv system best recognizes motion of the brow, and this is what we add to the existing music player application.

While the main idea was to look for turning the system on and off, there is no reason at the moment to stop recording EEG data while the player is running. Instead, we allow the user to pause and unpause the music player using two raises of their eyebrows. We have found this works with reliable consistency while using the application. In addition, we have shown that this can easily be extended to an application which relies more heavily on brain signals as input, and could quite easily be adapted to tell the EEG headset to start and stop listening. A minimal sketch of the pause / unpause trigger is shown below.
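In the sketch, a raise is counted when the brow extent crosses a threshold, and playback toggles when two raises occur within a short window. The extent threshold and pairing window are assumed values, not the constants used in the player.

    // Illustrative sketch only: toggling playback when two brow raises are
    // detected in quick succession. The extent threshold (0.4) and the pairing
    // window (1500 ms) are assumed values, not constants from the player.
    public class BrowToggle {

        private static final double RAISE_THRESHOLD = 0.4; // brow extent in [0,1]
        private static final long PAIR_WINDOW_MS = 1500;   // max gap between the two raises

        private boolean playing = true;
        private boolean wasRaised = false;   // tracks the rising edge of a raise
        private long lastRaiseMs = -1;       // time of the previous completed raise

        // Feed one brow-extent sample per time-step; returns true if playback toggled.
        public boolean onSample(long timestampMs, double browExtent) {
            boolean raisedNow = browExtent >= RAISE_THRESHOLD;
            boolean toggled = false;
            if (raisedNow && !wasRaised) {               // a new raise begins
                if (lastRaiseMs >= 0 && timestampMs - lastRaiseMs <= PAIR_WINDOW_MS) {
                    playing = !playing;                   // second raise in the window: toggle
                    toggled = true;
                    lastRaiseMs = -1;                     // require a fresh pair next time
                } else {
                    lastRaiseMs = timestampMs;            // remember the first raise
                }
            }
            wasRaised = raisedNow;
            return toggled;
        }

        public boolean isPlaying() {
            return playing;
        }

        public static void main(String[] args) {
            BrowToggle toggle = new BrowToggle();
            // Made-up stream: two distinct raises about 600 ms apart should pause playback.
            long[] t =   {0,   100, 200, 300, 600, 700, 800};
            double[] e = {0.0, 0.6, 0.6, 0.1, 0.0, 0.7, 0.1};
            for (int i = 0; i < t.length; i++) {
                if (toggle.onSample(t[i], e[i])) {
                    System.out.println("Toggled at " + t[i] + " ms; playing=" + toggle.isPlaying());
                }
            }
        }
    }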
5.3 Gyroscope Based Rating Dialog

As we learned in the user study, head motions such as nodding and shaking the head are both quite easy to detect and carry more information than a plain yes or no. Using this information, we implement a classification scheme for the user to answer how much they enjoyed listening to a song. In the existing implementation this dialog appears when the user stops listening to a song, either by pressing the "next" button or when the song ends. We extend this usage scenario by simply asking the user whether they liked the song that was playing. This question can be seen in Figure 16.

Fig. 16. The dialog that appears when the user finishes playing a song. The system listens for a response from the user or waits five seconds before resolving to the default neutral value.

Structurally, the main thread spawns two children, one to show the prompt and one to listen to the gyroscope. Access to the EEG headset is passed through the BCIEngine, which is responsible for recording the BCI data about the song playing. Once we have access to the headset, it is easy to extract the gyroscope delta and average its absolute value over time. Once enough data has been collected (we select 300 as a constant number of points), we classify the response as a strong no, no, neutral, yes, or strong yes. This is presented to the user in the dialog box shown in Figure 17. The user can simply accept the rating by waiting five seconds or clicking the yes button, or, if the system did not correctly classify how they felt about the song, the user can change the rating.

Fig. 17. The verification dialog that appears once a user has classified a song. The user is allowed to change the rating if they do not believe the system correctly rated the song.

The overall rating structure works very well. For users with enough variance between a standard answer and a strong answer, the system consistently classifies their rating of the song to within one point. However, we do notice from our user study data that setting static thresholds for the different classifications is not optimal. We leave training the system for individual use as future work. A sketch of this classification step follows.
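The sketch below illustrates the classification step: collect 300 gyroscope samples, average the absolute deltas per axis, and map the result to one of the five ratings with static thresholds. The threshold values are assumptions, not the constants from the player; as noted above, static thresholds are not optimal and per-user training is left as future work.

    // Illustrative sketch only: the song-rating classification step. 300 gyroscope
    // samples are averaged (absolute value) per axis and mapped to one of five
    // ratings. The magnitude thresholds below are assumed values.
    public class RatingClassifier {

        static final int SAMPLES_NEEDED = 300;

        private final int[] dx = new int[SAMPLES_NEEDED];
        private final int[] dy = new int[SAMPLES_NEEDED];
        private int count = 0;

        // Feed gyroscope deltas; returns true once enough points are collected.
        public boolean addSample(int gyroDeltaX, int gyroDeltaY) {
            if (count < SAMPLES_NEEDED) {
                dx[count] = gyroDeltaX;
                dy[count] = gyroDeltaY;
                count++;
            }
            return count == SAMPLES_NEEDED;
        }

        private static double meanAbs(int[] values, int n) {
            double sum = 0;
            for (int i = 0; i < n; i++) sum += Math.abs(values[i]);
            return n == 0 ? 0 : sum / n;
        }

        // 1 = strong no ("hated it") ... 5 = strong yes ("loved it"); 3 = neutral.
        public int classify() {
            double xMag = meanAbs(dx, count);   // head shake
            double yMag = meanAbs(dy, count);   // head nod
            double weak = 40.0, strong = 55.0;  // assumed magnitude thresholds
            if (xMag < weak && yMag < weak) return 3;          // little motion: neutral
            if (yMag >= xMag) return yMag >= strong ? 5 : 4;   // nodding: yes / strong yes
            return xMag >= strong ? 1 : 2;                     // shaking: strong no / no
        }

        public static void main(String[] args) {
            RatingClassifier rc = new RatingClassifier();
            java.util.Random rng = new java.util.Random(7);
            // Made-up answer: vigorous nodding (large Y deltas, small X deltas).
            while (!rc.addSample(rng.nextInt(11) - 5,
                    (rng.nextBoolean() ? 1 : -1) * (50 + rng.nextInt(40)))) { }
            System.out.println("Rating: " + rc.classify()); // expect 5 (strong yes)
        }
    }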
6 CONCLUSION

We have explored the use of the Emotiv EEG headset as a means of interacting with a computer. While we were unable to find significant correlations between reported brain activity and the answers to simple yes and no questions, we were able to determine that head motions can provide a reasonable scale of agreement and disagreement. In addition, we learned that the Emotiv headset does best when listening for motion in the brow. Utilizing these two facts, we implement an on-off switch and a rating classification system on top of an existing BCI music player. Both of these contributions can easily be extended to a myriad of different applications, and have been shown to work. We are hopeful that our research will be utilized to improve future brain computer interfaces as both hardware capabilities and consumer demands increase in the years to come.

6.1 Future Work

There are a few areas in which we could extend our current research. Because the user study was done using one Emotiv EEG headset, we would like to test these concepts using other commercial headsets to see if we can find more significant correlations between mood and how a user answers a question. As EEG research becomes a bigger field in computer science, more data will need to be collected for a variety of different headsets to fully understand what parts of the brain fire for specific computer interactions.

We also would like to use the emotional data and gyroscope data to improve our rating classification system in the music player. By analyzing how a user's head is moving while they listen to a song, it might be possible to adjust our rating system to an even finer grained scale to truly understand how much a user liked a certain song. Moreover, if we can combine these results with the fluctuations in mood, we might be able to come up with an even stronger estimate of how the user felt about a particular song.

APPENDIX A
SURVEY GIVEN TO PARTICIPANTS

The survey as presented is shown here. Each question / action was displayed to the participant for ten seconds before disappearing. Another five seconds elapsed between questions.

1) Have you had a meal in the last twenty four hours?
2) Have you left the country in the last twenty four hours?
3) Have you ever been to the state of Virginia?
4) Do you like chocolate?
5) Have you ever been to Havana?
6) Can you run a mile in under 5 minutes?
7) Can you ride a bike?
8) Can you whistle?
9) Have you ever purchased a television?
10) Did you have coffee this morning?
11) Please move your eyebrows up and down for ten seconds.
12) Please blink slowly for ten seconds.
13) Please wink with your left eye only for ten seconds.
14) Please wink with your right eye only for ten seconds.
15) Please use your eyes to look left and back to center. Repeat this for ten seconds.
16) Please use your eyes to look right and back to center. Repeat this for ten seconds.
17) Please use your eyes to look up and back to center. Repeat this for ten seconds.
18) Please use your eyes to look down and back to center. Repeat this for ten seconds.

REFERENCES

[1] F. Beverina, G. Palmas, S. Silvoni, F. Piccione, and S. Giove, "User adaptive BCIs: SSVEP and P300 based interfaces," PsychNology Journal, 2003.
[2] S. Amiri, A. Rabbi, L. Azinfar, and R. Fazel-Rezai, "A review of P300, SSVEP and hybrid P300/SSVEP brain-computer interface systems," Biomedical Image and Signal Processing Laboratory, Department of Electrical Engineering, University of North Dakota, 2013.
[3] M. Jackson and M. Rudolph, "Applications for brain-computer interfaces," IEEE, 2010.
[4] A. Stopczynski, J. E. Larsen, C. Stahlhut, M. K. Petersen, and L. K. Hansen, "A smartphone interface for a wireless EEG headset with real-time 3D reconstruction."
[5] E. J. Rechy-Ramirez, H. Hu, and K. McDonald-Maier, "Head movements based control of an intelligent wheelchair in an indoor environment," IEEE, 2012.
[6] C. Pereira, R. Neto, A. Reynaldo, M. Canndida de Miranda Luza, and R. Oliveira, "Development and evaluation of a head-controlled human-computer interface with mouse-like functions for physically disabled users," Clinics, 2009.
[7] S. Fazli, M. Danoczy, F. Popescu, B. Blankertz, and K.-R. Muller, "Using rest class and control paradigms for brain computer interfacing," IWANN, 2009.