This document summarizes a student project that explored using an electroencephalography (EEG) headset for human-computer interaction. The project involved a user study analyzing brain wave patterns in response to yes/no questions. It also analyzed the accuracy of an Emotiv EEG headset in recognizing facial movements. The findings were implemented in a brain-controlled music player that could recognize head and facial movements to control playback. The study concluded that brow and head motions provided accurate, reliably recognizable data for brain-computer applications.
MASTER’S PROJECT, THE COLLEGE OF WILLIAM AND MARY, SPRING 2015
An Exploration on the Potential of an
Electroencephalographic Headset for Human
Computer Interaction
Johnathan Savino, Peter Kemper
Department of Computer Science
The College of William and Mary
Williamsburg, VA, 23185, USA
{jesavino, kemper}@cs.wm.edu
Abstract—As computers become more and more integrated
into our daily life, so does the need for improved ways of
interacting with them. We look into the possibility of using an
EEG Brain Sensing Headset as a means of better interfacing with
a computer. Through a user study we analyze human brain
wave patterns when responding to simple yes/no questions, in
addition to looking at the accuracy of an Emotiv EEG Headset
in recognizing different facial movement patterns. We implement
our findings in a brain-controlled music player capable
of recognizing head movements and certain facial
movements, letting users turn the player on and off and
rate played songs hands-free. We provide data to conclude
that both brow motion and head motion provide accurate and
reliably recognizable data for deployment into a variety of brain
computer applications.
Index Terms—EEG, Brain Computer Interface, Human Com-
puter Interaction, User Study
1 INTRODUCTION
INTERACTING with computers has become so
common in our daily lives that many people
will not go more than one day without spending
some time in front of one of their devices. This
interaction has been developed carefully over the last
fifty years, moving from keyboard only systems,
towards more advanced graphical user interfaces.
It only seems natural that if we can improve the
way that humans interact with their computers, we
can drastically improve one of the more frequent
interactions we make throughout the day. In doing
so, we not only reduce the amount of time required
to complete mundane tasks, but also allow more
time to be spent on the problem at hand instead of
interfacing with the computer. We are also able to
broaden the group of users able to use computers
by decreasing the learning curve required to use a
range of computing devices.

This project was approved by the College of William and Mary
Protection of Human Subjects Committee (Phone 757-221-3966) on
2015-03-03 and expires on 2016-03-03.
We would like to utilize Electroencephalo-
graphic (EEG) sensing equipment in order to ex-
ploit common brain signal patterns which occur in
tandem with our daily interactions. By harnessing
these patterns in the different regions of the brain, it
is possible to track different emotions and recognize
evoked signals based on physical or visual stimuli.
EEG sensors work by measuring the electrical sig-
nals in different regions of the brain, which, when
combined, allow features such as mood to be ex-
tracted in real time. The benefit to EEG based brain
monitoring systems is that EEG is non-invasive; all
signal collection is done with sensors placed on top
of the head. This allows for easy integration into
daily life.
In order to provide a fully immersive experience
for the everyday user, there must be a way to
accurately separate the signal we get from the
EEG sensors from the noise of background brain
activity. While a significant amount of work has been
done mapping different cortical regions of the brain
to specific emotions, less has been done looking at
the signals the brain produces throughout everyday
interaction with a computer. More of this will be
discussed in section 2 with other related works.
This study is an attempt to see how the brain
responds throughout interaction with computers.
More specifically, we will look at how users’
affective and effective responses change as they are
presented a series of yes and no questions. These
binary questions can be assumed similar to confir-
mation dialogs often encountered with a computer.
In addition, we will look to determine which fa-
cial expressions are best utilized as a state control
trigger. The main contributions of this paper can be
summarized as follows:
• We present a user study which looks to
analyze how users’ brains respond throughout
interaction with a computer. This information
becomes incredibly powerful when
attempting to interconnect brain and com-
puter. From this study we learn typical brain
patterns experienced by average users when
they interact with a computer on an every-
day basis.
• We implement a simple binary controller
into an EEG based brain music player. This
system will allow users to make in-program
choices, such as accepting a dialog popup,
using the gyroscope on their head.
• We implement an on - off controller to pause
and unpause the music player. Users can
raise their brow twice to achieve this, which
adds the ability to operate the player hands
free.
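The on/off controller in the last bullet reduces to a small piece of event logic: two brow-raise events arriving within a short window toggle the player's state. A minimal sketch of that logic (the 0.8 s pairing window and the event API are illustrative assumptions, not values from this paper):

```python
# Toggle playback when two brow-raise events arrive close together in time.
# The 0.8 s pairing window is an assumed value, for illustration only.
class DoubleRaiseToggle:
    def __init__(self, window=0.8):
        self.window = window    # max seconds between the two raises
        self.last_raise = None  # timestamp of the pending first raise
        self.playing = False

    def on_brow_raise(self, t):
        """Feed a brow-raise event at time t (seconds); return the player state."""
        if self.last_raise is not None and t - self.last_raise <= self.window:
            self.playing = not self.playing  # second raise in time: toggle
            self.last_raise = None           # a third raise starts a new pair
        else:
            self.last_raise = t              # first raise (or too late): wait
        return self.playing

toggle = DoubleRaiseToggle()
toggle.on_brow_raise(0.0)          # first raise: no change
print(toggle.on_brow_raise(0.5))   # second raise within 0.8 s: prints True
```

Requiring two raises rather than one makes accidental triggers from a single stray brow movement far less likely, which matches the goal of a reliable hands-free on/off switch.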
The rest of this paper is structured as follows. We
present related work and our motivation in section
(2), information on the user study in section (3),
an analysis of the data in section (4), and finally
the background of our implementation in the music
player in section (5). We then conclude and present
our future work.
2 MOTIVATION
While electroencephalography has been around for
a number of years, the work done in regards to
human computer interaction has not been explored
to its full potential.
2.1 EEG Background
While many people have heard of electroen-
cephalography, it certainly is not an everyday term.
In order to grasp the limitations of EEG as a brain
sensing solution, we will first discuss two different
approaches of sensing.
Before we do this, we will define what an Event
Related Potential (ERP) is. In short, an ERP
measures the brain’s response to a specific
cognitive, sensory or motor event. We go
further into how ERPs are used in BCI.
2.1.1 SSVEP
The first approach is the analysis of Steady-State
Visual Evoked Potentials (SSVEP). SSVEPs are
natural responses to stimuli of specific frequencies
[1]. These visually evoked potentials are elicited
by sudden visual stimuli and the repetitive stimuli
lead to stable oscillations in EEG. These voltage
oscillation patterns are called SSVEP [2].
SSVEP is evoked at the frequency of the stim-
ulus. When the retina is excited by a visual cue in
range of 3.5 Hz to 75 Hz, the brain generates electri-
cal activity mimicking this frequency [2]. This activ-
ity can be further broken down into low, medium,
and high frequency bands. Because the stimulus
is directly related to frequency, SSVEP is a good
indicator of visual disease in a variety of patients.
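Because the response tracks the stimulus frequency, an SSVEP detector can simply compare the signal's spectral power at each candidate flicker frequency and choose the strongest. A toy single-channel sketch on a synthetic signal (a naive single-frequency DFT; deployed systems use sturdier statistics such as canonical correlation analysis):

```python
import math

def power_at(signal, fs, freq):
    """Spectral power of a sampled signal at one frequency (naive DFT bin)."""
    re = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (re * re + im * im) / len(signal)

def detect_ssvep(signal, fs, candidates):
    """Return the candidate flicker frequency with the most spectral power."""
    return max(candidates, key=lambda f: power_at(signal, fs, f))

# One second of synthetic EEG at 128 Hz while the user watches a 10 Hz flicker.
fs = 128
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
print(detect_ssvep(signal, fs, [8.0, 10.0, 12.0]))  # → 10.0
```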
In relation to BCI, SSVEP functions well in ap-
plications that send a large number of commands
which require a high reliability. A typical setup for
an SSVEP-based system involves using one or mul-
tiple LED lights to flicker at varying frequencies.
SSVEP is ideal for users where small eye movement
is allowed, users that are capable of sustained atten-
tion effort, and applications where small command
delays are allowed.
SSVEP could be applied to our application, but
we chose to perform our study with the Emotiv
headset because it is a commercial headset. In
addition, because we are interested in a user’s emotional
and physical responses during ordinary computer
interaction, we chose the Emotiv headset over one
using SSVEP.
2.1.2 P300
The second approach is called P300 Evoked Po-
tential. This wave is a component of an Event
Related Potential, not limited to auditory, visual or
somatosensory stimuli [1]. P300 is one of the major
peaks in the ERP wave response. The presentation
of stimulus in an oddball paradigm can produce a
positive peak in the EEG, approximately 300ms af-
ter onset of the stimulus [2]. The triggered response
is called the P300 component of ERP.
P300 sensing nodes are placed along the center
of the skull and the back of the head. The wave
captured by the P300 component ranges from 2 to
5 µV in amplitude, and lasts only 150 to 200 ms [2]. Due to the
Fig. 1. An example of a P300 system
small nature of these measurements, one can imag-
ine that a significant amount of signal processing
must be done in order to get access to any sort of
meaningful data.
We show in Figure 1 a simple setup for clas-
sifying P300 data to implement a spelling system.
EEG Data is first acquired, and then sent for Pre-
Processing. In this step, noise is removed from the
gathered signal. After that, a Principal Component
Analysis is run in order to highlight the signals
that contribute the most, which are then fed into
a classifier.
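The essence of that pipeline can be illustrated in a few lines: average many stimulus-locked epochs so the background noise cancels, then test for a positive deflection in the window around 300 ms after onset. This is a toy sketch on synthetic data; the window bounds and threshold are illustrative assumptions, and a real system would add the filtering and Principal Component Analysis steps shown in Figure 1:

```python
import random

def average_epochs(epochs):
    """Average stimulus-locked epochs sample by sample; random noise cancels."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def has_p300(avg, fs, lo=0.25, hi=0.5, threshold=1.0):
    """Report a positive peak in the 250-500 ms post-stimulus window."""
    return max(avg[int(lo * fs):int(hi * fs)]) > threshold

# Synthetic epochs at 100 Hz: targets carry a ~2 uV bump near 300 ms, buried in noise.
random.seed(0)
fs = 100

def epoch(target):
    return [(2.0 if target and 28 <= i <= 34 else 0.0) + random.gauss(0, 1)
            for i in range(fs)]

avg_target = average_epochs([epoch(True) for _ in range(40)])
avg_nontarget = average_epochs([epoch(False) for _ in range(40)])
print(has_p300(avg_target, fs), has_p300(avg_nontarget, fs))
```

Averaging is what makes the tiny P300 visible at all: the evoked bump is phase-locked to the stimulus while background EEG is not, so the bump survives the average and the noise does not.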
In order to understand the basis for P300-based
BCI, we will look at the speller in the system shown
in Figure 1. The user is presented with a six by
six grid, and instructed to focus on the letter that
they would like to choose. The rows of the table
are then randomly flashed, which evokes a P300
response when the row the user is focusing on lights
up. This process is then repeated for the columns,
which allows the system to narrow down the letter
the user is interested in.
From this simple example, we can see that P300
is a very strong system for BCI, but unfortunately is
quite slow. As a result, more recent spelling systems
still utilize some variation on this simple spelling
paradigm.
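Once the classifier has identified which row flash and which column flash each evoked a P300, the selection step is just a grid intersection. A minimal sketch (this 6x6 alphanumeric layout is a hypothetical example, not necessarily the exact matrix used in [1]):

```python
# Hypothetical 6x6 speller grid: the letters A-Z followed by digits 0-9.
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
GRID = [list(CHARS[r * 6:(r + 1) * 6]) for r in range(6)]

def spelled_char(p300_row, p300_col):
    """The chosen character sits where the P300-evoking row and column cross."""
    return GRID[p300_row][p300_col]

# A P300 fired when row 1 flashed and again when column 2 flashed:
print(spelled_char(1, 2))  # → I
```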
For our purposes, P300-based sensing could be
applied to any of the decision-based tasks, but
it does not give us the emotional responses we are
looking for to determine how a user is feeling at
a given time. As with our reason for not using
SSVEP, the Emotiv headset’s commercial availability
further played into our decision to use it.
2.2 Related Work
Many applications have been developed for use
with EEG headsets. These applications extend into
the realm of web browsers, gaming systems, and
even mobility control systems [3]. In fact, there has
been some work done to connect these brain sensing
methods to mobile phones, in order to interact
with the smaller devices [4]. Such a wide array
of applications highlights the desire for a deeper
understanding of the way our brains interact with
computers.
Using the Emotiv EPOC headset specifically, the
authors in [5] utilized the gyroscope in the headset
in order to control the movement of a wheelchair.
The system was developed to move the chair using
either one head motion or four head motions. This
system shows that the headset is powerful enough
to control larger scale applications, and can be effec-
tive enough for day to day use.
In [6], the authors develop a pointing device
similar in functionality to a mouse for use by
quadriplegic users. They were able to emulate standard mouse movement, and easily taught this new system to quadriplegic individuals. This shows us that a gyroscope-based system can have practical application for day-to-day, average users, and can possibly help aid disabled users.
Another area of interest in BCI research is the
need to know when a user is attempting to access
the system. Because of the inherent always-active
nature of the human brain, there needs to be a way
to turn the system on and off. Researchers have attempted to use complicated Gaussian probability models [7] to solve this problem, but the math required is
very advanced. Instead, we attempt to show that
there may be better ways of doing this, based on
the natural responses of the human brain during
normal interaction with a computer. Specifically, we
attempt to find a facial movement to accurately and
reliably turn the EEG listening system on and off.
2.3 Equipment
We use the Emotiv EPOC (seen in Figure 2) Headset
for our testing. The EPOC comes with fourteen sen-
sors and two reference points, and transmits data
wirelessly. The headset uses the sequential sampling
method, which for our purposes entails using the
data available at every time-step to extract features
for that time-step. The EPOC headset transmits its data wirelessly back to an aggregator. There, signal processing is done to reduce
the noise, a simple Principal Component Analysis is
done, and the classification of the features is passed
to the user either via the Emotiv Control Panel, or
through their open API.
The Emotiv headset is capable of measuring a number of things.

MASTER'S PROJECT, THE COLLEGE OF WILLIAM AND MARY, SPRING 2015

Fig. 2. The Emotiv EPOC Headset

The built-in two-axis gyroscope provides accurate sensing of any head movement
from the user. The Emotiv API allows access to
a number of other classifications in the emotional
and physical ranges, termed Affectiv and Expressiv
respectively. The headset also measures the user's
intent to perform specific trained tasks through the
Cognitiv Suite.
The Expressiv suite offers access to physical
events. These events are limited to Blink, Right
Wink, Left Wink, Look Right, Look Left, Look
Up, Look Down, Raise Brow, Furrow Brow, Smile, Clench, Right Smirk, and Left Smirk. For each
of these actions, when the headset classifies one as
occurring, a signal is sent to the application, which
we then log. All events operate on a binary scale,
either they happen or they do not, except for the
brow events, which give a measure of extent. The
sensitivity of the system to each of these events can
be changed, but for our study we use the default
configuration in order to give the best representa-
tion of a non-trained user.
The Affectiv suite reports real time changes in
the subjective emotions experienced by the user.
This suite looks for universal brainwave charac-
teristics, and stores this data so the results can be
rescaled over time. We select the new-user option for every study participant so as not to bias our results with Emotiv's learning system.
The Affectiv suite offers analysis of five different emotions: Short-Term Excitement, Long-Term Excitement, Frustration, Engagement, and Meditation. Short-Term Excitement is a measure of posi-
tive physiological arousal. Related emotions to this
include titillation, nervousness and agitation. Long-
Term Excitement is similar to short-term, but the
detection is fine tuned to changes in excitement over
a longer period of time.

Fig. 3. Gyroscope data for Positive / Negative Head Motions

The definition for the Frustration measurement parallels the emotion experi-
enced in everyday life. Engagement is considered
to be experienced as alertness and the conscious
direction of attention towards task related stim-
uli. Related emotions include alertness, vigilance,
concentration, stimulation and interest. Meditation
is defined as the measurement of how relaxed a
person is.
In the Cognitiv Suite, the headset allows for training of thirteen different actions, including push, pull, rotate, and disappear. This suite works
by training a classifier with how a user’s brain
responds when thinking about these specific ac-
tions. Then, the classifier listens for these patterns
to classify up to four of these actions at the same
time. We do not use the Cognitiv Suite in our study,
but note that for actions like clicking a mouse or
turning the volume down in the music player, this
suite would be very useful.
While the Emotiv headset is very good, it sits at a consumer price range. We wanted to ensure that our results were achievable without requiring users to own elaborate, bulky EEG systems. As a trade-off, some of the recognition promised by the Emotiv headset is not up to the standards we could have hoped for. Our user study
looks to determine which actions are best recog-
nized and therefore are best suited for integration
into a reliable application.
2.4 Key Features
In our testing with the Emotiv headset, we came across a few key observations.

Action             Percentage of Successes
Raise Eyebrows     80.00%
Blink              20.00%
Left Wink          10.00%
Right Wink         10.00%
Look in Direction   6.00%

Fig. 4. True Positive Accuracy of Different Expressiv Motions using the Emotiv EPOC headset

Fig. 5. The main user interface for control over the EEG logging application.

The Emotiv headset does a few things very well, and other things not quite as well. We tested the Expressiv suite for one person while learning how to use the headset for our user study. As you can see in Figure 4, the true
positive rates for blinking and eye direction are well
below the accuracy one might require in a real-time
application. That being said, the eyebrow motion
detection is much stronger. We still record data for
all these features with our users, as it is possible that
other users could have better results.
In addition to the fourteen sensors, the Emotiv
EPOC headset contains a two-axis gyroscope
built into the headset. In researching the ways that
humans respond to questions in everyday conver-
sation, we noticed that head motion was a large factor in determining a user's response from physical cues alone. We tested the Emotiv gyroscope
and extracted data for someone nodding positively,
negatively, and vigorously nodding positively and
negatively. As Figure 3 clearly shows, it is not
challenging to discern how a user is responding to
a question based on the gyroscope data alone. As
a result, a significant portion of our study will be
based on utilizing the gyroscope to extract human
response information.
3 USER STUDY
3.1 Study Background
We implement a testing application in Java with two
parts. The first is a logging based system, which
extracts the classification of the EEG data and logs
this data. We log emotional data for Engagement, Frustration, Short-Term Excitement, Long-Term Excitement, and Meditation. We also log the delta of the motion in the X and Y directions of the gyroscope. Finally, we log the Expressiv responses of the participant, which include eyebrow motion, blinking, winking with either eye, and directional eye motion. Of all the Expressiv responses, only the eyebrow measurement is a measure of extent, ranging from 0 to 1, while the rest are binary: either they happened or they did not at each time-frame.

Fig. 6. A sample question a user would see throughout the study. In total the user is asked 10 questions.
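For concreteness, one logged sample per time-step might look like the following. The CSV layout and field order here are our own illustration, not the exact format of the logging application:

```java
import java.util.Locale;

public class StudyLogger {

    // One log line per time-step: timestamp, the five Affectiv scores,
    // the gyroscope deltas, the brow extent, and a binary blink flag.
    public static String logLine(long tMs, double engagement, double frustration,
                                 double shortExcite, double longExcite, double meditation,
                                 int gyroDX, int gyroDY, double browExtent, boolean blink) {
        return String.format(Locale.ROOT, "%d,%.3f,%.3f,%.3f,%.3f,%.3f,%d,%d,%.3f,%d",
                tMs, engagement, frustration, shortExcite, longExcite, meditation,
                gyroDX, gyroDY, browExtent, blink ? 1 : 0);
    }

    public static void main(String[] args) {
        // A hypothetical sample taken mid-nod: large Y delta, slightly raised brow.
        System.out.println(logLine(1500, 0.62, 0.31, 0.55, 0.48, 0.0, -3, 41, 0.20, false));
    }
}
```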
The second part is a simple survey, run at the
same time as the logging application. Participants
are asked a series of yes/no questions (presented in full in Appendix A), and they are asked to respond using head motions and audible cues. Upon answering ten yes/no questions, they are asked to specifically perform each of the Expressiv physical actions.
This is to both test the accuracy of the headset for
multiple people and get a baseline for analyzing
if any of these physical responses are visible in
searching for positive or negative responses from
our participants.
Yes/no questions appear on screen for ten
seconds, and participants have five seconds be-
tween questions. Minimal instruction is given to the
participants, and the questions are not seen until
the session begins. The questions are specifically
written so that there can be no gray area with regard
to their answer. After the session, the responses of
the participant are recorded so we do have a ground
truth label for each question.
Nine users participated in our study. All par-
ticipants were willing, and signed a consent form
before participating. The pool was made up of four
males and five females, including undergraduates, graduates, and faculty of the College. Most
users were very excited to have their brain signals
looked at, which we do note when looking at the
collected data.
4 RESULTS AND ANALYSIS
Of the nine participants, eight provided quality re-
sults. Only one participant’s data had to be thrown
out due to insufficient signal quality throughout the
experiment. The participant had long, thick hair,
which may have been the reason for the poor signal
quality. While the data was lost, we did learn that the Emotiv headset is not infallible at the hardware level. Also, in each of our figures here, we
show the data from one participant instead of all
eight. This is because the trends we point out are
easier to observe in one participant, and are seen
across all participants.
4.1 Collected Data
We plot the gyroscope data for the entire survey in
Figure 7 and the Affectiv data in Figure 9.
4.1.1 Gyroscope Data
From the Gyroscope figure, we can see that it is
easy to discern what the user is answering. This
is because nodding one's head in affirmation and shaking it in negation are actions that occur on separate axes. So from an affirmative / negative perspective, recognizing a user's answers is,
as expected, quite easy. The interesting result comes
from a closer analysis of the gyroscope data. When
a user is more emphatic about their response, they
shake their head more vigorously, which shows
up in the gyroscope delta. We zoom in on two
different yes responses for one participant in Figure
8. The two questions asked here were questions
three and four, or ”Have you ever been to the state
of Virginia?” and ”Do you like chocolate?”. Clearly
the first response is less emphatic than the second,
which aligns with how we expected participants to
answer. Because the study was entirely conducted
in the state of Virginia, it makes sense that a partic-
ipant (assuming they like chocolate) would more
emphatically affirm their taste for chocolate over
their habitation of Virginia.
This relation also manifests itself when looking
at the average gyroscope delta for both responses.
When looking at the two questions, the first had
an average Y magnitude of 32.74, while the second
had an average Y magnitude of 56.54. Because we
can find such a difference, we can further divide
responses into strong no, no, yes, and strong yes.
We will use this information further in the imple-
mentation section.
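The split between a mild and an emphatic nod can be sketched directly from the average Y-axis magnitude. The cutoff of 45 below is illustrative, chosen between the 32.74 and 56.54 averages observed above:

```java
public class NodStrength {

    // Average absolute Y-axis gyroscope delta over a response window.
    public static double meanAbs(double[] dy) {
        double sum = 0;
        for (double d : dy) sum += Math.abs(d);
        return sum / dy.length;
    }

    // An emphatic nod exceeds the per-user cutoff in average magnitude.
    public static boolean isEmphatic(double[] dy, double cutoff) {
        return meanAbs(dy) > cutoff;
    }

    public static void main(String[] args) {
        double[] virginiaNod  = {30, -35, 33};  // hypothetical mild nod (avg ~32.7)
        double[] chocolateNod = {55, -60, 55};  // hypothetical emphatic nod (avg ~56.7)
        System.out.println(isEmphatic(virginiaNod, 45));   // false
        System.out.println(isEmphatic(chocolateNod, 45));  // true
    }
}
```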
Fig. 7. Graph of collected gyroscope data over time for one study
participant. The blue lines are the readings for the X axis, and
the orange for the Y axis. It is easy to tell that when the user is
answering yes to a question, the Y axis reports more data, and
that the X axis reports more motion when the user is answering
no.
Fig. 8. Y-axis gyroscope data for one participant's answers to two individual questions. The first answer is to the question "Have you ever been to the state of Virginia?" and the second is to the question "Do you like chocolate?".
4.1.2 Affectiv Data
Looking at the Affectiv Data for one participant,
we can see the various emotions throughout the
entire survey. As we would expect, the Long-Term
Excitement and Short-Term Excitement scores both
drop as the survey goes on. Initially, users are excited to be using EEG sensing equipment, likely for the first time. But once the novelty wears off, the mundane task of answering simple questions and moving features of their face around becomes boring, and the excitement scores drop off.

Fig. 9. Graph of the collected Affectiv Data over the course of the study. The emotional data is normalized to the [0,1] range, with 1 being a strong feeling and 0 being no feeling.
Frustration rises and falls throughout the survey,
but peaks the most near the end for all participants.
Because the frustration score can be paralleled with
boredom, once the user hits the end of the study,
all of the novelty has worn off, and their brain is
resetting to a more natural pattern. It is expected
that answering ten simple questions and repeating
eight actions would wear on most people over time.
We did mention meditation and engagement when discussing the Emotiv Affectiv Suite. Every participant recorded constant data throughout the survey for these two emotions. At this time we are not sure if this was
due to a hardware malfunction with the specific
headset we tested on, a weak signal strength, or
some other unforeseen error.
4.1.3 Expressiv Data
In addition to logging the gyroscope data and emo-
tional data, we looked to verify our hypothesis
about the facial motions recognized by the Emotiv
Headset. We plot the recording of blinking and eye-
brow events in Figure 11 and Figure 10 respectively.
From these we look to see if either motion could
be used to accurately and consistently turn a music
player on and off.
First looking at Figure 11, we can see that there
is no periodic pattern. We also note that there is
no significant difference between the number of
blinks recorded at a given time compared to the
number recorded in the ten second period where the
participant was consciously, continuously, blinking.
Fig. 10. Graph of recorded brow raise magnitudes throughout
the study. This magnitude is on a zero to one scale, and is a
measure of extent. The red outline is where we asked the user to
repetitively raise their brow. We can see that there is a significant
increase in brow motion in this time period.
Fig. 11. Graph of recorded blink events over the course of the
survey. The blink event is recorded on a binary scale; either it
happens or it does not. The red outline is where we asked the
user to repetitively blink their eyes. We can see that there are
only 5 or 6 blinks recorded in this time frame, far fewer than the
number of times the user blinked.
This further supports our original hypothesis that
blinking would not be an accurate facial motion to
be used with any sort of consistency.
On the other hand, Figure 10 shows the extent
to which the Emotiv headset recorded the brow
being raised. By looking at the graph we can see
the exact period when the participant was asked
to repeatedly raise their eyebrows. Other than this
one period, eyebrow extent at any one period is
relatively controlled. This supports our hypothesis that eyebrow motion could be used as an accurate state transition trigger.
We also looked at the recording of winking
with individual eyes in addition to the recording
of which way a person is looking. Neither action
returned significant data so we do not include the
figures in this report. We determined that neither
motion would provide consistent or accurate results
as a trigger for state changes.
4.2 Statistical Analysis
As part of our analysis on emotions, we looked at
the statistical correlation between positive answers
and short-term excitement, in addition to the cor-
relation between negative answers and frustration.
We would expect both to have positive relation-
ships.
We ran a Pearson’s Correlation test on Frustra-
tion, Short-Term Excitement, yes answers and no
answers. We took the absolute value of the gyro-
scope data to end up with a function which is larger
when the participant was answering. We can see
the results of our analysis in Figure 12. When the
Pearson Correlation Coefficient is close to one it
signifies a positive, linearly correlated relationship,
and close to negative one implies the opposite. As
we can see, the only relationship that exists is a moderate positive correlation between short-term excitement and yes answers. While this correlation is not strong enough to be used in an implementation, it is worth noting.
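Pearson's correlation coefficient, as used here, is the covariance of the two series divided by the product of their standard deviations. A self-contained sketch with hypothetical data:

```java
public class Pearson {

    // Pearson's r: covariance normalized by both standard deviations.
    public static double correlation(double[] a, double[] b) {
        int n = a.length;
        double ma = 0, mb = 0;
        for (int i = 0; i < n; i++) { ma += a[i]; mb += b[i]; }
        ma /= n;
        mb /= n;
        double cov = 0, va = 0, vb = 0;
        for (int i = 0; i < n; i++) {
            cov += (a[i] - ma) * (b[i] - mb);
            va += (a[i] - ma) * (a[i] - ma);
            vb += (b[i] - mb) * (b[i] - mb);
        }
        return cov / Math.sqrt(va * vb);
    }

    public static void main(String[] args) {
        // Hypothetical series: excitement score vs. average |gyro Y| per answer.
        double[] excitement = {0.2, 0.4, 0.5, 0.7, 0.9};
        double[] yesMotion  = {10, 22, 26, 35, 44};
        System.out.printf("r = %.3f%n", correlation(excitement, yesMotion));
    }
}
```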
We also look at the average magnitude of the
motion collected by the gyroscope. In Figure 13 we
plot the average magnitudes of the motion we col-
lect for all users. We only look at data greater than
40 inertial units because that allows us to see on
average how much a user moves their head while
answering a question specifically. As we can see,
yes answers, on average, require more motion than
no answers. This makes sense when we consider
how the human body has a larger range of motion
looking up and down compared to left and right.
We use this information in our implementation of
the results classification system to set thresholds for
each rating. We also see that in developing a system,
it would make sense to train a classifier for each
individual. Because people move in different ways,
a classifier would allow the ratings to be tailored to
the individual. We leave this for future work.
             Frustration  Excitement       No       Yes
Frustration      1.00000
Excitement      -0.16022     1.00000
X = No           0.04362     0.07417  1.00000
Y = Yes         -0.05421     0.31528  0.10533   1.00000

Fig. 12. The Pearson Coefficients of Frustration and Short-Term Excitement compared with X and Y axis gyroscope data.
Fig. 13. The average magnitude of yes answers and no answers
for each user. We can see that yes answers require more motion
in the head on average.
5 IMPLEMENTATION
We implement our findings in a Brain-Controlled
music player.
5.1 Existing Application
As a result of the study, we implement our response
classification system and state transition classifier
on top of a music player. The existing architec-
ture is shown in Figure 14. The base structure is
built on top of the .NET Windows Media Player,
which is shown in Figure 15. The system connects
to the Emotiv EEG headset using their API. Once we have access to the headset, we extract both the raw EEG data and the classifications that the Emotiv signal processing gives us; both of these data sets are stored in a database. This information,
in addition to the rating system we discuss next, is
compiled to drive a song recommendation engine.
When the media player loads the first and subse-
quent songs to play, the recommendation engine
drives this decision process.
The user has the option to select between Work,
Study, and Relax modes, and music is played which
Fig. 14. The existing architecture of the BCI controlled music
player. Input comes through the Emotiv headset and is then stored in the database and used to recommend songs for the user to listen to.
Fig. 15. The user interface of the Brain Controlled Music Player.
matches their mood for each mode. In addition, the
player starts in a mode which matches the time
of day, so the initial music played is more likely
to match what the user would like to listen to.
The information that drives the selection engine is
extracted from a series of ratings the user gives each
song once it is done playing. The rating is done on
a 1 to 5 scale, which ranges from ”I hated this song”
to ”I loved this song”. In addition to taking the
pure rating, the user's Arousal and Valence levels
are recorded using the Emotiv headset, and all of
this is compiled to rate the song for the given mode.
This information is in turn used to select songs for
the user as the player is used more.
5.2 On - Off Switch
As we discussed, one of the things we were looking
to evaluate in the user study was which facial event
would provide reliable and accurate recognition for
use turning the Emotiv system on and off. We deter-
mined from the study that the Emotiv system best
recognizes the motion of the brow, and this is what
we add to the existing music player application.
While the main idea was to find a way to turn the system on and off, there is no reason at the moment
to stop recording EEG data while the player is
running. Instead, we allow the user to pause and
unpause the music player using two raises of their
eyebrows. We have found this works with reliable
consistency while using the application. In addition,
we have shown that this can easily be extended to
an application which relies more heavily on brain
signals as input, and could quite easily be added to
tell the EEG headset to start and stop listening.
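The double-raise toggle can be sketched as a small state machine: a second brow-raise event arriving within a short window flips the playback state. The one-second window below is an illustrative choice, not a measured parameter of our application.

```java
public class BrowToggle {
    private final long windowMs;
    private long lastRaiseMs = -1;   // -1: no unpaired raise pending
    private boolean playing = true;

    public BrowToggle(long windowMs) {
        this.windowMs = windowMs;
    }

    // Called each time the headset reports a brow-raise event; two raises
    // within the window toggle play/pause. Returns the resulting state.
    public boolean onBrowRaise(long nowMs) {
        if (lastRaiseMs >= 0 && nowMs - lastRaiseMs <= windowMs) {
            playing = !playing;
            lastRaiseMs = -1;        // consume the pair
        } else {
            lastRaiseMs = nowMs;
        }
        return playing;
    }

    public static void main(String[] args) {
        BrowToggle toggle = new BrowToggle(1000);
        toggle.onBrowRaise(0);
        System.out.println(toggle.onBrowRaise(400));   // pair within 1 s -> paused: false
        toggle.onBrowRaise(5000);                      // lone raise, no toggle
        System.out.println(toggle.onBrowRaise(5400));  // second pair -> playing: true
    }
}
```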
5.3 Gyroscope Based Rating Dialog
As we learned in the user study, head motions such as nodding and shaking the head are both quite easy to detect and can yield more information than a simple yes or no. Using this information, we
implement a classification scheme for the user to
answer how much they enjoyed listening to a song.
In the existing implementation this dialog appears
when the user stops listening to a song, either by
pressing the ”next” button or when the song ends.
We extend this usage scenario by asking the user
simply if they liked the song that was playing. This
question can be seen in Figure 16.
Structurally, the main thread spawns off two
children, one to show the prompt and one to listen
for the gyroscope. Access to the EEG headset is
passed through the BCIEngine which is responsible
for recording the BCI data about the song playing.
Once we have access to the headset, it is easy to
extract the gyroscope delta and average the absolute
value of this over time. Once enough data has been
collected (we select 300 as a constant number of
points) we classify the response into either a strong
no, no, neutral, yes, or a strong yes. This is repre-
sented to the user in the dialog box shown in Figure
17. The user is able to simply accept the rating by
waiting five seconds or clicking the yes button, or,
if the system did not correctly classify how they felt
about the song, the user can change the rating.
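The classification step amounts to mapping the two averaged magnitudes onto the five ratings. The thresholds below are illustrative stand-ins for the static ones we set from the study data:

```java
public class RatingDialog {

    // Average absolute gyroscope delta over the collected window
    // (300 points in our implementation).
    public static double meanAbs(double[] deltas) {
        double sum = 0;
        for (double d : deltas) sum += Math.abs(d);
        return sum / deltas.length;
    }

    // Map averaged X (shake) and Y (nod) magnitudes to the five-point scale:
    // 1 strong no, 2 no, 3 neutral, 4 yes, 5 strong yes.
    public static int rate(double avgAbsX, double avgAbsY, double answerT, double strongT) {
        if (avgAbsX < answerT && avgAbsY < answerT) return 3;  // no clear motion: neutral
        if (avgAbsY >= avgAbsX) return avgAbsY > strongT ? 5 : 4;
        return avgAbsX > strongT ? 1 : 2;
    }

    public static void main(String[] args) {
        System.out.println(rate(5, 8, 40, 100));     // 3: no real motion
        System.out.println(rate(10, 70, 40, 100));   // 4: an ordinary nod
        System.out.println(rate(120, 15, 40, 100));  // 1: a vigorous shake
    }
}
```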
The overall rating structure works very well. For
users with enough variance between a standard an-
swer and a strong answer the system will accurately
classify their rating of the song within one point
every time. However, we do notice from our data
in the user study that setting static thresholds for
the different classifications is not optimal. We leave
Fig. 16. The dialog that appears when the user finishes playing
a song. The system listens for a response from the user or waits
five seconds before resolving to the default neutral value.
Fig. 17. The verification dialog that appears once a user has
classified a song. The user is allowed to change the rating if
they do not believe the system correctly rated the song.
training the system for individual use as future
work.
6 CONCLUSION
We have explored the use of the Emotiv EEG headset as a means for interacting with a computer.
While we were unable to find significant correla-
tions between reported brain activity and the an-
swers to simple yes and no questions, we were able
to determine that one’s head motions can provide a
reasonable scale of agreement and disagreement. In
addition, we learned that the Emotiv headset does
best when listening for motion in the brow. Utilizing
these two facts we implement an on-off switch and
rating classification system on top of an existing BCI
music player. Both of these contributions can easily
be extended to a myriad of different applications,
and have been shown to work. We are hopeful that
our research will be utilized to improve future brain
computer interfaces as both hardware capabilities
and consumer demands increase in the years to
come.
6.1 Future Work
There are a few areas in which we could extend our
current research. Because the user study was done
using one Emotiv EEG headset, we would like to
test these concepts using other commercial headsets
to see if we can find more significant correlations
between mood and how a user answers a question.
As EEG research becomes a bigger field in computer science, more data will need to be collected for
a variety of different headsets to fully understand
what parts of the brain fire for specific computer
interactions.
We also would like to use the emotional data and
gyroscope data to improve our rating classification
system in the music player. By analyzing how a user's head is moving while they listen to a song, it
might be possible to adjust our rating system to an
even finer grained scale to truly understand when
a user liked a certain song. Moreover, if we can
combine these results with the fluctuations in mood,
we might be able to come up with an even stronger
guess of how the user felt about a particular song.
APPENDIX A
SURVEY GIVEN TO PARTICIPANTS
The survey as presented is shown here. Each ques-
tion / action was displayed to the participant for ten
seconds before disappearing. Another five seconds
elapsed between questions.
1) Have you had a meal in the last twenty four
hours?
2) Have you left the country in the last twenty
four hours?
3) Have you ever been to the state of Virginia?
4) Do you like chocolate?
5) Have you ever been to Havana?
6) Can you run a mile in under 5 minutes?
7) Can you ride a bike?
8) Can you whistle?
9) Have you ever purchased a television?
10) Did you have coffee this morning?
11) Please move your eyebrows up and down
for ten seconds.
12) Please blink slowly for ten seconds.
13) Please wink with your left eye only for ten
seconds.
14) Please wink with your right eye only for ten
seconds.
15) Please use your eyes to look left and back to
center. Repeat this for ten seconds.
16) Please use your eyes to look right and back
to center. Repeat this for ten seconds.
17) Please use your eyes to look up and back to
center. Repeat this for ten seconds.
18) Please use your eyes to look down and back
to center. Repeat this for ten seconds.
REFERENCES
[1] F. Beverina, G. Palmas, S. Silvoni, F. Piccione, and S. Giove,
“User adaptive bcis: SSVEP and P300 based interfaces,”
PsychNology Journal, 2003.
[2] S. Amiri, A. Rabbi, L. Azinfar, and R. Fazel-Rezai, “A
review of P300, SSVEP and hybrid P300/SSVEP brain-
computer interface systems,” Biomedical Image and Signal
Processing Laboratory, Department of Electrical Engineering,
University of North Dakota, 2013.
[3] M. Jackson and M. Rudolph, “Applications for brain-
computer interfaces,” IEEE, 2010.
[4] A. Stopczynski, J. E. Larsen, C. Stahlhut, M. K. Petersen,
and L. K. Hansen, “A smartphone interface for a wireless
EEG headset with real-time 3d reconstruction.”
[5] E. J. Rechy-Ramirez, H. Hu, and K. McDonald-Maier,
“Head movements based control of an intelligent
wheelchair in an indoor environment,” IEEE, 2012.
[6] C. Pereira, R. Neto, A. Reynaldo, M. Canndida de Mi-
randa Luza, and R. Oliveira, “Development and evalua-
tion of a head-controlled human-computer interface with
mouse-like functions for physically disabled users,” Clinics,
2009.
[7] S. Fazli, M. Danoczy, F. Popescu, B. Blankertz, and K.-R.
Muller, “Using rest class and control paradigms for brain
computer interfacing,” IWANN, 2009.