Developing a Smarter BCI through Task Discrimination
Magena Fura
Under the Guidance of Professor Howard Chizeck
BioRobotics Laboratory
The University of Washington
June 3, 2016
Design and Research Proposal for 402 Track
Abstract
The sensorimotor system is vital for translating brain signals into movement, yet
individuals with neuromuscular disorders or amputations suffer from a loss of sensorimotor
control. Brain-Computer Interfaces (BCIs) are devices which restore this missing link by
obtaining control signals directly from the brain or other sources and using them to control
movement of devices, such as a computer cursor or prosthetic limb. Electroencephalography
(EEG) signals are usually used in BCIs; however, EEG has significant limitations in its effectiveness
as a control signal for natural movement: it is difficult to learn to use, it cannot capture signals
from all neurons responsible for movement, and it neglects the nonlinear nature of brain-controlled
limb movement. Combining EEG signals with another control source, such as eye movement,
provides additional information about a user’s intent and thus offers increased control
capabilities. However, current BCIs are optimized for controlling movement associated with a
single task, such as accurately guiding a cursor to a specific spot on a screen; they rarely address
the situation where a user wants to engage in and switch quickly between several different tasks,
which happens frequently in daily life. My project aims to address this deficit by determining
whether BCIs can distinguish between different tasks in order to recognize the user’s goal. I will
focus on tasks involved in cursor movement including randomly searching a screen, planning a
cursor’s trajectory before initiating movement, and guiding cursor movement. My project will
involve designing a testing interface in Unity in order to study characteristic EEG and eye-
movement signals associated with these tasks. The results of this study have the potential to lead
to “smarter” BCIs which can recognize task signatures as well as identify conditions under which
EEG and eye-movement signals are more or less useful as a control source.
Table of Contents
Background and Significance
Evaluation of Previous Approaches
Consequences of Success
Constraints
     Economic
     Ethical
     Social
     Legal
Specific Phases
     Overview of Phases
     Phase I: Proof-of-Concept of Testing Interface using Haptics Signals
     Phase II: Integration of EMG Signals
     Phase III: Integration of EEG Signals
Design and Research Strategy
Engineering Design Standards
Key Personnel
Facilities, Equipment, and Resources
Appendix
     Timeline
     Additional Figures
References
Background and Significance
In healthy individuals, the sensorimotor system relays brain signals to control muscular
movement; however, this function is compromised in the more than 11.5 million people suffering
from neuromuscular disorders worldwide [1]. Brain-Computer Interfaces (BCIs) offer the
potential to restore a level of sensorimotor control as they obtain control signals directly from the
brain in order to guide movement. Individuals with amputations, surgeons performing a remote
robotic surgery, or pilots wanting to fly hands-free may also benefit from a brain-computer
interface. Electroencephalography (EEG) signals offer the ability to infer a user’s intent and are
popular for use in BCIs since EEG is well-characterized, low-cost, and provides high temporal
resolution [2]. BCIs have been successfully used to map EEG signals to cursor movement on a
screen, movement of robot arms, wheelchairs, and prosthetics, and to allow communication via a
spelling device [3] [4]. However, there are significant limitations involved in EEG-based BCI
control. Firstly, a BCI requires an explicitly defined mapping between each subset of neurons and
the output movement of the device or cursor. Thus, the burden is placed on the subject to “learn”
to control the device by activating these specific locations in the brain, a task which requires
significant mental effort and which is not always successful [4]. Furthermore, BCIs can capture
signals only from a small subset of the many neurons responsible, both directly and indirectly, for
driving movement. Mimicking sensorimotor control with a BCI is also complicated by the highly
nonlinear nature of brain-controlled arm movements; as a result, these are frequently simplified to
linear mappings for BCI control [5]. Low spatial resolution and artefacts caused by muscle and
eye movement limit EEG accuracy [4]. EEG-based BCI control is thus limited in its ability to
accurately mimic natural movement.
Tracking eye movements (EMs) presents an alternative strategy for controlling movement
of a cursor or robotic arm. Saccadic eye movements – rapid changes in the location of focus –
have been shown to be a rich source of information about an individual’s intended movements
and constitute a potential control signal for cursor movement [6] [7]. For instance, individuals
tend to look at an object before making a motion toward it, and duration of fixation on one object
over another during a decision-making task is a good indicator of a person’s decision [8] [9].
However, using gaze to inform movement is problematic since it is not a natural sensorimotor
connection: users are not naturally accustomed to moving or selecting items with their eyes. Eye
tracking techniques are also highly susceptible to misconstruing random eye motion (“searching”
eye movements) for meaningful control signals (“intentional” eye movements).
The challenge of how to best control movement given a limited set of control signals
obtained using BCI may be addressed through a combined approach that uses gaze and EEG
signals as well as information about the signatures of different tasks. A BCI that can predict the
user’s intended goal based on eye movement and EEG signals could, for instance, tailor its own
control algorithm to better assist the user in carrying out that goal. A BCI might be able to infer
whether the user is intending to select an object or simply note its location by recognizing specific
task signatures in the user’s eye movements and EEG signals. The first step towards producing
such a “smart” BCI is to determine whether such task-specific signatures exist in the combination
of eye-movement and EEG signals. This project will focus on tasks involved in cursor movement,
including planning a path before initiating movement and guiding the cursor along the pre-
planned trajectory using BCI control. The goal of this project is to determine whether EEG and
eye-movement data together provide a way to distinguish “planning” and “guiding” tasks, two
tasks which utilize eye movements generally categorized as “intentional” eye movements.
Evaluation of Previous Approaches
Previous strategies for improving EEG-based control of BCI have focused on obtaining
better spatial resolution of brain patterns using electrocorticogram (ECoG), a technique which
uses electrodes surgically implanted on or near the neocortex [3]. However, this method is not a
viable long-term solution as it is highly invasive and has persistent problems with electrode
contact and biocompatibility [3] [4].
Gaze-only BCI control offers a more intuitive and non-invasive approach to controlling
movement, but has the drawback of being unable to distinguish between a casual glance at an
object and an intention to move in that direction [3]. The possibility of using a blink to indicate
intent to select has been rejected because users are unable to prevent natural blinks, which
are subsequently mistaken for an intention to select an object [10].
Hybrid EEG-gaze BCIs have recently been gaining interest [11]. For instance, previous
studies have compared the efficiency of eye-tracking and EEG as control signals for cursor
movement. EEG signals have been incorporated in hybrid BCIs to simulate the “selection” of an
object, with gaze data controlling movement, to obtain better accuracy; however, this has been
shown to be slower than traditional gaze-only cursor movement [10]. Studies have also
looked at “random” EMs (which occur, for instance, when a user is casually looking around a
screen) compared to “intentional” EMs (when a user is directing a cursor using gaze information),
as well as utilizing a combination of eye-tracking with EEG signals to obtain better control [3] [6].
These have certainly improved BCI control with regard to carrying out a single task more
accurately; however, little advancement has been made in exploring BCI control of multiple tasks
in which the user’s eye movements are always “intentional” but the user is intending to carry out
distinct tasks. For instance, both “planning” EMs (which occur prior to initiation of cursor
movement) and “guiding” EMs (where the user is actively guiding the cursor using gaze
information and imagined hand-movements) may be considered to be “intentional” EMs, but the
distinction between these tasks is unclear.
This project aims to answer the question of whether eye movement and EEG signals can
be combined in order to reliably distinguish between the “planning” and “guiding” stages of
controlled cursor movement. If so, these signatures may be used to provide better control of a BCI
by a user carrying out multiple tasks. In this project, the user will plan a route and guide an
object through a maze designed in Unity, using a combination of EEG and gaze information, while
data on the number and magnitude of saccades, dwell time (time spent focused on one area of
the screen), and EEG amplitude and frequency are collected. Analysis of these parameters will be
carried out in order to determine whether the “planning” and “guiding” stages have different
characteristic signatures.
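As a rough illustration of how these eye-movement parameters might be computed, the sketch below derives saccade frequency and mean dwell time from a gaze trace using a simple velocity threshold. This is a minimal sketch, not the lab's actual pipeline; the pixel units and threshold value are assumptions to be calibrated against the myGaze output.

```python
import numpy as np

def saccade_features(gaze_xy, fs, vel_thresh=1000.0):
    """Summarize saccades and dwell time from one gaze trace.

    gaze_xy    : (N, 2) array of gaze positions in screen pixels
    fs         : sampling rate in Hz
    vel_thresh : speed (px/s) above which a sample is treated as
                 saccadic; a placeholder to be calibrated per setup
    """
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    # Instantaneous gaze speed between consecutive samples.
    speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs
    is_saccade = speed > vel_thresh
    # Count saccades as rising edges of the above-threshold mask.
    n_saccades = int(is_saccade[0]) + int(np.sum(is_saccade[1:] & ~is_saccade[:-1]))
    duration_s = len(gaze_xy) / fs
    fixation_time_s = np.sum(~is_saccade) / fs
    return {
        "saccade_freq": n_saccades / duration_s,                 # saccades/s
        "mean_dwell": fixation_time_s / max(n_saccades + 1, 1),  # s/fixation
    }
```

Features like these, computed separately for each stage of the game, are what the statistical comparison between "planning" and "guiding" would operate on.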
Consequences of Success
The ultimate goal in enhancing BCI control is to facilitate the interaction between user and
machine. Ideally, a BCI should be minimally frustrating, non-invasive, and convenient, with
additional criteria including speed of task completion, accuracy, and elimination of
unintended movements. While BCI control has seen substantial improvements following the advent of
hybrid systems combining EEG and eye-movement information, significant gaps remain in the
knowledge of the different stages of movement-related tasks [10] [11]. “Planning” versus “guiding”
eye movements have not been studied in concert with EEG for the purpose of enhancing BCI
control, and this represents a potential improvement in current hybrid eye-tracking BCIs.
Successful incorporation of “planning” and “guiding” phase signatures into BCI control will lead
to easier and more intuitive control of motion by allowing the BCI to dynamically respond to task
signatures and tailor its control algorithm to improve speed and accuracy of movement tasks.
Improved BCIs have the potential to greatly benefit individuals with amputations or who suffer
from neuromuscular diseases. BCI technology is also relevant for healthy individuals, for instance
to allow for control of a video game by thought, piloting of an airplane hands-free, or performance
of a remote surgery using a robotic arm.
Constraints
Economic
Limitations in the practicality and applicability of this research include economic
constraints, since brain-computer interfaces are expensive and require extensive training to learn
to use. One goal in improving BCI control is to facilitate motor control and communication as well
as develop better prosthetic arms and other devices; however, this technology may be
prohibitively expensive for many individuals. Thus, improvements in BCI technology may benefit
only the wealthiest individuals who can afford these high prices.
Ethical
Several ethical issues come to mind when considering the implications of brain-computer
interface research. Firstly, learning to control BCIs with EEG is quite difficult and can be
impossible to master depending on a patient’s cognitive challenges [12]. It is plausible that
patients and/or their caregivers may have unreasonably high expectations of how much control a
BCI will provide, resulting in psychological harm. Additionally, a natural application of BCI is in
allowing locked-in patients – who are fully aware though paralyzed – to communicate. A potential
issue in this situation would arise around communicating the patient’s wish to continue or
discontinue life support, since it is unclear what level of communication a BCI can offer for
life-or-death decisions, or whether the patient is mentally capable of making such a decision.
In both situations, BCI use should be approached with extreme caution, with all parties clearly
informed about the limitations of BCI in restoring mobility or communication.
Social
The issues of “mind-reading” and the invasiveness of certain EEG techniques (such as
ECoG) contribute to social issues surrounding brain-computer interfaces. Resistance to BCI is
often due to misperceptions about technology that has the potential for combining human and
machine, so education about BCI use is important, especially for subjects involved in BCI
research. Precautions will be taken to preclude any possible spread of misinformation which could
contribute to public resistance of BCI research.
Legal
Because BCIs obtain signals directly from the brain, there are distinct possibilities of legal
and privacy issues arising from BCI research and popular use. Brain signals contain intimate
information about an individual’s intentions, emotions, thoughts, and interests, and these could
potentially be mishandled or deliberately misused. For instance, BCIs can be used maliciously to
infer an individual’s familiarity with certain faces, religious beliefs, and sexual preference, as well
as information such as name and PIN, by presenting a list of random names and numbers [13].
Limiting such “brain spyware” will likely become a significant problem in the near future; openly
discussing these privacy issues is a crucial first step.
Specific Phases
Overview of Phases
This project aims to develop an interface for studying EEG and eye-tracking signals to
investigate whether the “planning” and “guiding” stages of a movement-control task have
different characteristic EEG and eye-movement signatures. Functionally, the interface will
present the subject with a simple game that incorporates “searching”, “planning”, and “guiding”
stages to move a cursor on a screen, while collecting eye-movement and EEG signals (see Figure
1 for a Functional Decomposition Diagram). Incorporating “searching” provides a method to
verify eye-movement data since “searching” EMs are well-characterized [6]. In order to collect
this data, I will develop an interface in Unity and iterate the design in each phase by testing it with
the specifications found in Table 1. At minimum, the interface must achieve an 80% success rate
across all three tasks within three minutes, as well as statistically significant differences in eye
movements between the “searching” and “planning” phases. Qualitative aspects of the interface
will also be tested including clarity of the goal (to eliminate subject confusion) and whether the
subject has enough time to complete each task without becoming distracted. This will validate
that the interface is collecting data primarily focused on the signatures in “searching”, “planning”,
and “guiding” tasks. I will be starting with a simple and easily-obtained control signal from the
Phantom Omni haptics device (which provides touch feedback of virtual forces) in order to
initially develop and test the interface.
This project will consist of three phases, each characterized by increased complexity in the
type of control signal used. First, and as a proof-of-concept of the testing interface, I will
implement signals from the Omni haptics device along with eye movement signals. Next, I will
replace the haptics signals with EMG (electromyography) signals, which provide intermediate
complexity as a control signal. Finally, and as a stretch goal, I will replace EMG signals with EEG
signals. The last phase is a stretch goal since it is not guaranteed that subjects will successfully
learn to control the BCI, and therefore that usable EEG signals will be obtained. Thus, answering
the question of whether “planning” and “guiding” tasks differ significantly in their EEG and EM
signatures (which is contingent on obtaining EEG signals) is also a stretch goal.
Figure 1: Functional Decomposition Diagram. Inputs are 1) Data Collection Equipment; 2) Test
Subject; 3) Unity Software; and 4) Algorithm to Control the Cursor. The output is the collected data
of Eye-Movement and EEG signals corresponding to the “searching”, “planning”, and “guiding”
tasks carried out by the subject.
Metric | Specification | Target Values | Validation Tool
1 | Collection of control signal data | 100% | Software saves or fails to save data
2 | Difference between “searching” EMs and “planning” EMs | p < .05 | t-test between saccade frequencies during searching and planning
3 | Total time to completion | All tasks < 3 min total | Time between onset and completion
4 | Accuracy of control | Avoid 80% of obstacles | Interface monitors collisions
5 | Interface usability | Proceed or alter interface | User feedback
6 | Clear goal | Proceed or alter interface | User feedback

Table 1: Needs specifications table. Both quantitative aspects of the game (time to completion,
accuracy, successful data collection, different EMs between “searching” and “planning”) and
qualitative specifications (usability and clarity) will determine “success” of the interface at each
phase.
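For concreteness, Metric 2 could be validated with a two-sample t-test along the following lines; the saccade frequencies shown are hypothetical placeholders, not collected data.

```python
from scipy import stats

# Per-trial saccade frequencies (saccades/s) for each stage;
# hypothetical placeholder values for illustration only.
searching_freqs = [3.1, 2.8, 3.4, 2.9, 3.2]
planning_freqs = [1.9, 2.2, 1.7, 2.0, 2.1]

t_stat, p_value = stats.ttest_ind(searching_freqs, planning_freqs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # Metric 2 passes if p < .05
```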
Figure 2: Design strategy for each phase, showing the quantitative specifications as well as the
qualitative feedback from the user that must be met before moving to the next phase.
Phase I: Proof-of-Concept of Testing Interface using Haptics Signals
The aim of Phase I is to develop an interface in Unity and provide a proof-of-concept test
of the interface that demonstrates successful collection of eye-movement signals and of signals
from the Phantom Omni (3D location in space as well as force exerted by the user’s hand). In this phase,
eye-movements and the Omni’s positional data are the control signals that guide the cursor
through the on-screen interface. This interface will present the subject with a task that involves
randomly searching the screen, planning a route to move a cursor between different points, and
then attempting to guide the cursor between those locations on-screen. By designing and refining
the testing interface, I will arrive at a design that achieves these aims.
Phase II: Integration of EMG Signals
In Phase II, EMG signals will be collected along with eye-movement data while the subject
completes the task on the screen. The motivation for using EMG is that it is more similar in
nature to EEG than a haptics signal: it is more complex to set up as a control signal than the
haptics device, yet easier to obtain than EEG. Integrating EMG signals is therefore a natural
intermediate step before using EEG.
Phase III: Integration of EEG Signals
In Phase III, EEG signals will be collected along with eye-movement data as the subject
completes the tasks on-screen. Similar to Phases I and II, Phase III will verify that the interface
designed in Unity is appropriate for collecting eye-movement and, here, EEG signals. The goal of
Phase III is to demonstrate collection of eye-movement and EEG signals in order to answer the
question of whether the different “searching”, “planning”, and “guiding” stages of a movement-
control task induce distinct eye-movement + EEG signatures from the user.
Design and Research Strategy
Phase I: Proof-of-Concept of Testing Interface using Haptics Signals
Approach:
In Phase I a test interface will be designed in Unity and verified using eye-movement and
haptics signals from the Phantom Omni. The Phantom Omni is an intuitive device that allows the
user to physically “write” in the air and displays the result on a screen (Figure 3). It provides six
degrees of positional sensing (x, y, z, pitch, roll, and yaw) and is already integrated with Unity
in the BioRobotics Lab.
The Phantom Omni mimics a natural writing device, so it is easy to use and provides a way
to quickly test the interface without the added complexity of obtaining EMG signals or requiring
a user to learn EEG control. The testing interface will present the user with a game involving 1)
searching the screen for various objects, 2) planning a route between two or more objects as well
as around obstacles, and 3) guiding a cursor using a combination of eye-movements and haptics
signals from the Phantom Omni. The preliminary design will be based on the “center-out” game
scheme, common in gaze-driven movement, which places a cursor at the center of several objects
and requires the user to move the cursor toward one or more of these objects (Figure 6).
Initially, a circle of different objects will be placed on the screen and the user will be
allowed to look around and note their locations. Next, a “goal” object will be revealed indicating
which objects the user must select with the cursor, allowing the user to plan a route between the
objects. Finally, the cursor will be released to the user’s control allowing the user to guide it from
object-to-object along their pre-planned path. Eye-movement signals and positional information
from the Phantom Omni will be collected throughout the task.

Figure 3: The Phantom Omni device allows the user to write in the air, displays the result in
Unity, and controls cursor movement.

The design process for the testing
interface will involve varying the size, location, number, and type of objects to be collected, the
number of obstacles, the time devoted to the “searching”, “planning”, and “guiding” tasks, and the
arrangement of icons. The “goal object” image will be placed so as to be as unobtrusive as
possible to the user. Although the center-out scheme is a well-tested technique in
studying movement control, other arrangements of icons and obstacles may be considered in
order to determine the best method for obtaining “searching”, “planning”, and “guiding” eye-
movement and haptics signals. At this phase, testing will primarily be done on lab members.
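To make the center-out geometry concrete, the following minimal sketch (written in Python for brevity, though the actual interface will be implemented in Unity) spaces a configurable number of targets evenly on a circle around the cursor’s start position; the target count and radius are exactly the kind of parameters the design iterations will vary.

```python
import math

def center_out_targets(n_targets, radius, center=(0.0, 0.0)):
    """Place n_targets evenly on a circle around the start position,
    as in a standard center-out reaching scheme."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n_targets),
         cy + radius * math.sin(2 * math.pi * k / n_targets))
        for k in range(n_targets)
    ]

# Example: eight targets on a 300 px circle around the screen center.
print(center_out_targets(8, 300.0))
```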
Deliverables:
Phase I will produce an interface in Unity designed to promote eye-movement and haptics
signals related to “searching”, “planning”, and “guiding” tasks. Success in this phase requires a
functional testing interface that presents these distinct tasks. Ideally, the interface will reliably
demonstrate that eye-movements (characterized by dwell time and saccade frequency) and user
control of the Omni device differ between each task; however, as long as the user is able to carry
out the searching, planning, and guiding tasks set out in the Unity game, this phase will be
considered complete. A trial will be deemed “successful” if the cursor collects all objects and
avoids all obstacles, and three users must pass 8 of 10 trials for this stage to be complete.
Anticipated Outcome
Designing the testing interface is expected to be successful. The main issues will likely be
in integrating the eye-tracking device with Unity, since the myGaze eye-tracking software has not
yet been set up with Unity, and in making design decisions about the location, number, and
size of icons and the time allocated to each of the three tasks. Feedback about these
parameters will be obtained from lab members in order to make the game as easy to use as
possible, while the number of collisions with obstacles will be recorded. The eye-tracking device
requires a distance of 60 cm between the user’s eyes and the IR sensor, so the icons should be
large enough to be easily seen from that distance. At this stage it is not required for the signatures
related to each task to be demonstrably different, since EEG has not yet been obtained.
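A quick sizing check is sketched below; the 1.5-degree visual-angle target is my assumption for comfortable legibility, not a requirement taken from the eye-tracker documentation.

```python
import math

d_cm = 60.0      # required eye-to-sensor viewing distance
theta_deg = 1.5  # assumed minimum visual angle an icon should subtend
size_cm = 2 * d_cm * math.tan(math.radians(theta_deg) / 2)
print(f"minimum icon size ~ {size_cm:.2f} cm")  # ~1.57 cm at 60 cm
```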
Phase II: Integration of EMG Signals
Approach:
In Phase II, electromyography (EMG) signals will be obtained from the bicep muscle in
order to guide cursor movement. This phase is an important preliminary step in testing the
interface designed in Phase I, since EMG signals are voltage-based and thus similar in nature to
EEG signals, while having the benefit of a larger amplitude and of not requiring the placement of
multiple electrodes as EEG does. The BioRobotics Lab has a method to import EMG signals into
MATLAB, and from there they may be integrated into the control system in Unity for moving the
cursor. In this stage the accessibility of the Unity interface will be evaluated using EMG as a
control signal, in concert with eye movement. The goal is to show that signals with lower
resolution and more noise than the Phantom Omni (i.e. EMG and EEG) can still be used to
maneuver the cursor. Three members of the BioRobotics Lab will be chosen to complete 10 trials
of the Unity game while eye movement and EMG signals are obtained. As in the first phase, Phase
II will be successfully completed when the three test subjects pass 80% of trials using eye
movement and EMG as control signals. A trial will be deemed “successful” if the cursor collects
all objects and avoids all obstacles.
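One plausible way to turn bicep EMG into a cursor control signal is to rectify and low-pass filter the raw recording, then map the resulting envelope to cursor speed. The sketch below illustrates this under assumed parameters (the filter cutoff, gain, and deadband are placeholders); it is not the lab’s actual MATLAB import pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg, fs, cutoff_hz=5.0):
    """Rectify the raw EMG and low-pass filter it to obtain a smooth
    amplitude envelope, a common first step in EMG-driven control."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))

def cursor_speed(envelope, gain=1.0, deadband=0.05):
    """Map envelope amplitude to cursor speed, ignoring resting-level
    noise below a small deadband (gain and deadband are placeholders)."""
    return gain * np.maximum(envelope - deadband, 0.0)
```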
Deliverables:
This phase will demonstrate the functionality of the testing interface designed in Phase I
when eye-movement and EMG signals are used as the control signals. The main goal of this phase
is to provide an intermediate step in complexity with regards to the control signal (EMG vs EEG)
in order to streamline the troubleshooting process.
Anticipated Outcomes:
Implementing the EMG signal in the Unity interface is likely to be straightforward. However, since
EMG signals are more difficult to control than the Phantom Omni device, it may be that the
current design of the test interface is too difficult to use with EMG. In this case, iteration of the
current design will occur, with each iteration tested first with the Omni device and second with
EMG signals. It may be that using EMG is more mentally challenging than “drawing” with the
Omni haptics device, in which case the complexity of the game will be reduced by decreasing the
number of symbols to be collected and obstacles to be avoided and giving more time for each of
the “searching”, “planning”, and “guiding” stages.
Phase III: Integration of EEG Signals
Approach:
The aim of this phase is to obtain electroencephalography (EEG) signals (in addition to
eye-movement signals) from a user carrying out the three coupled tasks in the Unity interface.
Furthermore, I hope to show that the characteristic eye-movement and EEG signals differ
significantly from stage to stage, and that these “signatures” can be implemented in a smart BCI
that can respond to the intended task of its user. Non-invasive scalp EEG signals will be obtained
in Phase III along with eye-movement data and used to control the cursor’s movement. Electrodes
will be placed according to the international 10-20 system [14]. First, the interface will be
improved through design iterations until three test subjects are able to average an 80% success
rate. Next, eye-movement and EEG signals will be analyzed for consistent differences between the
“searching”, “planning”, and “guiding” stages of the Unity game. The EEG signals will be analyzed
for characteristic fluctuations in power associated with different frequency bands: for instance,
the Mu rhythm at 9-13 Hz is typically desynchronized during planning and execution of hand
movement while it is synchronized when real or imagined hand-movement is suppressed [15].
Eye-movement data will be analyzed for characteristic dwell times (time spent focusing on a single
location) and saccade frequency.
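As an illustration of the intended frequency-band analysis, Mu-band power for a single EEG channel could be estimated with Welch’s method as sketched below; the window length and any baseline normalization are assumptions to be settled during the study.

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(eeg, fs, band=(9.0, 13.0)):
    """Estimate power in the Mu band (9-13 Hz, per [15]) for one EEG
    channel. A drop relative to a rest baseline would indicate the
    desynchronization expected during planning and execution."""
    eeg = np.asarray(eeg, dtype=float)
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Integrate the PSD over the band (welch returns evenly spaced bins).
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))
```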
Deliverables:
In this phase I aim to integrate EEG signals into the control system of the Unity game and
use eye movements and EEG to verify the interface according to Figure 2. Furthermore, I will
analyze eye-movements and EEG signals to determine whether there are significant differences
that distinguish the “planning” and “guiding” tasks.
Anticipated Outcome:
This phase is likely to be the most challenging since it relies on the ability of the subjects
to learn to control BCI in order to use EEG as a control signal. If subjects are unable to learn BCI
sufficiently, my fall-back option is to use EMG as the control signal and analyze EMG + eye-
movement signals to determine whether characteristic events occur in the three stages. I
hypothesize that there are characteristic differences between the “planning” and “guiding” tasks
that will be revealed in the user’s eye movements and EEG signals.
Engineering Design Standards:
The design portion of this project involves building a testing interface in Unity in order
to obtain eye-movement data and haptics/EMG/EEG data (as appropriate for each phase). Since
the testing interface is computerized, the potential safety risks associated with this design are
very low: EMG is completely non-invasive, and the type of EEG we will be using is also
non-invasive. A standard electrode placement will be used
(see Figure 4) and all subjects’ voluntary consent will be obtained before collecting their EMG and
EEG signals while they test the design interface in Unity. EEG recordings must follow standards
set out by the International Federation of Clinical Neurophysiology [14]. This involves labeling
basic patient information including name, date of birth, date of the test, laboratory numbers,
current medication, and additional comments. Further, any computer analysis of EEG recordings
should not stand alone but should be accompanied by human visual analysis. While we are not
engaging in clinical EEG, we also plan to follow these labeling practices.
Figure 4: Standard EEG electrode placement [14]
Key Personnel:
Margaret Thompson and Andrew Haddock will oversee my project and advise me on at
least a weekly basis. Margaret Thompson is a graduate student working with BCIs and has
knowledge about the process involved in learning to use a BCI. Andrew Haddock is a graduate
student working on modeling and control of Dynamic Neural Systems and will oversee my use of
the eye-tracker. I have been meeting with Margaret and Andrew on a weekly basis. Professor
Howard Chizeck will meet with me for regular progress reports (no less than monthly) and will
assign me my grade for the 402 project.
Facilities, Equipment, and Resources
The BioRobotics Lab is located in the Electrical Engineering building on the UW campus,
and has the resources necessary to conduct BCI and eye-tracking research. The BioRobotics Lab
will provide all equipment including the MyGaze Eye-Tracker and software, EMG equipment,
EEG equipment, Omni haptics device, and computer for designing the Unity interface and signal
processing. I will use both MATLAB and Unity, which I have installed on my own computer for
work both inside and outside of lab.
Appendix:
Timeline: My anticipated timeline is as follows. I will begin work during June 2016 and plan to
complete each phase in approximately 3 months, leaving me 2 months as a buffer in the event of
unforeseen circumstances, which will also give me time to write my report.
Additional Figures
Figure 5: Three possible testing interfaces developed in Unity to induce “searching”, “planning”, and
“guiding” tasks. A) A center-out reaching control scheme [16]. B) A maze containing various obstacles.
C) A random word search.
Figure 6: A prototype design based on the center-out movement-control test. First, a group of
symbols are displayed on a screen, allowing the user to search the screen and note symbol location.
Next, a “goal” shape is revealed, allowing the user to plan a path of movement to collect the correct
symbols. Finally, a cursor is revealed at the center location allowing the user to control its
movement using a combination of control signals (eye-movement and haptics/EMG/EEG). This
design incorporates “searching”, “planning”, and “guiding” tasks to move the cursor.
References
[1] J. Deenen et al., "The Epidemiology of Neuromuscular Disorders: A Comprehensive Overview of the
Literature," Journal of Neuromuscular Diseases, vol. 2, pp. 73-85, 2015.
[2] T. O. Zander and C. Kothe, "Towards Passive Brain-Computer Interfaces: Applying Brain-Computer
Interface Technology to Human-Machine Systems in General," Journal of Neural Engineering, vol. 8, no. 2, 2011.
[3] B. Huang et al., "Integrating EEG Information Improves Performance of Gaze Based Cursor
Control," in 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, 2013.
[4] M. van Gerven et al., "The Brain-Computer Interface Cycle," Journal of Neural Engineering, vol. 6, no. 4, pp. 1-10, 2009.
[5] M. Golub et al., "Brain-Computer Interfaces for Dissecting Cognitive Processes Underlying
Sensorimotor Control," Current Opinion in Neurobiology, vol. 37, pp. 53-58, 2016.
[6] S. Lee et al., "Effects of Search Intent on Eye-Movement Patterns in a Change Detection
Task," Journal of Eye Movement Research, vol. 8, no. 2, pp. 1-10, 2015.
[7] E. A. Corbett et al., "Real-Time Evaluation of a Noninvasive Neuroprosthetic Interface for Control of
Reach," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp.
674-682, 2013.
[8] G. Bird et al., "The Role of Eye Movements in Decision Making and the Prospect of Exposure
Effects," Vision Research, vol. 60, pp. 16-21, 2012.
[9] S. F. W. Neggers and H. Bekkering, "Coordinated Control of Eye and Hand Movements in Dynamic Reaching," Human
Movement Science, vol. 21, no. 3, pp. 349-376, 2002.
[10] R. Vilimek and T. O. Zander, "BC(eye): Combining Eye-Gaze Input with Brain-Computer Interaction," in Conference
on Universal Access in Human-Computer Interaction, San Diego, 2009.
[11] G. Pfurtscheller et al., "The Hybrid BCI," Frontiers in Neuroscience, vol. 4, no. 42, pp. 593-602, 2010.
[12] W. Glannon, "Ethical Issues with Brain-Computer Interfaces," Frontiers in Systems Neuroscience, vol. 8, no. 136,
2014.
[13] T. Bonaci and H. Chizeck, "Privacy by Design in Brain-Computer Interfaces," UW Dept. of Electrical Engineering, Seattle,
2013.
[14] M. Nuwer et al., "IFCN Guidelines for Topographic and Frequency Analysis of EEGs and EPs. Report
of an IFCN Committee," Electroencephalography and Clinical Neurophysiology, Supplement, 1994.
[15] G. Pfurtscheller et al., "Mu Rhythm (de)Synchronization and EEG Single-Trial Classification of
Different Motor Imagery Tasks," NeuroImage, vol. 31, pp. 153-159, 2006.
[16] A. Hewitt et al., "Representation of Limb Kinematics in Purkinje Cell Simple Spike Discharge is
Conserved Across Multiple Tasks," Journal of Neurophysiology, vol. 106, no. 5, pp. 2232-2247,
2011.

More Related Content

What's hot

Brain computer interface
Brain computer interfaceBrain computer interface
Brain computer interfaceSivapradeep R
 
Brain Computer Interface WORD FILE
Brain Computer Interface WORD FILEBrain Computer Interface WORD FILE
Brain Computer Interface WORD FILEDevendra Singh Tomar
 
Brain Computer Interface-BCI
Brain Computer Interface-BCIBrain Computer Interface-BCI
Brain Computer Interface-BCISejal Anand
 
Brain computer interface based
Brain computer interface basedBrain computer interface based
Brain computer interface basedMohammed Saeed
 
Brain computing interface
Brain computing interfaceBrain computing interface
Brain computing interfaceMainak Shil
 
Bci controlled calls
Bci controlled callsBci controlled calls
Bci controlled callsTejash Popate
 
Brain computer interface based smart keyboard using neurosky mindwave headset
Brain computer interface based smart keyboard using neurosky mindwave headsetBrain computer interface based smart keyboard using neurosky mindwave headset
Brain computer interface based smart keyboard using neurosky mindwave headsetTELKOMNIKA JOURNAL
 
Cognitive and Emotional Neuro-Prostheses
Cognitive and Emotional Neuro-ProsthesesCognitive and Emotional Neuro-Prostheses
Cognitive and Emotional Neuro-ProsthesesOmer Ali
 
15 Trends In Neurotechnologies That Will Change The World
15 Trends In Neurotechnologies That Will Change The World15 Trends In Neurotechnologies That Will Change The World
15 Trends In Neurotechnologies That Will Change The WorldNikita Lukianets
 
Final thesis presentation on bci
Final thesis presentation on bciFinal thesis presentation on bci
Final thesis presentation on bciRedwan Islam
 
Brain Computer Interface by Vipin Yadav
Brain Computer Interface by Vipin YadavBrain Computer Interface by Vipin Yadav
Brain Computer Interface by Vipin YadavVipinYadav191
 
brain-computerinterface-SUBHAM KAR
brain-computerinterface-SUBHAM KARbrain-computerinterface-SUBHAM KAR
brain-computerinterface-SUBHAM KARSubham Kar
 
Neural interfacing
Neural interfacingNeural interfacing
Neural interfacingKirtan Shah
 
Brain computerinterface-by jyot virk
Brain computerinterface-by jyot virkBrain computerinterface-by jyot virk
Brain computerinterface-by jyot virkjudge singh
 
Brain computer interfaces in medicine
Brain computer interfaces in medicineBrain computer interfaces in medicine
Brain computer interfaces in medicineKarlos Svoboda
 

What's hot (20)

BCI Paper
BCI PaperBCI Paper
BCI Paper
 
Brain computer interface
Brain computer interfaceBrain computer interface
Brain computer interface
 
Inroduction to BCI
Inroduction to BCIInroduction to BCI
Inroduction to BCI
 
Brain Computer Interface WORD FILE
Brain Computer Interface WORD FILEBrain Computer Interface WORD FILE
Brain Computer Interface WORD FILE
 
Thesis by muhammad sharif on bci brain computer interface
Thesis by muhammad sharif on bci brain computer interfaceThesis by muhammad sharif on bci brain computer interface
Thesis by muhammad sharif on bci brain computer interface
 
Open BCI
Open BCIOpen BCI
Open BCI
 
Brain Computer Interface-BCI
Brain Computer Interface-BCIBrain Computer Interface-BCI
Brain Computer Interface-BCI
 
Brain computer interface based
Brain computer interface basedBrain computer interface based
Brain computer interface based
 
Brain computing interface
Brain computing interfaceBrain computing interface
Brain computing interface
 
Bci controlled calls
Bci controlled callsBci controlled calls
Bci controlled calls
 
Brain computer interface based smart keyboard using neurosky mindwave headset
Brain computer interface based smart keyboard using neurosky mindwave headsetBrain computer interface based smart keyboard using neurosky mindwave headset
Brain computer interface based smart keyboard using neurosky mindwave headset
 
Cognitive and Emotional Neuro-Prostheses
Cognitive and Emotional Neuro-ProsthesesCognitive and Emotional Neuro-Prostheses
Cognitive and Emotional Neuro-Prostheses
 
15 Trends In Neurotechnologies That Will Change The World
15 Trends In Neurotechnologies That Will Change The World15 Trends In Neurotechnologies That Will Change The World
15 Trends In Neurotechnologies That Will Change The World
 
Final thesis presentation on bci
Final thesis presentation on bciFinal thesis presentation on bci
Final thesis presentation on bci
 
Braingate
BraingateBraingate
Braingate
 
Brain Computer Interface by Vipin Yadav
Brain Computer Interface by Vipin YadavBrain Computer Interface by Vipin Yadav
Brain Computer Interface by Vipin Yadav
 
brain-computerinterface-SUBHAM KAR
brain-computerinterface-SUBHAM KARbrain-computerinterface-SUBHAM KAR
brain-computerinterface-SUBHAM KAR
 
Neural interfacing
Neural interfacingNeural interfacing
Neural interfacing
 
Brain computerinterface-by jyot virk
Brain computerinterface-by jyot virkBrain computerinterface-by jyot virk
Brain computerinterface-by jyot virk
 
Brain computer interfaces in medicine
Brain computer interfaces in medicineBrain computer interfaces in medicine
Brain computer interfaces in medicine
 

Viewers also liked

Creacion e innovacion
Creacion e innovacionCreacion e innovacion
Creacion e innovaciondavid cuautle
 
Economía Solidaria en Oriente Antioqueño
Economía Solidaria en Oriente AntioqueñoEconomía Solidaria en Oriente Antioqueño
Economía Solidaria en Oriente AntioqueñoCarolina Acosta Durango
 
Etica en la formacion de los profecionales
Etica en la formacion de los  profecionalesEtica en la formacion de los  profecionales
Etica en la formacion de los profecionalesdavid cuautle
 
Apollo pd clo magazine_employee engagement webinar_1-19-2016
Apollo pd clo magazine_employee engagement webinar_1-19-2016Apollo pd clo magazine_employee engagement webinar_1-19-2016
Apollo pd clo magazine_employee engagement webinar_1-19-2016ApolloPD
 
si no lo conocen no existe "Etica en la formacion de la profecion"
si no lo conocen no existe "Etica en la formacion de la profecion"si no lo conocen no existe "Etica en la formacion de la profecion"
si no lo conocen no existe "Etica en la formacion de la profecion"david cuautle
 
Resume New 2016- Mustapha A. Jubeily
Resume New 2016- Mustapha A. JubeilyResume New 2016- Mustapha A. Jubeily
Resume New 2016- Mustapha A. JubeilyMustapha Jubeily
 
Proyecto para electromedicina
Proyecto para electromedicinaProyecto para electromedicina
Proyecto para electromedicinacamilo vivancos
 
resume kulot 2-17--16
resume kulot 2-17--16resume kulot 2-17--16
resume kulot 2-17--16ervin regado
 
Digital transformation - decoding the industrial 4.0 revolution
Digital transformation - decoding the industrial 4.0 revolutionDigital transformation - decoding the industrial 4.0 revolution
Digital transformation - decoding the industrial 4.0 revolutionAnnamaria Porzioli
 

Viewers also liked (12)

Creacion e innovacion
Creacion e innovacionCreacion e innovacion
Creacion e innovacion
 
Economía Solidaria en Oriente Antioqueño
Economía Solidaria en Oriente AntioqueñoEconomía Solidaria en Oriente Antioqueño
Economía Solidaria en Oriente Antioqueño
 
Etica en la formacion de los profecionales
Etica en la formacion de los  profecionalesEtica en la formacion de los  profecionales
Etica en la formacion de los profecionales
 
Economía Solidaria
Economía SolidariaEconomía Solidaria
Economía Solidaria
 
Apollo pd clo magazine_employee engagement webinar_1-19-2016
Apollo pd clo magazine_employee engagement webinar_1-19-2016Apollo pd clo magazine_employee engagement webinar_1-19-2016
Apollo pd clo magazine_employee engagement webinar_1-19-2016
 
Actividad 7 Momento 2
Actividad 7 Momento 2Actividad 7 Momento 2
Actividad 7 Momento 2
 
si no lo conocen no existe "Etica en la formacion de la profecion"
si no lo conocen no existe "Etica en la formacion de la profecion"si no lo conocen no existe "Etica en la formacion de la profecion"
si no lo conocen no existe "Etica en la formacion de la profecion"
 
Resume New 2016- Mustapha A. Jubeily
Resume New 2016- Mustapha A. JubeilyResume New 2016- Mustapha A. Jubeily
Resume New 2016- Mustapha A. Jubeily
 
Proyecto para electromedicina
Proyecto para electromedicinaProyecto para electromedicina
Proyecto para electromedicina
 
resume kulot 2-17--16
resume kulot 2-17--16resume kulot 2-17--16
resume kulot 2-17--16
 
Digital transformation - decoding the industrial 4.0 revolution
Digital transformation - decoding the industrial 4.0 revolutionDigital transformation - decoding the industrial 4.0 revolution
Digital transformation - decoding the industrial 4.0 revolution
 
Gdpr security services
Gdpr security servicesGdpr security services
Gdpr security services
 

Similar to Capstone Proposal

Brain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movementBrain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movementIRJET Journal
 
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMA SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMijaia
 
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMA SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMgerogepatton
 
Eye tracker based HCI
Eye tracker based HCIEye tracker based HCI
Eye tracker based HCISaswati
 
Impact of adaptive filtering-based component analysis method on steady-state ...
Impact of adaptive filtering-based component analysis method on steady-state ...Impact of adaptive filtering-based component analysis method on steady-state ...
Impact of adaptive filtering-based component analysis method on steady-state ...IAESIJAI
 
CSNEPoster_MagenaFura
CSNEPoster_MagenaFuraCSNEPoster_MagenaFura
CSNEPoster_MagenaFuraMagena Fura
 
45891026 brain-computer-interface-seminar-report
45891026 brain-computer-interface-seminar-report45891026 brain-computer-interface-seminar-report
45891026 brain-computer-interface-seminar-reportkapilpanwariet
 
Brain Computer Interface
Brain Computer Interface Brain Computer Interface
Brain Computer Interface Deepti Singh
 
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY ijitcs
 
Study and analysis of motion artifacts for ambulatory electroencephalography
Study and analysis of motion artifacts for ambulatory  electroencephalographyStudy and analysis of motion artifacts for ambulatory  electroencephalography
Study and analysis of motion artifacts for ambulatory electroencephalographyIJECEIAES
 
ICCE 2016 NCTU Courses Papers
ICCE 2016 NCTU Courses PapersICCE 2016 NCTU Courses Papers
ICCE 2016 NCTU Courses PapersChun-Feng Chen
 
An overview on Advanced Research Works on Brain-Computer Interface
An overview on Advanced Research Works on Brain-Computer InterfaceAn overview on Advanced Research Works on Brain-Computer Interface
An overview on Advanced Research Works on Brain-Computer InterfaceWaqas Tariq
 
Communication via-brain-computer-interface
Communication via-brain-computer-interfaceCommunication via-brain-computer-interface
Communication via-brain-computer-interfaceAmerican TESOL Institute
 
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For DisablesDetection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For Disablesijsrd.com
 

Similar to Capstone Proposal (20)

Brain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movementBrain computer interfacing for controlling wheelchair movement
Brain computer interfacing for controlling wheelchair movement
 
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMA SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
 
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEMA SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
A SMART BRAIN CONTROLLED WHEELCHAIR BASED MICROCONTROLLER SYSTEM
 
Eye tracker based HCI
Eye tracker based HCIEye tracker based HCI
Eye tracker based HCI
 
Adaptive filters based efficient EEG classification for steady state visuall...
Adaptive filters based efficient EEG classification for steady  state visuall...Adaptive filters based efficient EEG classification for steady  state visuall...
Adaptive filters based efficient EEG classification for steady state visuall...
 
Impact of adaptive filtering-based component analysis method on steady-state ...
Impact of adaptive filtering-based component analysis method on steady-state ...Impact of adaptive filtering-based component analysis method on steady-state ...
Impact of adaptive filtering-based component analysis method on steady-state ...
 
CSNEPoster_MagenaFura
CSNEPoster_MagenaFuraCSNEPoster_MagenaFura
CSNEPoster_MagenaFura
 
45891026 brain-computer-interface-seminar-report
45891026 brain-computer-interface-seminar-report45891026 brain-computer-interface-seminar-report
45891026 brain-computer-interface-seminar-report
 
F0932733
F0932733F0932733
F0932733
 
Brain Computer Interface
Brain Computer Interface Brain Computer Interface
Brain Computer Interface
 
forney_techrep2015
forney_techrep2015forney_techrep2015
forney_techrep2015
 
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
A LOW COST EEG BASED BCI PROSTHETIC USING MOTOR IMAGERY
 
Study and analysis of motion artifacts for ambulatory electroencephalography
Study and analysis of motion artifacts for ambulatory  electroencephalographyStudy and analysis of motion artifacts for ambulatory  electroencephalography
Study and analysis of motion artifacts for ambulatory electroencephalography
 
BRAIN GATE
BRAIN GATEBRAIN GATE
BRAIN GATE
 
ICCE 2016 NCTU Courses Papers
ICCE 2016 NCTU Courses PapersICCE 2016 NCTU Courses Papers
ICCE 2016 NCTU Courses Papers
 
An overview on Advanced Research Works on Brain-Computer Interface
An overview on Advanced Research Works on Brain-Computer InterfaceAn overview on Advanced Research Works on Brain-Computer Interface
An overview on Advanced Research Works on Brain-Computer Interface
 
Bci report
Bci reportBci report
Bci report
 
Communication via-brain-computer-interface
Communication via-brain-computer-interfaceCommunication via-brain-computer-interface
Communication via-brain-computer-interface
 
NON INVASIVE BCI FOR AAC.pdf
NON INVASIVE BCI FOR AAC.pdfNON INVASIVE BCI FOR AAC.pdf
NON INVASIVE BCI FOR AAC.pdf
 
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For DisablesDetection Of Saccadic Eye Movements to Switch the Devices For Disables
Detection Of Saccadic Eye Movements to Switch the Devices For Disables
 

Capstone Proposal

  • 1. 1 Developing a Smarter BCI through Task Discrimination Magena Fura Under the Guidance of Professor Howard Chizeck BioRobotics Laboratory The University of Washington June 3, 2016 Design and Research Proposal for 402 Track Abstract The sensorimotor system is vital for translating brain signals into movement, yet individuals with neuromuscular disorders or amputations suffer from a loss of sensorimotor control. Brain-Computer Interfaces (BCIs) are devices which restore this missing link by obtaining control signals directly from the brain or other sources and using them to control movement of devices, such as a computer cursor or prosthetic limb. Electroencephalography (EEG) signals are usually used in BCIs, however EEG has significant limitations in its effectiveness as a control signal for natural movement since it is difficult to learn to use, cannot capture signals from all neurons responsible for movement, and neglects the nonlinear nature of brain-controlled limb movement. Combining EEG signals with another control source, such as eye movement, provides additional information about a user’s intent and thus offers increased control capabilities. However, current BCIs are optimized for controlling movement associated with a single task, such as accurately guiding a cursor to a specific spot on a screen; they rarely address the situation where a user wants to engage in and switch quickly between several different tasks, which happens frequently in daily life. My project aims to address this deficit by determining whether BCIs can distinguish between different tasks in order to recognize the user’s goal. I will focus on tasks involved in cursor movement including randomly searching a screen, planning a cursor’s trajectory before initiating movement, and guiding cursor movement. My project will involve designing a testing interface in Unity in order to study characteristic EEG and eye- movement signals associated with these tasks. The results of this study have the potential to lead to “smarter” BCIs which can recognize task signatures as well as identify conditions under which EEG and eye-movement signals are more or less useful as a control source.
  • 2. 2 Table of Contents Background and Significance ......................................................................................................... 3 Previous Approaches ...................................................................................................................... 4 Consequences of Success ................................................................................................................5 Constraints ..................................................................................................................................... 6 Economic ................................................................................................................................. 6 Ethical...................................................................................................................................... 6 Social........................................................................................................................................ 6 Legal..........................................................................................................................................7 Plan of Work ...................................................................................................................................7 Overview ..................................................................................................................................7 Phase I: Prototype Interface with EM + Haptics hybrid BCI ................................................10 Phase II: EM + EMG hybrid BCI ........................................................................................... 11 Phase III: EM + EEG hybrid BCI ..........................................................................................12 Engineering Design Standards ..............................................................................................13 Key Personnel ................................................................................................................................14 Equipment, Facilities, and Resources ...........................................................................................14 Appendix .......................................................................................................................................15 Timeline .................................................................................................................................16 Additional Figures .................................................................................................................16 References ..................................................................................................................................... 17
Background and Significance

In healthy individuals, the sensorimotor system relays brain signals to control muscular movement; however, this function is compromised in the more than 11.5 million people suffering from neuromuscular disorders worldwide [1]. Brain-Computer Interfaces (BCIs) offer the potential to restore a level of sensorimotor control, as they obtain control signals directly from the brain in order to guide movement. Individuals with amputations, surgeons performing remote robotic surgery, or pilots wanting to fly hands-free may also benefit from a brain-computer interface.

Electroencephalography (EEG) signals offer the ability to infer a user's intent and are popular for use in BCIs since EEG is well-characterized, low-cost, and provides high temporal resolution [2]. BCIs have been successfully used to map EEG signals to cursor movement on a screen; to movement of robot arms, wheelchairs, and prosthetics; and to communication via a spelling device [3] [4]. However, there are significant limitations to EEG-based BCI control. First, a BCI requires an explicitly defined mapping between each subset of neurons and the output movement of the device or cursor. The burden is thus placed on the subject to "learn" to control the device by activating these specific locations in the brain, a task which requires significant mental effort and which is not always successful [4]. Furthermore, BCIs can capture signals from only a small subset of the many neurons responsible, both directly and indirectly, for driving movement. Mimicking sensorimotor control with a BCI is also complicated by the highly nonlinear nature of brain-controlled arm movements, which are therefore frequently simplified to linear mappings for BCI control [5]. Low spatial resolution and artifacts caused by muscle and eye movement further limit EEG accuracy [4]. EEG-based BCI control is thus limited in its ability to accurately mimic natural movement.

Tracking eye movements (EMs) presents an alternative strategy for controlling movement of a cursor or robotic arm. Saccadic eye movements, rapid changes in the location of focus, have been shown to be a rich source of information about an individual's intended movements and constitute a potential control signal for cursor movement [6] [7]. For instance, individuals tend to look at an object before making a motion toward it, and the duration of fixation on one object over another during a decision-making task is a good indicator of a person's decision [8] [9]. However, using gaze to inform movement is problematic since it is not a natural sensorimotor connection: users are not accustomed to moving or selecting items with their eyes. Eye-tracking techniques are also highly susceptible to misconstruing random eye motion ("searching" eye movements) as meaningful control signals ("intentional" eye movements).
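Saccades are typically separated from fixations with a simple velocity threshold on the gaze trace. The sketch below shows one common form of such a detector (an I-VT-style threshold rule); it is illustrative only: the sampling rate and velocity threshold are placeholder values, not parameters of the equipment used in this project.

```python
import numpy as np

def detect_saccades(x, y, fs=60.0, vel_thresh=100.0):
    """Label each gaze sample as saccade or fixation using a simple
    velocity threshold (I-VT). x, y are gaze positions in degrees of
    visual angle; fs is the tracker sampling rate in Hz; vel_thresh
    (deg/s) is an illustrative value, not a validated constant."""
    dx = np.diff(x) * fs          # horizontal velocity, deg/s
    dy = np.diff(y) * fs          # vertical velocity, deg/s
    speed = np.hypot(dx, dy)      # instantaneous gaze speed
    is_saccade = speed > vel_thresh
    # Count saccades as contiguous runs of above-threshold samples.
    onsets = np.flatnonzero(np.diff(is_saccade.astype(int)) == 1)
    n_saccades = len(onsets) + (1 if is_saccade[0] else 0)
    return is_saccade, n_saccades
```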
The challenge of how best to control movement, given a limited set of control signals obtained using a BCI, may be addressed through a combined approach that uses gaze and EEG signals as well as information about the signatures of different tasks. A BCI that can predict the user's intended goal based on eye-movement and EEG signals could, for instance, tailor its own control algorithm to better assist the user in carrying out that goal. By recognizing specific task signatures in the user's eye movements and EEG signals, a BCI might be able to infer whether the user intends to select an object or simply note its location. The first step toward producing such a "smart" BCI is to determine whether such task-specific signatures exist in the combination of eye-movement and EEG signals. This project will focus on tasks involved in cursor movement, including planning a path before initiating movement and guiding the cursor along the pre-planned trajectory using BCI control. The goal of this project is to determine whether EEG and eye-movement data together provide a way to distinguish "planning" and "guiding" tasks, two tasks which utilize eye movements generally categorized as "intentional".

Evaluation of Previous Approaches

Previous strategies for improving EEG-based control of BCIs have focused on obtaining better spatial resolution of brain patterns using the electrocorticogram (ECoG), a technique which uses electrodes surgically implanted on or near the neocortex [3]. However, this method is not a viable long-term solution, as it is highly invasive and has persistent problems with electrode contact and biocompatibility [3] [4]. Gaze-only BCI control offers a more intuitive and non-invasive approach to controlling movement, but it has the drawback of being unable to distinguish between a casual glance at an object and an intention to move in that direction [3]. The possibility of using a blink to indicate selection has been rejected because users are unable to suppress natural blinks, which are subsequently mistaken for an intention to select an object [10].

Hybrid EEG-gaze BCIs have recently been gaining interest [11]. For instance, previous studies have compared the efficiency of eye tracking and EEG as control signals for cursor movement. EEG signals have been incorporated in hybrid BCIs to effect the "selection" of an object, with gaze data controlling movement, in order to obtain better accuracy; however, this has been shown to be slower than traditional gaze-only cursor movement [10]. Studies have also compared "random" EMs (which occur, for instance, when a user is casually looking around a screen) with "intentional" EMs (when a user is directing a cursor using gaze information), and have combined eye tracking with EEG signals to obtain better control [3] [6]. These approaches have certainly improved BCI control with regard to carrying out a single task more accurately.
However, little advancement has been made in exploring BCI control of multiple tasks, in which the user's eye movements are always "intentional" but the user intends to carry out distinct tasks. For instance, both "planning" EMs (which occur prior to the initiation of cursor movement) and "guiding" EMs (where the user is actively guiding the cursor using gaze information and imagined hand movements) may be considered "intentional" EMs, but the distinction between these tasks is unclear. This project aims to answer the question of whether eye-movement and EEG signals can be combined to reliably distinguish between the "planning" and "guiding" stages of controlled cursor movement. If so, these signatures may be used to provide better control of a BCI by a user carrying out multiple tasks. In this project, the user will plan a route and guide an object through a maze designed in Unity, using a combination of EEG and gaze information, while data on the number and magnitude of saccades, dwell time (time spent focused on one area of the screen), and EEG amplitude and frequency are collected. These parameters will be analyzed to determine whether the "planning" and "guiding" stages have different characteristic signatures.

Consequences of Success

The ultimate goal in enhancing BCI control is to facilitate the interaction between user and machine. Ideally, a BCI should be minimally frustrating, non-invasive, and convenient, with additional criteria including how quickly a task is carried out, accuracy, and the elimination of unintended movements. While BCI control has seen huge improvements following the advent of hybrid systems combining EEG and eye-movement information, significant gaps remain in our knowledge of the different stages of movement-related tasks [10] [11]. "Planning" versus "guiding" eye movements have not been studied in concert with EEG for the purpose of enhancing BCI control, and this represents a potential improvement over current hybrid eye-tracking BCIs. Successful incorporation of "planning"- and "guiding"-phase signatures into BCI control will lead to easier and more intuitive control of motion by allowing the BCI to dynamically respond to task signatures and tailor its control algorithm to improve the speed and accuracy of movement tasks. Improved BCIs have the potential to greatly benefit individuals with amputations or neuromuscular diseases. BCI technology is also relevant for healthy individuals, for instance to allow control of a video game by thought, hands-free piloting of an airplane, or performance of a remote surgery using a robotic arm.
Constraints

Economic

Limitations on the practicality and applicability of this research include economic constraints, since brain-computer interfaces are expensive and require extensive training to learn to use. One goal in improving BCI control is to facilitate motor control and communication as well as to develop better prosthetic arms and other devices; however, this technology may be prohibitively expensive for many individuals. Improved BCI technology may therefore benefit only the wealthiest individuals able to afford these high prices.

Ethical

Several ethical issues arise when considering the implications of brain-computer interface research. First, learning to control a BCI with EEG is quite difficult and can be impossible to master depending on a patient's cognitive challenges [12]. It is plausible that patients and/or their caregivers may have unreasonably high expectations of how much control a BCI will provide, resulting in psychological harm. Additionally, a natural application of BCI is to allow locked-in patients, who are fully aware though paralyzed, to communicate. A potential issue in this situation would arise around communicating the patient's wish to continue or discontinue life support, since there is no clear answer as to what level of communication a BCI can offer concerning life-or-death decisions, or at what level the patient is mentally capable of making such a decision. In both situations, BCI use should be approached with extreme caution, and all parties should be clearly informed about the limitations of BCI in restoring mobility or communication.

Social

The issues of "mind reading" and the invasiveness of certain EEG techniques (such as ECoG) contribute to social concerns surrounding brain-computer interfaces. Resistance to BCI is often due to misperceptions about technology that has the potential to combine human and machine, so education about BCI use is important, especially for subjects involved in BCI research. Precautions will be taken to preclude any possible spread of misinformation which could contribute to public resistance to BCI research.

Legal

Because BCIs obtain signals directly from the brain, there are distinct possibilities of legal and privacy issues arising from BCI research and popular use. Brain signals contain intimate information about an individual's intentions, emotions, thoughts, and interests, and these could potentially be mishandled or deliberately misused.
For instance, BCIs can be used maliciously to infer an individual's familiarity with certain faces, religious beliefs, or sexual preference, as well as information such as a name or PIN, by presenting a list of random names and numbers [13]. Limiting such "brain spyware" will likely become a significant problem in the near future; openly discussing these privacy issues, and being aware of them, is a crucial first step.

Specific Phases

Overview of Phases

This project aims to develop an interface for studying EEG and eye-tracking signals, in order to investigate whether the "planning" and "guiding" stages of a movement-control task have different characteristic EEG and eye-movement signatures. Functionally, the interface will present the subject with a simple game that incorporates "searching", "planning", and "guiding" stages to move a cursor on a screen, while collecting eye-movement and EEG signals (see Figure 1 for a functional decomposition diagram, and the trial-flow sketch below). Incorporating "searching" provides a way to verify the eye-movement data, since "searching" EMs are well characterized [6]. To collect these data, I will develop an interface in Unity and iterate on the design in each phase by testing it against the specifications in Table 1. At minimum, the interface must achieve an 80% success rate on all three tasks within three minutes, as well as statistically significant differences in eye movements between the "searching" and "planning" phases. Qualitative aspects of the interface will also be tested, including clarity of the goal (to eliminate subject confusion) and whether the subject has enough time to complete each task without becoming distracted. This will validate that the interface is collecting data focused primarily on the signatures of the "searching", "planning", and "guiding" tasks.

I will start with a simple, easily obtained control signal from the Phantom Omni haptics device (which provides touch feedback of virtual forces) in order to develop and test the interface initially. The project will consist of three phases, each characterized by increasing complexity in the type of control signal used. First, as a proof of concept of the testing interface, I will implement signals from the Omni haptics device along with eye-movement signals. Next, I will replace the haptics signals with EMG (electromyography) signals, which provide intermediate complexity as a control signal. Finally, as a stretch goal, I will replace the EMG signals with EEG signals. The last phase is a stretch goal because it is not guaranteed that subjects will successfully learn to control the BCI, so obtaining EEG signals is not guaranteed. Thus, answering the question of whether the "planning" and "guiding" tasks differ significantly in their EEG and EM signatures (which is contingent on obtaining EEG signals) is also a stretch goal.
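For concreteness, here is a minimal sketch of the three-stage trial flow and its logging. The actual interface will be implemented in Unity; Python is used here only to make the structure explicit, and the read_signals callback and stage durations are hypothetical placeholders.

```python
import time
from dataclasses import dataclass, field

STAGES = ("searching", "planning", "guiding")

@dataclass
class TrialLog:
    """Timestamped record of one trial, split by stage."""
    stage_onsets: dict = field(default_factory=dict)
    samples: list = field(default_factory=list)  # (timestamp, stage, sample)

def run_trial(log: TrialLog, read_signals, stage_durations=(10.0, 10.0, 30.0)):
    """Step through the three stages in order, tagging every collected
    sample with the stage during which it arrived. read_signals() is a
    hypothetical callback returning the current (gaze, control-signal)
    reading; the durations are placeholders to be tuned by testing."""
    for stage, duration in zip(STAGES, stage_durations):
        t0 = time.monotonic()
        log.stage_onsets[stage] = t0
        while time.monotonic() - t0 < duration:
            # A real implementation would poll at the hardware rate
            # rather than spinning as fast as possible.
            log.samples.append((time.monotonic(), stage, read_signals()))
```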
Figure 1: Functional Decomposition Diagram. Inputs are 1) data collection equipment for eye movements plus the haptics/EMG/EEG signal; 2) the test subject/user; 3) the Unity software testing interface, which prompts the user to carry out a task; and 4) the algorithm controlling cursor movement. The output is the collected data (eye movements plus the haptics, EMG, or EEG control signal) corresponding to the "searching", "planning", and "guiding" tasks carried out by the subject.

| Metric | Specification | Target Values | Validation Tool |
|---|---|---|---|
| 1 | Collection of control signal data | 100% | Software saves or fails to save data |
| 2 | Achieve difference between "searching" EMs and "planning" EMs | p < .05 | t-test between saccade frequencies during searching and planning |
| 3 | Total time to completion | All tasks < 3 min total | Time between onset and completion |
| 4 | Accuracy of control | Avoid 80% of obstacles | Interface monitors collisions |
| 5 | Interface usability | Proceed or alter interface | User feedback |
| 6 | Clear goal | Proceed or alter interface | User feedback |

Table 1: Needs-specifications table. Both quantitative aspects of the game (time to completion, accuracy, successful data collection, differing EMs between the "searching" and "planning" tasks) and qualitative specifications (usability and clarity) will determine the "success" of the interface at each phase.
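Specification 2 can be checked directly once per-trial saccade frequencies have been extracted. A minimal sketch using SciPy; the example rates are made up purely to show the call, and Welch's unequal-variance form of the t-test is one reasonable choice (the proposal does not fix a variant):

```python
import numpy as np
from scipy import stats

def check_spec2(search_rates, plan_rates, alpha=0.05):
    """Specification 2: per-trial saccade frequencies (Hz) during the
    'searching' and 'planning' stages must differ at p < .05."""
    t, p = stats.ttest_ind(search_rates, plan_rates, equal_var=False)
    return p < alpha, t, p

# Made-up per-trial saccade rates (Hz), for illustration only.
searching = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
planning = np.array([1.9, 2.2, 1.7, 2.0, 2.1])
passed, t, p = check_spec2(searching, planning)
print(f"spec 2 met: {passed} (t = {t:.2f}, p = {p:.4f})")
```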
Figure 2: Design strategy for each phase, showing the quantitative specifications as well as the qualitative feedback from the user that must be met before moving to the next phase.

Phase I: Proof of Concept of the Testing Interface Using Haptics Signals

The aim of Phase I is to develop an interface in Unity and provide a proof-of-concept test of the interface that demonstrates successful collection of eye-movement signals and signals from the Phantom Omni (3D location in space as well as the force exerted by the user's hand). In this phase, eye movements and the Omni's positional data are the control signals that guide the cursor through the on-screen interface. The interface will present the subject with a task that involves randomly searching the screen, planning a route to move a cursor between different points, and then guiding the cursor between those on-screen locations. By designing and refining the testing interface, I will arrive at a design that achieves these aims.

Phase II: Integration of EMG Signals

In Phase II, EMG signals will be collected along with eye-movement data while the subject completes the on-screen task. The motivation for using EMG is that it is more similar to EEG than a haptics signal: it is more complex than the haptics device to set up as a control signal, while still being easier to obtain than EEG. Integrating EMG signals is therefore a natural intermediate step before using EEG.

Phase III: Integration of EEG Signals

In Phase III, EEG signals will be collected along with eye-movement data as the subject completes the tasks on-screen. As in Phases I and II, Phase III will verify that the interface designed in Unity is appropriate for collecting eye-movement and, here, EEG signals. The goal of Phase III is to demonstrate collection of eye-movement and EEG signals in order to answer the question of whether the "searching", "planning", and "guiding" stages of a movement-control task induce distinct eye-movement and EEG signatures from the user.
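The gate shown in Figure 2 amounts to a simple check over logged trial outcomes. A sketch using the three-subject, 80% (8-of-10) criterion stated in the phase descriptions; the data layout is hypothetical:

```python
def phase_complete(trial_results, n_subjects=3, pass_fraction=0.8):
    """Gate from Figure 2: every one of n_subjects must pass at least
    pass_fraction of their trials (8 of 10 in Phases I and II).
    trial_results maps subject id -> list of booleans, one per trial
    (True = all objects collected and all obstacles avoided)."""
    if len(trial_results) < n_subjects:
        return False  # not enough subjects tested yet
    return all(sum(trials) / len(trials) >= pass_fraction
               for trials in trial_results.values())

# Example: subject "s3" fails the 8-of-10 criterion, so the gate stays closed.
results = {"s1": [True] * 9 + [False],
           "s2": [True] * 8 + [False] * 2,
           "s3": [True] * 7 + [False] * 3}
print(phase_complete(results))  # False
```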
Design and Research Strategy

Phase I: Proof of Concept of the Testing Interface Using Haptics Signals

Approach: In Phase I, a test interface will be designed in Unity and verified using eye-movement signals and haptics signals from the Phantom Omni. The Phantom Omni is an intuitive device that allows the user to physically "write" in the air and displays the result on a screen (Figure 3). It provides six degrees of positional sensing (x, y, z, pitch, roll, and yaw) and has already been integrated with Unity in the BioRobotics Lab. Because the Phantom Omni mimics a natural writing device, it is easy to use and provides a way to quickly test the interface without the added complexity of obtaining EMG signals or requiring a user to learn EEG control.

Figure 3: The Phantom Omni device allows the user to write in the air, displays the result in Unity, and controls cursor movement.

The testing interface will present the user with a game involving 1) searching the screen for various objects, 2) planning a route between two or more objects and around obstacles, and 3) guiding a cursor using a combination of eye movements and haptics signals from the Phantom Omni. The preliminary design will be based on the "center-out" game scheme, common in gaze-driven movement studies, which places a cursor at the center of several objects and requires the user to move the cursor toward one or more of them (Figure 3). Initially, a circle of different objects will be placed on the screen and the user will be allowed to look around and note their locations. Next, a "goal" object will be revealed, indicating which objects the user must select with the cursor and allowing the user to plan a route between them. Finally, the cursor will be released to the user's control, allowing the user to guide it from object to object along the pre-planned path. Eye-movement signals and positional information from the Phantom Omni will be collected throughout the task.
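One way the Omni's positional samples might drive the on-screen cursor is a normalize-and-clamp mapping like the following. This is a sketch only: the workspace bounds, screen resolution, and axis conventions are assumptions, and the real mapping will be calibrated inside Unity.

```python
def omni_to_screen(pos, workspace_min, workspace_max,
                   screen_w=1920, screen_h=1080):
    """Map an (x, y) position from the Omni's workspace onto screen
    pixels by normalizing against assumed workspace bounds and
    clamping so the cursor cannot leave the screen. All bounds and
    the screen size are placeholders pending calibration in Unity."""
    nx = (pos[0] - workspace_min[0]) / (workspace_max[0] - workspace_min[0])
    ny = (pos[1] - workspace_min[1]) / (workspace_max[1] - workspace_min[1])
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    # Screen y grows downward, so flip the vertical axis.
    return int(nx * (screen_w - 1)), int((1.0 - ny) * (screen_h - 1))

# Example: device centered in a +/-100 mm workspace lands mid-screen.
print(omni_to_screen((0.0, 0.0), (-100.0, -100.0), (100.0, 100.0)))  # (959, 539)
```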
The design process for the testing interface will involve varying the size, location, number, and type of objects to be collected; the number of obstacles; the time devoted to the "searching", "planning", and "guiding" tasks; and the arrangement of icons. The "goal object" image will be placed so as to be as unobtrusive as possible to the user. Although the center-out scheme is a well-tested technique for studying movement control, other arrangements of icons and obstacles may be considered in order to determine the best method for eliciting "searching", "planning", and "guiding" eye-movement and haptics signals. At this phase, testing will primarily be done on lab members.

Deliverables: Phase I will produce an interface in Unity designed to elicit eye-movement and haptics signals related to the "searching", "planning", and "guiding" tasks. Success in this phase requires a functional testing interface that presents these distinct tasks. Ideally, the interface will reliably demonstrate that eye movements (characterized by dwell time and saccade frequency) and the user's control of the Omni device differ between tasks; however, as long as the user is able to carry out the searching, planning, and guiding tasks set out in the Unity game, this phase will be considered complete. A trial will be deemed "successful" if the cursor collects all objects and avoids all obstacles, and three users must each pass 8 of 10 trials for this stage to be complete.

Anticipated Outcome: Designing the testing interface is expected to be successful. The main challenges will likely be integrating the eye-tracking device with Unity, since the myGaze eye-tracking software has not yet been set up with Unity, and making design decisions about the location, number, and size of icons and the time allocated to each of the three tasks. Feedback on these parameters will be obtained from lab members in order to make the game as easy to use as possible, while the number of collisions with obstacles is recorded. The eye-tracking device requires a distance of 60 cm between the user's eyes and the IR sensor, so the icons must be large enough to be easily seen from that distance. At this stage the signatures related to each task are not yet required to be demonstrably different, since EEG has not yet been obtained.

Phase II: Integration of EMG Signals

Approach: In Phase II, electromyography (EMG) signals will be obtained from the bicep muscle in order to guide cursor movement. This phase is an important preliminary step in testing the interface designed in Phase I, since EMG signals are voltage-based and thus similar in nature to EEG signals. EMG is a relevant test for the eventual use of EEG since it, too, is a voltage-based signal; however, it has the advantage of producing a larger signal and, unlike EEG, does not require the placement of multiple electrodes. The BioRobotics Lab has a method for importing EMG signals into MATLAB, from which they may be integrated into the Unity control system for moving the cursor.
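Raw surface EMG is too noisy to drive a cursor directly; a standard reduction is to rectify the trace and low-pass filter it into a smooth envelope. The sketch below shows that reduction with illustrative sampling-rate and cutoff values; it is not the lab's existing MATLAB pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=1000.0, cutoff=5.0):
    """Reduce a raw bicep EMG trace to a smooth control envelope:
    remove the DC offset, full-wave rectify, then low-pass filter.
    fs (Hz) and cutoff (Hz) are illustrative surface-EMG values."""
    rectified = np.abs(raw - np.mean(raw))
    b, a = butter(4, cutoff / (fs / 2), btype="low")  # 4th-order Butterworth
    return filtfilt(b, a, rectified)  # zero-phase filtering avoids lag
```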
In this stage the accessibility of the Unity interface will be evaluated using EMG as a control signal, in concert with eye movement. The goal is to show that signals with lower resolution and more noise than the Phantom Omni (i.e., EMG and EEG) can still be used to maneuver the cursor. Three members of the BioRobotics Lab will be chosen to complete 10 trials of the Unity game while eye-movement and EMG signals are recorded. As in the first phase, Phase II will be successfully completed when the three test subjects pass 80% of trials using eye movement and EMG as control signals. A trial will be deemed "successful" if the cursor collects all objects and avoids all obstacles.

Deliverables: This phase will demonstrate the functionality of the testing interface designed in Phase I when eye-movement and EMG signals are used as the control signals. The main goal of this phase is to provide an intermediate step in control-signal complexity (EMG versus EEG) in order to streamline the troubleshooting process.

Anticipated Outcomes: Implementing the EMG signal in the Unity interface is likely to be straightforward. Since EMG signals are relatively more difficult to control than the Phantom Omni device, the current design of the test interface may prove too difficult to use with EMG. In that case, the design will be iterated, with each iteration tested first with the Omni device and second with EMG signals. Using EMG may also be more mentally challenging than "drawing" with the Omni haptics device, in which case the complexity of the game will be reduced by decreasing the number of symbols to be collected and obstacles to be avoided, and by allotting more time to each of the "searching", "planning", and "guiding" stages.

Phase III: Integration of EEG Signals

Approach: The aim of this phase is to obtain electroencephalography (EEG) signals (in addition to eye-movement signals) from a user carrying out the three coupled tasks in the Unity interface.
Furthermore, I hope to show that the characteristic eye-movement and EEG signals differ significantly from stage to stage, and that these "signatures" can be implemented in a smart BCI that responds to the intended task of its user. Non-invasive scalp EEG signals will be obtained in Phase III along with eye-movement data and used to control the cursor's movement. Electrodes will be placed according to the international 10-20 system [14]. First, the interface will be improved through design iterations until three test subjects average an 80% success rate. Next, the eye-movement and EEG signals will be analyzed for consistent differences between the "searching", "planning", and "guiding" stages of the Unity game. The EEG signals will be analyzed for characteristic fluctuations in power within different frequency bands: for instance, the Mu rhythm at 9-13 Hz is typically desynchronized during planning and execution of hand movement, while it is synchronized when real or imagined hand movement is suppressed [15] (a sketch of this band-power analysis follows this phase description). Eye-movement data will be analyzed for characteristic dwell times (time spent focusing on a single location) and saccade frequency.

Deliverables: In this phase I aim to integrate EEG signals into the control system of the Unity game and to use eye movements and EEG to verify the interface according to Figure 2. Furthermore, I will analyze the eye-movement and EEG signals to determine whether there are significant differences that distinguish the "planning" and "guiding" tasks.

Anticipated Outcome: This phase is likely to be the most challenging, since it relies on the ability of the subjects to learn BCI control in order to use EEG as a control signal. If subjects are unable to learn BCI control sufficiently, my fall-back option is to use EMG as the control signal and analyze the EMG and eye-movement signals to determine whether characteristic events occur in the three stages. I hypothesize that there are characteristic differences between the "planning" and "guiding" tasks that will be revealed in the user's eye movements and EEG signals.
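The band-power analysis referenced above can be sketched as follows: estimate each stage's power spectral density with Welch's method, integrate over the 9-13 Hz Mu band [15], and express "guiding"-stage power relative to a "planning"-stage baseline, with negative values indicating desynchronization. The sampling rate below is an illustrative assumption, not the lab amplifier's specification.

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(eeg, fs=250.0, band=(9.0, 13.0)):
    """Power in the Mu band (9-13 Hz, per [15]) of one EEG channel,
    estimated with Welch's method. fs is an assumed amplifier rate."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def mu_erd_percent(planning_eeg, guiding_eeg, fs=250.0):
    """Mu power during 'guiding' relative to a 'planning' baseline;
    negative values indicate event-related desynchronization (ERD)."""
    baseline = mu_band_power(planning_eeg, fs)
    return 100.0 * (mu_band_power(guiding_eeg, fs) - baseline) / baseline
```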
Engineering Design Standards

The design portion of this project involves building a testing interface in Unity to obtain eye-movement data together with haptics, EMG, or EEG data, as appropriate for each phase. Since the testing interface is computerized, the potential safety issues associated with this design are very low: EMG is completely non-invasive, and the type of EEG we will be using is also non-invasive. A standard electrode placement will be used (see Figure 4), and all subjects' voluntary consent will be obtained before their EMG and EEG signals are collected while they test the interface in Unity. EEG recordings must follow the standards set out by the International Federation of Clinical Neurophysiology [14]. This involves labeling basic patient information, including name, date of birth, date of the test, laboratory numbers, current medication, and additional comments. Further, any computer analysis of EEG recordings should not stand alone but should be accompanied by human visual analysis. While we are not engaging in clinical EEG, we plan to follow these labeling practices as well.

Figure 4: Standard EEG electrode placement [14]

Key Personnel

Margaret Thompson and Andrew Haddock will oversee my project and advise me at least weekly. Margaret Thompson is a graduate student working with BCIs and is knowledgeable about the process of learning to use a BCI. Andrew Haddock is a graduate student working on modeling and control of dynamic neural systems and will oversee my use of the eye tracker. I have been meeting with Margaret and Andrew on a weekly basis. Professor Howard Chizeck will meet with me for regular progress reports, no less than monthly, and will assign my grade for the 402 project.

Facilities, Equipment, and Resources

The BioRobotics Lab is located in the Electrical Engineering building on the UW campus and has the resources necessary to conduct BCI and eye-tracking research. The lab will provide all equipment, including the myGaze eye tracker and software, EMG equipment, EEG equipment, the Omni haptics device, and a computer for designing the Unity interface and processing signals. I will use both MATLAB and Unity, which I have installed on my own computer for work both inside and outside the lab.
Appendix

Timeline

My anticipated timeline is as follows: I will begin work in June 2016 and plan to complete each phase in approximately three months, leaving two months as a buffer against unforeseen circumstances, which will also give me time to write my report.

Additional Figures

Figure 5: Three possible testing interfaces developed in Unity to elicit "searching", "planning", and "guiding" tasks. A) A center-out reaching control scheme [16]. B) A maze containing various obstacles. C) A random word search.
Figure 6: A prototype design based on the center-out movement-control test. First, a group of symbols is displayed on the screen, prompting "searching" behavior as the user notes symbol locations. Next, a "goal" shape is revealed, allowing the user to plan a path of movement to collect the correct symbols. Finally, a cursor is revealed at the center, allowing the user to guide it along the planned path using a combination of control signals (eye movement plus haptics, EMG, or EEG). This design incorporates the "searching", "planning", and "guiding" tasks to move the cursor.
References

[1] J. Deenen et al., "The Epidemiology of Neuromuscular Disorders: A Comprehensive Overview of the Literature," Journal of Neuromuscular Diseases, vol. 2, pp. 73-85, 2015.
[2] T. Zander and C. Kothe, "Toward Passive Brain-Computer Interfaces: Applying Brain-Computer Interface Technology to Human-Machine Systems in General," J. Neural Eng., vol. 8, no. 2, 2011.
[3] B. Huang et al., "Integrating EEG Information Improves Performance of Gaze Based Cursor Control," in 6th International IEEE/EMBS Conference on Neural Engineering (NER), San Diego, 2013.
[4] M. van Gerven et al., "The Brain-Computer Interface Cycle," J. Neural Eng., vol. 6, no. 4, pp. 1-10, 2009.
[5] M. Golub et al., "Brain-Computer Interfaces for Dissecting Cognitive Processes Underlying Sensorimotor Control," Current Opinion in Neurobiology, vol. 37, pp. 53-58, 2016.
[6] S. Lee et al., "Effects of Search Intent on Eye-Movement Patterns in a Change Detection Task," Journal of Eye Movement Research, vol. 8, no. 2, pp. 1-10, 2015.
[7] E. A. Corbett et al., "Real-Time Evaluation of a Noninvasive Neuroprosthetic Interface for Control of Reach," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 4, pp. 674-682, 2013.
[8] G. Bird et al., "The Role of Eye Movements in Decision Making and the Prospect of Exposure Effects," Vision Research, vol. 60, pp. 16-21, 2012.
[9] S. Neggers and H. Bekkering, "Coordinated Control of Eye and Hand Movements in Dynamic Reaching," Human Movement Science, vol. 21, no. 3, pp. 349-376, 2002.
[10] R. Vilimek and T. Zander, "BCI: Combining Eye-Gaze Input with Brain-Computer Interaction," in Conference on Universal Access in Human-Computer Interaction, San Diego, 2009.
[11] G. Pfurtscheller et al., "The Hybrid BCI," Front. Neurosci., vol. 4, no. 42, pp. 593-602, 2010.
[12] W. Glannon, "Ethical Issues with Brain-Computer Interfaces," Front. Syst. Neurosci., vol. 8, no. 136, 2014.
[13] T. Bonaci and H. Chizeck, "Privacy by Design in Brain-Computer Interfaces," UW Dept. of Elec. Eng., Seattle, 2013.
[14] M. Nuwer et al., "IFCN Guidelines for Topographic and Frequency Analysis of EEGs and EPs: Report of an IFCN Committee," Electroencephalogr. Clin. Neurophysiol. Suppl., 1994.
[15] G. Pfurtscheller et al., "Mu Rhythm (de)Synchronization and EEG Single-Trial Classification of Different Motor Imagery Tasks," NeuroImage, vol. 31, pp. 153-159, 2006.
[16] A. Hewitt et al., "Representation of Limb Kinematics in Purkinje Cell Simple Spike Discharge Is Conserved Across Multiple Tasks," Journal of Neurophysiology, vol. 106, no. 5, pp. 2232-2247, 2011.