1. INTRODUCTION
For generations, humans have fantasized about the ability to communicate and interact with machines through thought alone, or to create devices that can peer into a person's mind and thoughts. The idea of interfacing minds with machines has long captured the human imagination in the form of ancient myths and modern science fiction stories. Recent advances in neuroscience and engineering are making this a reality, opening the door to restoring and potentially augmenting human physical and mental capabilities. Medical applications such as cochlear implants for the deaf and deep brain stimulation for Parkinson's disease are becoming increasingly commonplace.
Brain-computer interfaces (BCIs) are now being explored in applications as diverse
as security, lie detection, alertness monitoring, telepresence, gaming, education, art, and
human augmentation. Research on BCIs began in the 1970s at the University of California
Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a
contract from DARPA. The papers published after this research also mark the first
appearance of the expression brain–computer interface in scientific literature.
The field of BCI research and development has since focused primarily on
neuroprosthetics applications that aim at restoring damaged hearing, sight and movement.
Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses
can, after adaptation, be handled by the brain like natural sensor or effector channels.
Following years of animal experimentation, the first neuroprosthetic devices implanted in
humans appeared in the mid-1990s.
However, it is only recently that advances in cognitive neuroscience and brain
imaging technologies have started to provide us with the ability to interface directly with the
human brain. This ability is made possible through the use of sensors that can monitor some
of the physical processes that occur within the brain that correspond with certain forms of
thought. Primarily driven by growing societal recognition for the needs of people with
physical disabilities, researchers have used these technologies to build brain-computer
interfaces (BCIs), communication systems that do not depend on the brain’s normal output
pathways of peripheral nerves and muscles. In these systems, users explicitly manipulate their
brain activity instead of using motor movements to produce signals that can be used to
control computers or communication devices. The impact of this work is extremely high,
especially to those who suffer from devastating neuromuscular injuries and
neurodegenerative diseases such as amyotrophic lateral sclerosis, which eventually strips
individuals of voluntary muscular activity while leaving cognitive function intact.
Meanwhile, and largely independent of these efforts, Human-Computer Interaction
(HCI) researchers continually work to increase the communication bandwidth and quality
between humans and computers. They have explored visualizations and multimodal
presentations so that computers may use as many sensory channels as possible to send
information to a human. Similarly, they have devised hardware and software innovations to
increase the information a human can quickly input into the computer. Since we have
traditionally interacted with the external world only through our physical bodies, these input
mechanisms have mostly required performing some form of motor activity, be it moving a mouse, hitting buttons, using hand gestures, or speaking. Additionally, these researchers have started to consider implicit forms of input, that is, input that is not explicitly performed to direct a computer to do something. In an area of exploration referred to by names such as perceptual computing or contextual computing, researchers attempt to infer information about user state and intent by observing their physiology, behavior, or even the environment
in which they operate. Using this information, systems can dynamically adapt themselves in
useful ways in order to better support the user in the task at hand.
We believe that there exists a large opportunity to bridge the burgeoning research in
Brain-Computer Interfaces and Human-Computer Interaction, and this book attempts to do
just that. We believe that BCI researchers would benefit greatly from the body of expertise
built in the HCI field as they construct systems that rely solely on interfacing with the brain
as the control mechanism. Likewise, BCIs are now mature enough that HCI researchers must
add them to our tool belt when designing novel input techniques (especially in environments
with constraints on normal motor movement), when measuring traditionally elusive cognitive
or emotional phenomena in evaluating our interfaces, or when trying to infer user state to
build adaptive systems. Each chapter in this book was selected to present the novice reader
with an overview of some aspect of BCI or HCI, and in many cases the union of the two, so
that they not only get a flavor of work that currently exists, but are hopefully inspired by the
opportunities that remain.
2. THE EVOLUTION OF BRAIN COMPUTER INTERFACING
The evolution of any technology can generally be broken into three phases. The initial
phase, or proof-of-concept, demonstrates the basic functionality of a technology. In this
phase, even trivially functional systems are impressive and stimulate imagination. They are
also sometimes misunderstood and doubted. As an example, when moving pictures were first
developed, people were amazed by simple footage shot with stationary cameras of flowers
blowing in the wind or waves crashing on the beach. Similarly, when the computer mouse
was first invented, people were intrigued by the ability to move a physical device small
distances on a tabletop in order to control a pointer in two dimensions on a computer screen.
In brain sensing work, this represents the ability to extract any bit of information directly
from the brain without utilizing normal muscular channels.
In the second phase, or emulation, the technology is used to mimic existing
technologies. The first movies were simply recorded stage plays, and computer mice were
used to select from lists of items much as they would have been with the numeric pad on a
keyboard. Similarly, early brain-computer interfaces have aimed to emulate functionality of
mice and keyboards, with very few fundamental changes to the interfaces on which they
operated. It is in this phase that the technology is driven less by its novelty and begins to attract a wider audience interested in the science of understanding and developing it more deeply.
Finally, the technology hits the third phase, in which it attains maturity in its own
right. In this phase, designers understand and exploit the intricacies of the new technology to
build unique experiences that provide us with capabilities never before available. For
example, the flashback and crosscut, as well as "bullet-time," introduced more recently by the movie The Matrix, have become well-acknowledged idioms of the medium of film. Although most researchers accept the term "BCI" and its definition, other terms have been used to describe this special form of human–machine interface.
Similarly, the mouse has become so well integrated into our notions of computing that
it is extremely hard to imagine using current interfaces without such a device attached. It
should be noted that in both these cases, more than forty years passed between the
introduction of the technology and the widespread development and usage of these methods.
The human computer interaction field continues to work toward expanding the effective
information bandwidth between human and machine, and more importantly to design
technologies that integrate seamlessly into our everyday tasks. Specifically, we believe there
are several opportunities, though we believe our views are necessarily constrained and hope
that this book inspires further crossover and discussion.
For example:
• While the BCI community has largely focused on the very difficult mechanics of
acquiring data from the brain, HCI researchers could add experience designing
interfaces that make the most out of the scanty bits of information they have about the
user and their intent. They also bring in a slightly different viewpoint, which may result in interesting innovations in the existing applications of interest.
For example, while BCI researchers maintain admirable focus on providing
patients who have lost muscular control an alternate input device, HCI researchers
might complement the efforts by considering the entire locked-in experience,
including such factors as preparation, communication, isolation, and awareness, etc.
• Beyond the traditional definition of Brain-Computer Interfaces, HCI researchers have
already started to push the boundaries of what we can do if we can peer into the user’s
brain, if even ever so roughly. Considering how these devices apply to healthy users
in addition to the physically disabled, and how adaptive systems may take advantage of
them could push analysis methods as well as application areas.
• The HCI community has also been particularly successful at systematically exploring
and creating whole new application areas. In addition to thinking about using
technology to fix existing pain points, or to alleviate difficult work, this community
has sought scenarios in which technology can augment everyday human life in some
way.
3. HISTORY OF BRAIN COMPUTER INTERFACING
Back in the 1960s, controlling devices with brain waves was considered pure science
fiction, as wild and fantastic as warp drive and transporters. Although recording brain signals
from the human scalp gained some attention in 1929, when the German scientist Hans Berger
recorded the electrical brain activity from the human scalp, the required technologies for
measuring and processing brain signals as well as our understanding of brain function were
still too limited. Nowadays, the situation has changed. Neuroscience research over the last
decades has led to a much better understanding of the brain. Signal processing algorithms and
computing power have advanced so rapidly that complex real-time processing of brain
signals does not require expensive or bulky equipment anymore.
The first BCI was described by Dr. Grey Walter in 1964. Ironically, this was shortly
before the first Star Trek episode aired. Dr. Walter connected electrodes directly to the motor
areas of a patient’s brain. (The patient was undergoing surgery for other reasons.) The patient
was asked to press a button to advance a slide projector while Dr. Walter recorded the
relevant brain activity. Then, Dr. Walter connected the system to the slide projector so that
the slide projector advanced whenever the patient’s brain activity indicated that he wanted to
press the button. Interestingly, Dr. Walter found that he had to introduce a delay from the
detection of the brain activity until the slide projector advanced because the slide projector
would otherwise advance before the patient pressed the button! Control before the actual
movement happens, that is, control without movement – the first BCI!
Unfortunately, Dr. Walter did not publish this major breakthrough. He only presented a talk about it to a group called the Ostler Society in London. BCI research then advanced slowly for many years. By the turn of the century, there were only one or two dozen labs doing serious BCI research. However, BCI research developed quickly after that, particularly during the last few years.
Every year, there are more BCI-related papers, conference talks, products, and media articles.
There are at least 100 BCI research groups active today, and this number is growing.
More importantly, BCI research has succeeded in its initial goal: proving that BCIs
can work with patients who need a BCI to communicate. Indeed, BCI researchers have used
many different kinds of BCIs with several different patients. Furthermore, BCIs are moving
beyond communication tools for people who cannot otherwise communicate. BCIs are
gaining attention for healthy users and new goals such as rehabilitation or hands-free gaming.
BCIs are not science fiction anymore. On the other hand, BCIs are far from mainstream tools.
Most people today still do not know that BCIs are even possible. There are still many
practical challenges before a typical person can use a BCI without expert help. There is a
long way to go from providing communication for some specific patients, with considerable
expert help, to providing a range of functions for any user without help.
The foundations of this field lie in Hans Berger's discovery of the electrical activity of the human brain and his development of electroencephalography (EEG). In 1924, Berger was the first to record human brain activity by means of EEG. Berger was able to identify oscillatory activity in the brain
by analyzing EEG traces. One wave he identified was the alpha wave (8–13 Hz), also known
as Berger's wave. Berger's first recording device was very rudimentary. He inserted silver
wires under the scalps of his patients. These were later replaced by silver foils attached to the
patients' heads by rubber bandages. Berger connected these sensors to a Lippmann capillary
electrometer, with disappointing results. More sophisticated measuring devices, such as the
Siemens double-coil recording galvanometer, which displayed electric voltages as small as
one ten-thousandth of a volt, led to success. Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. The EEG opened up completely new possibilities for research on human brain activity.
4. BRAIN COMPUTER INTERFACING VERSUS NEUROPROSTHETICS
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, with using artificial devices to replace the function of impaired parts of the nervous system or of sensory organs. The most widely used neuroprosthetic device is the cochlear implant, which, as of December 2010, had been implanted in approximately 220,000 people worldwide. There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.
The difference between BCIs and neuroprosthetics is mostly in how the terms are
used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs
usually connect the brain (or nervous system) with a computer system. A BCI is an artificial
output channel, a direct interface from the brain to a computer or machine, which can accept
voluntary commands directly from the brain without requiring physical movements. Practical neuroprosthetics can be linked to any part of the nervous system, for example the peripheral nerves, while the term "BCI" usually designates a narrower class of systems which interface with the central nervous system.
The terms are sometimes, however, used interchangeably. Neuroprosthetics and BCIs
seek to achieve the same aims, such as restoring sight, hearing, movement, ability to
communicate, and even cognitive function. Both use similar experimental methods and
surgical techniques. Any natural form of communication or control requires peripheral nerves
and muscles. The process begins with the user's intent. This intent triggers a complex
process in which certain brain areas are activated, and hence signals are sent via the
peripheral nervous system (specifically, the motor pathways) to the corresponding muscles,
which in turn perform the movement necessary for the communication or control task. The
activity resulting from this process is often called motor output or efferent output. Efferent means conveying impulses from the central to the peripheral nervous system and further to an effector (muscle). Afferent, in contrast, describes communication in the other direction,
from the sensory receptors to the central nervous system. For motion control, the motor
(efferent) pathway is essential. The sensory (afferent) pathway is particularly important for
learning motor skills and dexterous tasks, such as typing or playing a musical instrument.
A BCI offers an alternative to natural communication and control. A BCI is an
artificial system that bypasses the body’s normal efferent pathways, which are the
neuromuscular output channels. Instead of depending on peripheral nerves and muscles, a BCI directly measures brain activity associated with the user's intent and translates the recorded
brain activity into corresponding control signals for BCI applications. This translation
involves signal processing and pattern recognition, which is typically done by a computer.
Since the measured activity originates directly from the brain and not from the peripheral
systems or muscles, the system is called a Brain–Computer Interface. A BCI must have four components. It must record activity directly from the brain (invasively or non-invasively). It must provide feedback to the user, and it must do so in real time. Finally, the system must rely
on intentional control. That is, the user must choose to perform a mental task whenever s/he
wants to accomplish a goal with the BCI. Devices that only passively detect changes in brain
activity that occur without any intent, such as EEG activity associated with workload,
arousal, or sleep, are not BCIs.
“Neuroprosthesis,” however, is a more general term. Neuroprostheses (also called neural prostheses) are devices that can not only receive output from the nervous system, but can also provide input. Moreover, they can interact with the peripheral and the central nervous systems. BCIs are a special category of neuroprostheses.
5. BCI PERFORMANCE
The performance of a BCI can be measured in various ways. A simple measure is
classification performance (also termed classification accuracy or classification rate). It is the
ratio of the number of correctly classified trials (successful attempts to perform the required
mental tasks) and the total number of trials. The error rate is also easy to calculate, since it is
just the ratio of incorrectly classified trials and the total number of trials.
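As a small illustration (with hypothetical trial counts), both measures follow directly from the number of correctly classified trials:

    def classification_performance(n_correct, n_trials):
        """Return (classification accuracy, error rate) as ratios of the trial count."""
        accuracy = n_correct / n_trials
        error_rate = (n_trials - n_correct) / n_trials
        return accuracy, error_rate

    # Hypothetical example: 45 of 50 trials classified correctly.
    acc, err = classification_performance(45, 50)
    print(f"accuracy = {acc:.2f}, error rate = {err:.2f}")  # accuracy = 0.90, error rate = 0.10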
Although classification or error rates are easy to calculate, application dependent
measures are often more meaningful. For instance, in a mental typewriting application the
user is supposed to write a particular sentence by performing a sequence of mental tasks.
Again, classification performance could be calculated, but the number of letters per minute
the users can convey is a more appropriate measure. Letters per minute is an application
dependent measure that assesses (indirectly) not only the classification performance but also
the time that was necessary to perform the required tasks.
A more general performance measure is the so-called information transfer rate (ITR). It depends on the number of different brain patterns (classes) used, the time the BCI needs to classify these brain patterns, and the classification accuracy. ITR is measured in bits per minute. Since ITR depends on the number of brain patterns that can be reliably and quickly detected and classified by a BCI, the information transfer rate depends on the mental strategy employed.
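A widely used way to combine these three quantities is the Wolpaw ITR formula. The sketch below implements it; the class count, accuracy, and trial duration in the example are hypothetical values chosen only for illustration:

    import math

    def wolpaw_itr(n_classes, accuracy, trial_seconds):
        """Information transfer rate in bits per minute (Wolpaw formula)."""
        n, p = n_classes, accuracy
        if p >= 1.0:                       # perfect accuracy: log2(0) term is undefined
            bits_per_trial = math.log2(n)
        elif p <= 1.0 / n:                 # at or below chance level: no information
            bits_per_trial = 0.0
        else:
            bits_per_trial = (math.log2(n) + p * math.log2(p)
                              + (1 - p) * math.log2((1 - p) / (n - 1)))
        return bits_per_trial * 60.0 / trial_seconds

    # Hypothetical example: 4 motor imagery classes, 80% accuracy, 4 s per trial.
    print(round(wolpaw_itr(4, 0.80, 4.0), 2), "bits/min")   # about 14.42 bits/min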
Typically, BCIs with selective attention strategies have higher ITRs than those using,
for instance, motor imagery. A major reason is that BCIs based on selective attention usually
provide a larger number of classes (e.g. number of light sources). Motor imagery, for
instance, is typically restricted to four or fewer motor imagery tasks. More imagery tasks are possible, but often only at the expense of decreased classification accuracy, which in turn would decrease the information transfer rate as well.
There are a few papers that report BCIs with a high ITR, ranging from 30 bits/min to
slightly above 60 bits/min and, most recently, over 90 bits per minute. Such performance,
however, is not typical for most users in real world settings. In fact, these record values are
often obtained under laboratory conditions by individual healthy subjects who are the top
performing subjects in a lab. In addition, high ITRs are usually reported when people only
use a BCI for short periods. Of course, it is interesting to push the limits and learn the best
possible performance of current BCI technology, but it is no less important to estimate
realistic performance in more practical settings. Unfortunately, there is currently no study
available that investigates the average information transfer rate for various BCI systems over
a larger user population and over a longer time period so that a general estimate of average
BCI performance can be derived. The closest such study is the excellent work by Kübler and
Birbaumer.
Furthermore, a minority of subjects exhibit little or no control. The reason is not clear, but even long sustained training cannot improve performance for those subjects. In any case, a BCI provides an alternative communication channel, but this channel is slow. It certainly does not provide high-speed interaction. It cannot compete with natural communication (such as speaking or writing) or traditional man-machine interfaces in terms of ITR. However,
it has important applications for the most severely disabled. There are also new emerging
applications for less severely disabled or even healthy people, as detailed in the next section.
BCIs measure brain activity, process it, and produce control signals that reflect the
user’s intent. To understand BCI operation better, one has to understand how brain activity
can be measured and which brain signals can be utilized.
Brain activity produces electrical and magnetic activity. Therefore, sensors can detect
different types of changes in electrical or magnetic activity, at different times over different
areas of the brain, to study brain activity. Most BCIs rely on electrical measures of brain
activity, and rely on sensors placed over the head to measure this activity.
Electroencephalography (EEG) refers to recording electrical activity from the scalp with
electrodes. It is a very well established method, which has been used in clinical and research
settings for decades. Temporal resolution, meaning the ability to detect changes within a
certain time interval, is very good. However, the EEG is not without disadvantages: the spatial (topographic) resolution and the frequency range are limited. The EEG is susceptible to so-called artifacts, which are contaminations in the EEG caused by other electrical
activities. Examples are bioelectrical activities caused by eye movements or eye blinks
(electrooculographic activity, EOG) and from muscles (electromyographic activity, EMG)
close to the recording sites. External electromagnetic sources such as the power line can also
contaminate the EEG.
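As a simplified illustration of how such contaminations are often handled (an assumption-laden sketch, not a procedure given in the text), the code below notch-filters power-line interference and rejects epochs whose peak-to-peak amplitude suggests blink or muscle artifacts:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    def clean_epoch(epoch, fs=250.0, line_freq=50.0, reject_uv=100.0):
        """Notch-filter power-line noise; return None if the epoch looks contaminated.

        epoch: 1-D EEG signal of one channel in microvolts, sampled at fs Hz.
        The rejection threshold and mains frequency are illustrative assumptions.
        """
        b, a = iirnotch(w0=line_freq, Q=30.0, fs=fs)   # narrow notch at the mains frequency
        filtered = filtfilt(b, a, epoch)
        # Large peak-to-peak amplitudes usually indicate EOG (blinks) or EMG artifacts.
        if np.ptp(filtered) > reject_uv:
            return None
        return filtered

    # Hypothetical usage with 2 s of synthetic data (10 Hz EEG plus 50 Hz interference):
    fs = 250.0
    t = np.arange(0, 2.0, 1.0 / fs)
    raw = 10 * np.sin(2 * np.pi * 10 * t) + 5 * np.sin(2 * np.pi * 50 * t)
    print(clean_epoch(raw, fs=fs) is not None)   # True: below the rejection threshold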
Furthermore, although the EEG is not very technically demanding, the setup procedure can be cumbersome. To achieve adequate signal quality, the skin areas that are contacted by the electrodes have to be carefully prepared with special abrasive electrode gel. Because gel is required, these electrodes are also called wet electrodes. The number of electrodes required by current BCI systems ranges from only a few to more than 100 electrodes. Most groups try to minimize the number of electrodes to reduce setup time and hassle. Since electrode gel can dry out and wearing the EEG cap with electrodes is not convenient or fashionable, the setup procedure usually has to be repeated before each session of BCI use.
From a practical viewpoint, this is one of the largest drawbacks of EEG-based BCIs. A possible solution is a technology called dry electrodes. Dry electrodes do not require skin preparation or electrode gel. This technology is currently being researched, but a practical solution that can provide signal quality comparable to wet electrodes is not in sight at the moment. A BCI analyzes ongoing brain activity for brain patterns that originate from specific brain areas. To get consistent recordings from specific regions of the head, scientists rely on a standard system for accurately placing electrodes, which is called the International 10–20 System. It is widely used in clinical EEG recording and EEG research as well as BCI research.
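Because the 10–20 system gives each scalp position a standard label, an electrode montage can be specified simply as a list of labels. The grouping below is a small illustrative subset chosen by common convention for the paradigms discussed later (motor imagery, SSVEP, P300); real montages often use many more electrodes:

    # Standard 10-20 labels grouped by BCI paradigm (illustrative subset only).
    ELECTRODES_BY_PARADIGM = {
        "motor_imagery": ["C3", "Cz", "C4"],        # over sensorimotor cortex
        "ssvep":         ["O1", "Oz", "O2"],        # over visual (occipital) cortex
        "p300":          ["Fz", "Cz", "Pz", "Oz"],  # midline sites where the P300 is prominent
    }

    for paradigm, sites in ELECTRODES_BY_PARADIGM.items():
        print(f"{paradigm}: {', '.join(sites)}")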
Contrary to popular simplifications, the brain is not a general-purpose computer with a unified central processor. Rather, it is a complex assemblage of competing subsystems, each highly specialized for particular tasks (Carey 2002). By studying the effects of brain injuries
and, more recently, by using new brain imaging technologies, neuroscientists have built
detailed topographical maps associating different parts of the physical brain with distinct
cognitive functions.
The brain can be roughly divided into two main parts: the cerebral cortex and the sub-cortical regions. Sub-cortical regions are phylogenetically older and include areas associated
with controlling basic functions including vital functions such as respiration, heart rate, and
temperature regulation, basic emotional and instinctive responses such as fear and reward,
reflexes, as well as learning and memory. The cerebral cortex is evolutionarily much newer.
It is the largest and most complex part of the human brain, and it is usually the part of the brain people notice in pictures. The cortex supports most sensory and motor
processing as well as “higher” level functions including reasoning, planning, language
processing, and pattern recognition. This is the region that current BCI work has largely
focused on.
GEOGRAPHY OF THOUGHT
The cerebral cortex is split into two hemispheres that often have very different functions. For instance, most language functions lie primarily in the left hemisphere, while the right hemisphere controls many abstract and spatial reasoning skills. Also, most motor and
sensory signals to and from the brain cross hemispheres, meaning that the right brain senses
and controls the left side of the body and vice versa. The brain can be further divided into
separate regions specialized for different functions.
For example, occipital regions at the very back of the head are largely devoted to
processing of visual information. Areas in the temporal regions, roughly along the sides and lower areas of the cortex, are involved in memory, pattern matching, language processing and
auditory processing. Still other areas of the cortex are devoted to diverse functions such as
spatial representation and processing, attention orienting, arithmetic, voluntary muscle
movement, planning, reasoning and even enigmatic aspects of human behavior such as moral
sense and ambition. We should emphasize that our understanding of brain structure and
activity is still fairly shallow. These topographical maps are not definitive assignments of
location to function. In fact, some areas process multiple functions, and many functions are
processed in more than one area.
BRAIN IMAGING TECHNOLOGIES
There are two general classes of brain imaging technologies: invasive technologies, in
which sensors are implanted directly on or in the brain, and non-invasive technologies, which
measure brain activity using external sensors. Although invasive technologies provide high
temporal and spatial resolution, they usually cover only very small regions of the brain.
Additionally, these techniques require surgical procedures that often lead to medical
complications as the body adapts, or does not adapt, to the implants.
[Figure: The International 10–20 electrode placement system]
Furthermore, once implanted, these technologies cannot be moved to measure different regions of the brain. While many researchers are experimenting with such implants,
we will not review this research in detail as we believe these techniques are unsuitable for
human-computer interaction work and general consumer use.
MEASURING BRAIN ACTIVITY
The techniques discussed in the last section are all non-invasive recording techniques. That is, there is no need to perform surgery or even break the skin. In contrast,
invasive recording methods require surgery to implant the necessary sensors.
This surgery includes opening the skull through a surgical procedure called a
craniotomy and cutting the membranes that cover the brain. When the electrodes are placed
on the surface of the cortex, the signal recorded from these electrodes is called the
electrocorticogram (ECoG). ECoG does not damage any neurons because no electrodes
penetrate the brain. The signal recorded from electrodes that penetrate brain tissue is called
intracortical recording.
[Figure: Three different ways to detect the brain's electrical activity: EEG, ECoG, and intracortical recordings]
Invasive recording techniques combine excellent signal quality, very good spatial
resolution, and a higher frequency range. Artifacts are less problematic with invasive
recordings. Further, the cumbersome application and re-application of electrodes as described
above is unnecessary for invasive approaches. Intracortical electrodes can record the neural
activity of a single brain cell or small assemblies of brain cells. The ECoG records the
integrated activity of a much larger number of neurons that are in the proximity of the ECoG electrodes. Nevertheless, any invasive technique has better spatial resolution than the EEG.
Clearly, invasive methods have some advantages over non-invasive methods.
However, these advantages come with the serious drawback of requiring surgery. Ethical,
financial, and other considerations make neurosurgery impractical except for some users who
need a BCI to communicate. Even then, some of these users may find that a noninvasive BCI
meets their needs. It is also unclear whether both ECoG and intracortical recordings can
provide safe and stable recording over years. Long-term stability may be especially problematic in the case of intracortical recordings. Electrodes implanted into the cortical
tissue can cause tissue reactions that lead to deteriorating signal quality or even complete
electrode failure. Research on invasive BCIs is difficult because of the cost and risk of
neurosurgery. For ethical reasons, some invasive research efforts rely on patients who
undergo neurosurgery for other reasons, such as treatment of epilepsy. Studies with these
patients can be very informative, but it is impossible to study the effects of training and long
term use because these patients typically have an ECoG system for only a few days before it
is removed.
ELECTROENCEPHALOGRAPHY (EEG)
EEG uses electrodes placed directly on the scalp to measure the weak (5–100 μV) electrical potentials generated by activity in the brain. Because of the fluid, bone, and skin that separate the electrodes from the actual electrical activity, signals tend to be smoothed and rather noisy. Hence, while EEG measurements have good temporal resolution, with delays in the tens of milliseconds, spatial resolution tends to be poor, around 2–3 cm accuracy at best and usually worse. Two centimeters on the cerebral cortex could be the difference between inferring that the user is listening to music and inferring that they are moving their hands.
Electroencephalography (EEG) is the most studied potential non-invasive interface,
mainly due to its fine temporal resolution, ease of use, portability and low set-up cost. However, the technology is highly susceptible to noise. Another substantial barrier to using EEG
as a brain–computer interface is the extensive training required before users can work the
technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the
University of Tübingen in Germany trained severely paralysed people to self-regulate the
slow cortical potentials in their EEG to such an extent that these signals could be used as a
binary signal to control a computer cursor. The experiment saw ten patients trained to move a
computer cursor by controlling their brainwaves. The process was slow, requiring more than
an hour for patients to write 100 characters with the cursor, while training often took many
months.
[Figure: An electroencephalogram (EEG) recording]
Another research parameter is the type of oscillatory activity that is measured.
Birbaumer's later research with Jonathan Wolpaw at New York State University has focused
on developing technology that would allow users to choose the brain signals they found
easiest to operate a BCI, including mu and beta rhythms. A further parameter is the method of feedback used, and this is shown in studies of P300 signals. Patterns of P300 waves are
generated involuntarily when people see something they recognize and may allow BCIs to
decode categories of thoughts without training patients first. By contrast, the biofeedback
methods described above require learning to control brainwaves so the resulting brain activity
can be detected.
Lawrence Farwell and Emanuel Donchin developed an EEG-based brain–computer
interface in the 1980s. Their "mental prosthesis" used the P300 brainwave response to allow
subjects, including one paralyzed Locked-In syndrome patient, to communicate words, letters
and simple commands to a computer and thereby to speak through a speech synthesizer
driven by the computer. A number of similar devices have been developed since then. In
2000, for example, research by Jessica Bayliss at the University of Rochester showed that
volunteers wearing virtual reality helmets could control elements in a virtual world using
their P300 EEG readings, including turning lights on and off and bringing a mock-up car to a
stop.
While an EEG based brain-computer interface has been pursued extensively by a
number of research labs, recent advancements made by Bin He and his team at the University
of Minnesota suggest the potential of an EEG based brain-computer interface to accomplish
tasks close to invasive brain-computer interface. Using advanced functional neuroimaging
including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified
the co-variation and co-localization of electrophysiological and hemodynamic signals
induced by motor imagination. Refined by a neuroimaging approach and by a training
protocol, Bin He and co-workers demonstrated the ability of a non-invasive EEG based brain-
computer interface to control the flight of a virtual helicopter in 3-dimensional space, based
upon motor imagination. In June 2013 it was announced that Bin He had developed the
technique to enable a remote-control helicopter to be guided through an obstacle course.
In addition to a brain-computer interface based on brain waves, as recorded from
scalp EEG electrodes, Bin He and co-workers explored a virtual EEG signal-based brain-
computer interface by first solving the EEG inverse problem and then used the resulting
virtual EEG for brain-computer interface tasks. Well-controlled studies suggested the merits
of such a source analysis based brain-computer interface.
DRY ACTIVE ELECTRODE ARRAYS
In the early 1990s, Babak Taheri, at the University of California, Davis, demonstrated the first single-channel and also multichannel dry active electrode arrays using micro-machining. The
single channel dry EEG electrode construction and results were published in 1994. The
arrayed electrode was also demonstrated to perform well compared to Silver/Silver Chloride
electrodes. The device consisted of four sites of sensors with integrated electronics to reduce
noise by impedance matching.
The advantages of such electrodes are:
I. No electrolyte used,
II. No skin preparation,
III. Significantly reduced sensor size, and
IV. Compatibility with EEG monitoring systems.
The active electrode array is an integrated system made of an array of capacitive
sensors with local integrated circuitry housed in a package with batteries to power the
circuitry. This level of integration was required to achieve the functional performance
obtained by the electrode.
The electrode was tested on an electrical test bench and on human subjects in four
modalities of EEG activity, namely:
I. Spontaneous EEG,
II. Sensory event-related potentials,
III. Brain stem potentials, and
IV. Cognitive event-related potentials.
The performance of the dry electrode compared favorably with that of standard wet electrodes: it required no skin preparation or gel and provided a higher signal-to-noise ratio.
In 1999, researchers at Case Western Reserve University in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and
down, his beta-rhythm EEG output was analysed using software to identify patterns in the
noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the
signals were also used to drive the nerve controllers embedded in his hands, restoring some
movement.
PROSTHESIS CONTROL
Non-invasive BCIs have also been applied to enable brain-control of prosthetic upper
and lower extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz
University of Technology and colleagues demonstrated a BCI-controlled functional electrical
stimulation system to restore upper extremity movements in a person with tetraplegia due to
spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine
demonstrated for the first time that it is possible to use BCI technology to restore brain-
controlled walking after spinal cord injury. In their study, a person with paraplegia due to
spinal cord injury was able to operate a BCI-robotic gait orthosis to regain basic brain-
controlled ambulation.
MENTAL STRATEGIES AND BRAIN PATTERNS
Measuring brain activity effectively is a critical first step for brain–computer communication. However, measuring activity is not enough, because a BCI cannot read the mind or decipher thoughts in general. A BCI can only detect and classify specific patterns of
activity in the ongoing brain signals that are associated with specific tasks or events. What the
BCI user has to do to produce these patterns is determined by the mental strategy the BCI
system employs.
The mental strategy is the foundation of any brain computer communication. The
mental strategy determines what the user has to do to volitionally produce brain patterns that the BCI can interpret. The mental strategy also sets certain constraints on the hardware
and software of a BCI, such as the signal processing techniques to be employed later. The
amount of training required to successfully use a BCI also depends on the mental strategy.
The most common mental strategies are selective attention and motor imagery. In the
following, we briefly explain these different BCIs. More detailed information about different
BCI approaches and associated brain signals can be found in chapter “Brain Signals
for Brain–Computer Interfaces”.
SELECTIVE ATTENTION
BCIs based on selective attention require external stimuli provided by the BCI system. The stimuli can be auditory or somatosensory. Most BCIs, however, are based on visual
stimuli. That is, the stimuli could be different tones, different tactile stimulations, or flashing
lights with different frequencies. In a typical BCI setting, each stimulus is associated with a
command that controls the BCI application. In order to select a command, the users have to focus their attention on the corresponding stimulus. Let’s consider an example of a navigation/selection application, in which we want to move a cursor to items on a computer
screen and then we want to select them. A BCI based on selective attention could rely on five
stimuli. Four stimuli are associated with the commands for cursor movement: left, right, up,
and down.
The fifth stimulus is for the select command. This system would allow two
dimensional navigation and selection on a computer screen. Users operate this BCI by
focusing their attention on the stimulus that is associated with the intended command.
Assume the user wants to select an item on the computer screen which is one position above
and left of the current cursor position. The user would first need to focus on the stimulus that
is associated with the up command, then on the one for the left command, then select the item
by focusing on the stimulus associated with the select command. The items could represent a
wide variety of desired messages or commands, such as letters, words, and instructions to
move a wheelchair or robot arm, or signals to control a smart home.
A 5-choice BCI like this could be based on visual stimuli. In fact, visual attention can
be implemented with two different BCI approaches, which rely on somewhat different
stimuli, mental strategies, and signal processing. These approaches are named after the brain
patterns they produce, which are called P300 potentials and steady-state visual evoked
potentials (SSVEP). The BCIs employing these brain patterns are called P300 BCI and
SSVEP BCI, respectively.
A P300 based BCI relies on stimuli that flash in succession. These stimuli are usually
letters, but can be symbols that represent other goals, such as controlling a cursor, robot arm,
or mobile robot. Selective attention to a specific flashing symbol/letter elicits a brain pattern
called P300, which develops in centro-parietal brain areas about 300 ms after the presentation of the stimulus. The BCI can detect this P300. The BCI can then determine the
symbol/letter that the user intends to select.
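To illustrate the idea, the sketch below averages epochs time-locked to each flashing symbol and picks the symbol with the largest mean amplitude in a window around 300 ms. Real P300 spellers usually rely on trained classifiers rather than this bare averaging, and the sampling rate, window, and simulated data are assumptions made only for illustration:

    import numpy as np

    def detect_p300_target(epochs_by_symbol, fs=250.0, window=(0.25, 0.45)):
        """Pick the symbol whose averaged epoch is largest around 300 ms after stimulus onset.

        epochs_by_symbol: dict mapping symbol -> array of shape (n_repetitions, n_samples),
        where each epoch starts at stimulus onset (t = 0).
        """
        start, stop = int(window[0] * fs), int(window[1] * fs)
        scores = {}
        for symbol, epochs in epochs_by_symbol.items():
            average = np.mean(epochs, axis=0)              # averaging suppresses background EEG
            scores[symbol] = np.mean(average[start:stop])  # mean amplitude in the P300 window
        return max(scores, key=scores.get)

    # Hypothetical usage: 10 repetitions, 0.8 s epochs, symbol "B" carries a simulated P300.
    fs, n_samples = 250.0, 200
    rng = np.random.default_rng(0)
    epochs = {s: rng.normal(0, 1, (10, n_samples)) for s in "ABC"}
    epochs["B"][:, int(0.3 * fs):int(0.4 * fs)] += 3.0   # positive deflection near 300 ms
    print(detect_p300_target(epochs, fs=fs))             # expected: "B"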
Like a P300 BCI, an SSVEP based BCI requires a number of visual stimuli. Each
stimulus is associated with a specific command, which is associated with an output the BCI
can produce. In contrast to the P300 approach, however, these stimuli do not flash successively, but flicker continuously with different frequencies in the range of about 6–30 Hz. Paying attention to one of the flickering stimuli elicits an SSVEP in the visual cortex that has the same frequency as the target flicker. That is, if the targeted stimulus flickers at
16 Hz, the resulting SSVEP will also flicker at 16 Hz. Therefore, an SSVEP BCI can
determine which stimulus occupies the user’s attention by looking for SSVEP activity in the
visual cortex at a specific frequency. The BCI knows the flickering frequencies of all light
sources, and when an SSVEP is detected, it can determine the corresponding light source and
its associated command.
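A minimal sketch of this detection step follows, tied to the hypothetical five-command navigation example above: it estimates the power spectrum of an occipital channel and returns the command whose flicker frequency carries the most power. Practical SSVEP BCIs often also examine harmonics or use methods such as canonical correlation analysis, and the frequencies and command mapping here are assumptions:

    import numpy as np
    from scipy.signal import welch

    def detect_ssvep_command(signal, fs, freq_to_command):
        """Return the command whose flicker frequency shows the most power in the EEG.

        signal: 1-D recording from an occipital electrode;
        freq_to_command: e.g. {12.0: "up", ...}, mapping flicker frequencies to commands.
        """
        freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))   # ~0.5 Hz frequency resolution
        best = max(freq_to_command,
                   key=lambda f: psd[np.argmin(np.abs(freqs - f))])  # power at the nearest bin
        return freq_to_command[best]

    # Hypothetical 5-command setup matching the navigation example above.
    commands = {8.0: "left", 10.0: "right", 12.0: "up", 14.0: "down", 16.0: "select"}
    fs = 250.0
    t = np.arange(0, 4.0, 1.0 / fs)
    eeg = np.sin(2 * np.pi * 16.0 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)
    print(detect_ssvep_command(eeg, fs, commands))   # expected: "select"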
BCI approaches using selective attention are quite reliable across different users and
usage sessions, and can allow fairly rapid communication. Moreover, these approaches do not
require significant training. Users can produce P300s and SSVEPs without any training at all.
Almost all subjects can learn the simple task of focusing on a target letter or symbol within a
few minutes. Many types of P300 and SSVEP BCIs have been developed. For example, the
task described above, in which users move a cursor to a target and then select it, has been
validated with both P300 and SSVEP BCIs. One drawback of both P300 and SSVEP BCIs is that they may require the user to shift gaze. This is relevant because completely locked-in patients are not able to shift gaze anymore. Although P300 and SSVEP BCIs without gaze shifting are possible as well, BCIs that rely on visual attention seem to work best when users
can shift gaze. Another concern is that some people may dislike the external stimuli that are
necessary to elicit P300 or SSVEP activity.
MOTOR IMAGERY
Moving a limb or even contracting a single muscle changes brain activity in the cortex. In fact, even the preparation of a movement or the imagination of a movement changes the so-called sensorimotor rhythms. Sensorimotor rhythms (SMR) refer to
oscillations in brain activity recorded from somatosensory and motor areas.
Brain oscillations are typically categorized according to specific frequency bands which are named after Greek letters (delta: < 4 Hz, theta: 4–7 Hz, alpha: 8–12 Hz, beta: 12–30 Hz, gamma: > 30 Hz). Alpha activity recorded from sensorimotor areas is also called mu activity. The decrease of oscillatory activity in a specific frequency band is called event-related desynchronization (ERD). Correspondingly, the increase of oscillatory activity in a specific frequency band is called event-related synchronization (ERS). ERD/ERS patterns can be volitionally produced by motor imagery, which is the imagination of movement without actually performing the movement. The frequency bands that are most important for motor imagery are mu and beta in EEG signals. Invasive BCIs often also use gamma activity, which is hard to detect with electrodes mounted outside the head.
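To make the ERD/ERS definition concrete, the sketch below computes the percentage change in band power between a rest interval and a motor imagery interval. The sampling rate, band limits, synthetic signals, and the simple time-domain power estimate are illustrative assumptions rather than a particular published method:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def band_power(signal, fs, band):
        """Average power of `signal` within the given frequency band (Hz)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, signal)
        return np.mean(filtered ** 2)

    def erd_percent(rest, imagery, fs, band=(8.0, 12.0)):
        """ERD/ERS in percent: negative values mean desynchronization (power drop)."""
        reference = band_power(rest, fs, band)
        activity = band_power(imagery, fs, band)
        return (activity - reference) / reference * 100.0

    # Hypothetical example: the mu rhythm is attenuated during imagined movement.
    fs = 250.0
    t = np.arange(0, 3.0, 1.0 / fs)
    rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz mu rhythm at rest
    imagery = 0.7 * np.sin(2 * np.pi * 10 * t)   # attenuated mu rhythm during motor imagery
    print(round(erd_percent(rest, imagery, fs), 1), "%")   # -51.0 %: a clear ERD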
In contrast to BCIs based on selective attention, BCIs based on motor imagery do not
depend on external stimuli. However, motor imagery is a skill that has to be learned. BCIs
based on motor imagery usually do not work very well during the first session. Instead, unlike BCIs based on selective attention, some training is necessary. While performance and training time vary across subjects, most subjects can attain good control in a 2-choice task with 1–4 h of training.
However, longer training is often necessary to achieve sufficient control. Therefore,
training is an important component of many BCIs. Users learn through a process called
operant conditioning, which is a fundamental term in psychology. In operant conditioning,
people learn to associate a certain action with a response or effect. For example, people learn
that touching a hot stove is painful, and never do it again. In a BCI, a user who wants to move the cursor up may learn that mentally visualizing a certain motor task, such as clenching one’s fist, is less effective than thinking about the kinaesthetic experience of such a
movement. BCI learning is a special case of operant conditioning because the user is not
performing an action in the classical sense, since s/he does not move. Nonetheless, if
imagined actions produce effects, then conditioning can still occur. During BCI use, operant
conditioning involves training with feedback that is usually presented on a computer screen.
Positive feedback indicates that the brain signals are modulated in a desired way. Negative or
no feedback is given when the user was not able to perform the desired task. The feedback used during BCI learning is called neurofeedback.
The feedback indicates whether the user performed the mental task well or failed to
achieve the desired goal through the BCI. Users can utilize this feedback to optimize their
mental tasks and improve BCI performance. The feedback can be tactile or auditory, but most
often it is visual. Chapter “Neurofeedback Training for BCI Control” in this book presents
more details about neurofeedback and its importance in BCI research.
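The operant-conditioning loop described above can be summarized schematically as follows; acquire_epoch and measure_mu_power are hypothetical placeholders for the acquisition hardware and a band-power estimator, and the thresholds and cursor increments are illustrative only:

    # Schematic neurofeedback loop (illustrative sketch, not any particular BCI system).
    def neurofeedback_loop(acquire_epoch, measure_mu_power, baseline_mu_power, n_updates=100):
        cursor_y = 0.0
        for _ in range(n_updates):
            epoch = acquire_epoch()                     # latest EEG from a sensorimotor channel
            mu_power = measure_mu_power(epoch)          # e.g. power in the 8-12 Hz mu band
            if mu_power < 0.8 * baseline_mu_power:      # ERD detected: desired modulation
                cursor_y += 1.0                         # positive feedback: cursor moves up
            else:
                cursor_y -= 0.5                         # negative or no reward
            print(f"mu power {mu_power:.2f} -> cursor y = {cursor_y:.1f}")
        return cursor_y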
SIGNAL PROCESSING
A BCI measures brain signals and processes them in real time to detect certain
patterns that reflect the user’s intent. This signal processing can have three stages:
preprocessing, feature extraction, and detection and classification. Preprocessing aims at simplifying subsequent processing operations without losing relevant information. An important goal of preprocessing is to improve signal quality by improving the so-called signal-to-noise ratio (SNR). A bad or small SNR means that the brain patterns are
buried in the rest of the signal, which makes relevant patterns hard to detect. A good or large
SNR, on the other hand, simplifies the BCI’s detection and classification task.
Transformations combined with filtering techniques are often employed during preprocessing
in a BCI. Scientists use these techniques to transform the signals so unwanted signal
components can be eliminated or at least reduced. These techniques can improve the SNR.
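As one hedged illustration of such preprocessing, the sketch below combines a temporal bandpass filter with a common-average-reference spatial transformation; the multichannel layout, filter order, and 8–30 Hz band are assumptions, not requirements stated in the text:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess(raw, fs, band=(8.0, 30.0)):
        """Simple preprocessing: temporal bandpass plus a common-average-reference filter.

        raw: array of shape (n_channels, n_samples).
        """
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, raw, axis=1)                  # keep only the mu/beta range
        car = filtered - filtered.mean(axis=0, keepdims=True)   # subtract the channel average
        return car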
The brain patterns used in BCIs are characterized by certain features or properties.
For instance, amplitudes and frequencies are essential features of sensorimotor rhythms and
SSVEPs. The firing rate of individual neurons is an important feature of invasive BCIs using
intracortical recordings. The feature extraction algorithms of a BCI calculate (extract) these
features. Feature extraction can be seen as another step in preparing the signals to facilitate
the subsequent and last signal processing stage, detection and classification.
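A minimal sketch of such a feature extraction step, assuming band-power features of the kind mentioned above (the specific bands and the Welch spectral estimator are illustrative choices):

    import numpy as np
    from scipy.signal import welch

    def band_power_features(epoch, fs, bands=((8.0, 12.0), (16.0, 24.0))):
        """Extract log band-power features from one preprocessed epoch.

        epoch: array (n_channels, n_samples); returns a flat feature vector with one
        value per channel and frequency band. The bands are illustrative assumptions.
        """
        freqs, psd = welch(epoch, fs=fs, nperseg=min(epoch.shape[1], int(fs)))
        features = []
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs <= hi)
            features.append(np.log(psd[:, mask].mean(axis=1)))  # mean PSD in band, per channel
        return np.concatenate(features)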
Detection and classification of brain patterns is the core signal processing task in
BCIs. The user elicits certain brain patterns by performing mental tasks according to mental
strategies and the BCI detects and classifies these patterns and translates them into
appropriate commands for BCI applications.
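For the detection and classification stage, linear discriminant analysis (LDA) is one commonly used classifier in BCI research; the sketch below, with purely synthetic training data and hypothetical labels, shows how extracted feature vectors could be translated into commands (an illustration, not the method prescribed by the text):

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical training data: one feature vector per trial (e.g. from band-power features)
    # and one label per trial (0 = "left hand imagery", 1 = "right hand imagery").
    rng = np.random.default_rng(0)
    X_train = np.vstack([rng.normal(0.0, 1.0, (40, 6)), rng.normal(3.0, 1.0, (40, 6))])
    y_train = np.array([0] * 40 + [1] * 40)

    clf = LinearDiscriminantAnalysis()
    clf.fit(X_train, y_train)

    x_new = rng.normal(3.0, 1.0, (1, 6))          # features of a newly recorded trial
    command = ["left", "right"][int(clf.predict(x_new)[0])]
    print("decoded command:", command)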
This detection and classification process can be simplified when the user communicates with the BCI only in well defined time frames. Such a time frame is indicated by
the BCI by visual or acoustic cues. For example, a beep informs the user that s/he could send
a command during the upcoming time frame, which might last 2–6 s. During this time, the
user is supposed to perform a specific mental task. The BCI tries to classify the brain signals
recorded in this time frame. This type of BCI does not consider the possibility that the user
does not wish to communicate anything during one of these time frames, or that s/he wants to
communicate outside of a specified time frame.
This mode of operation is called synchronous or cue-paced. Correspondingly, a BCI employing this mode of operation is called a synchronous BCI or a cue-paced BCI. Although these BCIs are relatively easy to develop and use, they are impractical in many real-world settings. A cue-paced BCI is somewhat like a keyboard that can only be used at certain times.
In an asynchronous or self-paced BCI, users can interact with a BCI at their leisure,
without worrying about well defined time frames. Users may send a signal, or choose not to
use a BCI, whenever they want. Therefore, asynchronous BCIs or self-paced BCIs have to
analyse the brain signals continuously. This mode of operation is technically more
demanding, but it offers a more natural and convenient form of interaction with a BCI.
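A schematic sketch of cue-paced operation is given below; present_cue, record_trial, and classify are hypothetical placeholders for the stimulus presentation, data acquisition, and a trained classifier, and the timing values are illustrative:

    import time

    def run_synchronous_trials(present_cue, record_trial, classify,
                               n_trials=10, trial_seconds=4.0):
        """Cue-paced (synchronous) operation: classify only inside announced time frames."""
        commands = []
        for _ in range(n_trials):
            present_cue()                          # e.g. a beep announcing the upcoming frame
            signal = record_trial(trial_seconds)   # user performs the mental task in this window
            commands.append(classify(signal))      # only this frame's brain signals are decoded
            time.sleep(1.0)                        # short rest before the next cue
        return commands

    # An asynchronous (self-paced) BCI would instead classify a sliding window continuously
    # and additionally decide whether the user intends to communicate at all.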
6. ANIMAL BCI RESEARCH
Several laboratories have managed to record signals from monkey and rat cerebral
cortices to operate BCIs to produce movement. Monkeys have navigated computer cursors on
screen and commanded robotic arms to perform simple tasks simply by thinking about the
task and seeing the visual feedback, but without any motor output. In May 2008 photographs
that showed a monkey at the University of Pittsburgh Medical Center operating a robotic arm
by thinking were published in a number of well-known science journals and magazines. Other research on cats has decoded their neural visual signals.
In 1969 the operant conditioning studies of Fetz and colleagues, at the Regional
Primate Research Center and Department of Physiology and Biophysics, University of
Washington School of Medicine in Seattle, showed for the first time that monkeys could
learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing
rates of individual and multiple neurons in the primary motor cortex if they were rewarded
for generating appropriate patterns of neural activity.
Studies that developed algorithms to reconstruct movements from motor cortex
neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos
Georgopoulos at Johns Hopkins University found a mathematical relationship between the
electrical responses of single motor cortex neurons in rhesus macaque monkeys and the
direction in which they moved their arms. He also found that dispersed groups of neurons, in different areas of the monkeys' brains, collectively controlled motor commands, but he was able
to record the firings of neurons in only one area at a time, because of the technical limitations
imposed by his equipment.
There has been rapid development in BCIs since the mid-1990s. Several groups have
been able to capture complex brain motor cortex signals by recording from neural ensembles
(groups of neurons) and using these to control external devices.
Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the
first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into
monkeys.
[Figure: Yang Dan and colleagues' recordings of cat vision using a BCI implanted in the lateral geniculate nucleus]
In 1999, researchers led by Yang Dan at the University of California, Berkeley
decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain's sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus's lateral geniculate
nucleus area, which decodes signals from the retina. The cats were shown eight short movies,
and their neuron firings were recorded. Using mathematical filters, the researchers decoded
the signals to generate movies of what the cats saw and were able to reconstruct recognizable
scenes and moving objects. Similar results in humans have since been achieved by
researchers in Japan.
Miguel Nicolelis, a professor at Duke University, in Durham, North Carolina, has
been a prominent proponent of using multiple electrodes spread over a greater area of the
brain to obtain neuronal signals to drive a BCI. Such neural ensembles are said to reduce the
variability in output produced by single electrodes, which could make it difficult to operate a
BCI.
After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues
developed BCIs that decoded brain activity in owl monkeys and used the devices to
reproduce monkey movements in robotic arms. Monkeys have advanced reaching and
grasping abilities and good hand manipulation skills, making them ideal test subjects for this
kind of work.
By 2000 the group succeeded in building a BCI that reproduced owl monkey
movements while the monkey operated a joystick or reached for food. The BCI operated in
real time and could also control a separate robot remotely over Internet protocol. But the
monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.
[Figure: Diagram of the BCI developed by Miguel Nicolelis and colleagues for use on rhesus monkeys]
Later experiments by Nicolelis using rhesus monkeys succeeded in closing the
feedback loop and reproduced monkey reaching and grasping movements in a robot arm.
With their deeply cleft and furrowed brains, rhesus monkeys are considered to be better
models for human neurophysiology than owl monkeys. The monkeys were trained to reach
and grasp objects on a computer screen by manipulating a joystick while corresponding
movements by a robot arm were hidden. The monkeys were later shown the robot directly
and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted hand-gripping force.
Other laboratories which have developed BCIs and algorithms that decode neuron
signals include those run by John Donoghue at Brown University, Andrew Schwartz at the
University of Pittsburgh and Richard Andersen at Caltech. These researchers have been able
to produce working BCIs, even using recorded signals from far fewer neurons than did
Nicolelis.
Donoghue's group reported training rhesus monkeys to use a BCI to track visual
targets on a computer screen (closed-loop BCI) with or without assistance of a joystick.
Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also
reproduced BCI control in a robotic arm. The same group also created headlines when they
demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a
robotic arm controlled by the animal's own brain signals.
Andersen's group used recordings of premovement activity from the posterior parietal
cortex in their BCI, including signals created when experimental animals anticipated
receiving a reward.
In addition to predicting kinematic and kinetic parameters of limb movements, BCIs
that predict electromyographic or electrical activity of the muscles of primates are being
developed. Such BCIs could be used to restore mobility in paralyzed limbs by electrically
stimulating muscles.
Miguel Nicolelis and colleagues demonstrated that the activity of large neural
ensembles can predict arm position. This work made possible the creation of BCIs that read arm
movement intentions and translate them into movements of artificial actuators. Carmena and
colleagues programmed the neural coding in a BCI that allowed a monkey to control reaching
and grasping movements by a robotic arm. Lebedev and colleagues argued that brain
networks reorganize to create a new representation of the robotic appendage in addition to the
representation of the animal's own limbs.
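At its simplest, reading arm position from the firing rates of a large neural ensemble is a linear decoding problem. The following is a minimal illustrative sketch using simulated firing rates and ordinary least-squares regression; it is not the decoder actually used by Nicolelis and colleagues, whose work relied on more elaborate filters.

```python
# Minimal sketch of linear population decoding: predict 2-D arm position
# from the firing rates of a simulated neural ensemble. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_samples = 50, 2000
true_position = np.column_stack([np.sin(np.linspace(0, 20, n_samples)),
                                 np.cos(np.linspace(0, 20, n_samples))])

# Each simulated neuron is tuned to a random combination of x and y, plus noise.
tuning = rng.normal(size=(2, n_neurons))
rates = true_position @ tuning + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Fit a linear decoder on the first half of the data, evaluate on the second half.
train, test = slice(0, 1000), slice(1000, 2000)
weights, *_ = np.linalg.lstsq(rates[train], true_position[train], rcond=None)
predicted = rates[test] @ weights

corr_x = np.corrcoef(predicted[:, 0], true_position[test, 0])[0, 1]
print(f"correlation between decoded and true x-position: {corr_x:.2f}")
```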
The biggest impediment to BCI technology at present is the lack of a sensor modality
that provides safe, accurate and robust access to brain signals. It is conceivable, or even likely,
that such a sensor will be developed within the next twenty years. The use of such a
sensor should greatly expand the range of communication functions that can be provided
using a BCI.
Development and implementation of a BCI system is complex and time consuming.
In response to this problem, Gerwin Schalk has been developing a general-purpose system for
BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led
by the Brain–Computer Interface R&D Program at the Wadsworth Center of the New York
State Department of Health in Albany, New York, USA.
A new 'wireless' approach uses light-gated ion channels such as Channelrhodopsin to
control the activity of genetically defined subsets of neurons in vivo. In the context of a
simple learning task, illumination of transfected cells in the somatosensory cortex influenced
the decision making process of freely moving mice.
The use of BMIs has also led to a deeper understanding of neural networks and the
central nervous system. Research has shown that despite the inclination of neuroscientists to
believe that neurons have the most effect when working together, single neurons can be
conditioned through the use of BMIs to fire in a pattern that allows primates to control motor
outputs. The use of BMIs has led to the development of the single-neuron insufficiency principle,
which states that even with a well-tuned firing rate, single neurons can only carry a limited
amount of information, and therefore the highest level of accuracy is achieved by recording
firings of the collective ensemble. Other principles discovered with the use of BMIs include
the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy
principle, and the plasticity principle.
7. HUMAN BCI RESEARCH
INVASIVE BCIs
Invasive BCI research has targeted repairing damaged sight and providing new
functionality for people with paralysis. Invasive BCIs are implanted directly into the grey
matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices
produce the highest quality signals of BCI devices but are prone to scar-tissue build-up,
causing the signal to become weaker, or even non-existent, as the body reacts to a foreign
object in the brain.
In vision science, direct brain implants have been used to treat non-congenital
(acquired) blindness. One of the first scientists to produce a working brain interface to restore
sight was private researcher William Dobelle.
Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in
1978. A single-array BCI containing 68 electrodes was implanted onto Jerry’s visual cortex
and succeeded in producing phosphenes, the sensation of seeing light. The system included
cameras mounted on glasses to send signals to the implant. Initially, the implant allowed
Jerry to see shades of grey in a limited field of vision at a low frame-rate. This also required
him to be hooked up to a mainframe computer, but shrinking electronics and faster computers
made his artificial eye more portable and later enabled him to perform simple tasks unassisted.
In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16
paying patients to receive Dobelle’s second generation implant, marking one of the earliest
commercial uses of BCIs. The second generation device used a more sophisticated implant
enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out
across the visual field in what researchers call "the starry-night effect". Immediately after his
implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly
around the parking area of the research institute. Unfortunately, Dobelle died in 2004 before
his processes and developments were documented. Subsequently, when Mr. Naumann and
the other patients in the program began having problems with their vision, there was no relief
and they eventually lost their "sight" again. Naumann wrote about his experience with
Dobelle's work in Search for Paradise: A Patient's Account of the Artificial Vision
Experiment and has returned to his farm in Southeast Ontario, Canada, to resume his normal
activities.
BCIs focusing on motor neuroprosthetics aim to either restore movement in
individuals with paralysis or provide devices to assist them, such as interfaces with computers
or robot arms.
Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay,
were first to install a brain implant in a human that produced signals of high enough quality
to simulate movement. Their patient, Johnny Ray (1944–2002), developed ‘locked-in
syndrome’ after suffering a brain-stem stroke in 1997. Ray’s implant was installed in 1998
and he lived long enough to start working with the implant, eventually learning to control a
computer cursor; he died in 2002 of a brain aneurysm.
Tetraplegic Matt Nagle became the first person to control an artificial hand using a
BCI in 2005 as part of the first nine-month human trial of Cyberkinetics’s BrainGate chip-
implant. Implanted in Nagle’s right precentral gyrus (area of the motor cortex for arm
movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by
thinking about moving his hand as well as a computer cursor, lights and TV. One year later,
professor Jonathan Wolpaw received the prize of the Altran Foundation for Innovation to
develop a Brain Computer Interface with electrodes located on the surface of the skull,
instead of directly in the brain.
More recently, research teams led by the Braingate group at Brown University and a
group led by University of Pittsburgh Medical Center, both in collaborations with the United
States Department of Veterans Affairs, have demonstrated further success in direct control of
robotic prosthetic limbs with many degrees of freedom using direct connections to arrays of
neurons in the motor cortex of patients with tetraplegia.
PARTIALLY INVASIVE BCIs
Partially invasive BCI devices are implanted inside the skull but rest outside the brain
rather than within the grey matter. They produce better-resolution signals than non-invasive
BCIs, where the bone tissue of the cranium deflects and deforms signals, and carry a lower risk
of forming scar tissue in the brain than fully invasive BCIs.
Electrocorticography (ECoG) measures the electrical activity of the brain taken from
beneath the skull in a similar way to non-invasive electroencephalography, but the electrodes
are embedded in a thin plastic pad that is placed above the cortex, beneath the dura mater.
ECoG technologies were first trialed in humans in 2004 by Eric Leuthardt and Daniel Moran
from Washington University in St Louis. In a later trial, the researchers enabled a teenage
boy to play Space Invaders using his ECoG implant. This research indicates that control is
rapid, requires minimal training, and may be an ideal tradeoff with regards to signal fidelity
and level of invasiveness.
Signals can be either subdural or epidural, but are not taken from within the brain
parenchyma itself. ECoG had not been studied extensively until recently because of limited
access to subjects. Currently, the only way to acquire the signal for study is through
patients who require invasive monitoring for the localization and resection of an epileptogenic
focus.
ECoG is a very promising intermediate BCI modality because it has higher spatial
resolution, better signal-to-noise ratio, wider frequency range, and fewer training requirements
than scalp-recorded EEG, and at the same time lower technical difficulty, lower clinical
risk, and probably superior long-term stability compared with intracortical single-neuron
recording. This feature profile, together with recent evidence of a high level of control with
minimal training requirements, shows potential for real-world application for people with motor
disabilities.
Light Reactive Imaging BCI devices are still in the realm of theory. These would
involve implanting a laser inside the skull. The laser would be trained on a single neuron and
the neuron's reflectance measured by a separate sensor. When the neuron fires, the laser light
pattern and wavelengths it reflects would change slightly. This would allow researchers to
monitor single neurons but require less contact with tissue and reduce the risk of scar-tissue
build-up.
NON-INVASIVE BCIs
As well as invasive experiments, there have also been experiments in humans using
non-invasive neuroimaging technologies as interfaces. Signals recorded in this way have been
used to power muscle implants and restore partial movement in an experimental volunteer.
Although they are easy to wear, non-invasive devices produce poor signal resolution
because the skull dampens signals, dispersing and blurring the electromagnetic waves created
by the neurons. Although the waves can still be detected it is more difficult to determine the
area of the brain that created them or the actions of individual neurons.
NEUROGAMING
Currently, there is a new field of gaming called Neurogaming, which uses non-
invasive BCIs to improve gameplay so that users can interact with a console without
a traditional controller. Some Neurogaming software uses a player's brain waves,
heart rate, expressions, pupil dilation, and even emotions to complete tasks or affect the mood
of the game. For example, game developers at Emotiv have created a non-invasive BCI that will
determine the mood of a player and adjust music or scenery accordingly. This new form of
interaction between player and software will enable a player to have a more realistic gaming
experience. Because there will be less disconnect between a player and console,
Neurogaming will allow individuals to utilize their "psychological state" and have their
reactions transfer to games in real-time. However, since Neurogaming is still in its first
stages, not much is written about the new industry. The first NeuroGaming Conference was
held in San Francisco on May 1–2, 2013.
Biofeedback is used to monitor a subject’s mental relaxation. In some cases,
biofeedback does not monitor electroencephalography (EEG), but instead bodily parameters
such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability
(HRV). Many biofeedback systems are used to treat certain disorders such as attention deficit
hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain.
EEG biofeedback systems typically monitor four different bands (theta: 4–7 Hz, alpha: 8–12
Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them. Passive BCI
involves using BCI to enrich human–machine interaction with implicit information on the
actual user’s state, for example, simulations to detect when users intend to push brakes during
an emergency car stopping procedure. Game developers using passive BCIs need to
acknowledge that through repetition of game levels the user’s cognitive state will change or
adapt. Within the first play of a level, the user will react to things differently from during the
second play: for example, the user will be less surprised at an event in the game if he/she is
expecting it.
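The band definitions above lend themselves to a straightforward power computation. Below is a minimal sketch, assuming a single simulated EEG channel and SciPy's Welch estimator; the sampling rate and signal are invented for illustration, and real systems would read from an amplifier.

```python
# Sketch of the band-power computation a biofeedback system might use for
# the bands mentioned above (theta, alpha, SMR, beta). Simulated single channel.
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # fake alpha-dominated EEG

bands = {"theta": (4, 7), "alpha": (8, 12), "SMR": (12, 15), "beta": (15, 18)}

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2-second windows
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs <= hi)
    power = np.trapz(psd[mask], freqs[mask])     # integrate the PSD over the band
    print(f"{name:5s} ({lo}-{hi} Hz): {power:.3f}")
```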
SYNTHETIC TELEPATHY/SILENT COMMUNICATION
In a $6.3 million Army initiative to invent devices for telepathic communication,
Gerwin Schalk, supported by a $2.2 million grant, found that it is possible to use ECoG
signals to discriminate the vowels and consonants embedded in spoken and imagined
words. The results shed light on the distinct mechanisms associated with production of
vowels and consonants, and could provide the basis for brain-based communication using
imagined speech.
Research into synthetic telepathy using subvocalization is taking place at the
University of California, Irvine under lead scientist Mike D'Zmura. The first such
communication took place in the 1960s using EEG to create Morse code using brain alpha
waves. Using EEG to communicate imagined speech is less accurate than the invasive
method of placing an electrode between the skull and the brain. On February 27, 2013 Duke
University researchers successfully connected the brains of two rats with electronic
interfaces that allowed them to directly share information, in the first-ever direct brain-to-
brain interface.
8. OTHER RESEARCH
Electronic neural networks have been deployed which shift the learning phase from
the user to the computer. Experiments by scientists at the Fraunhofer Society in 2004 using
neural networks led to noticeable improvements within 30 minutes of training.
Experiments by Eduardo Miranda at the University of Plymouth in the UK have aimed
to use EEG recordings of mental activity associated with music to allow disabled people to
express themselves musically through an encephalophone. Ramaswamy Palaniappan has
pioneered the development of BCI for use in biometrics to identify or authenticate a person.
The method has also been suggested for use as a PIN generation device (for example, in ATM
and internet banking transactions). The group, which is now at the University of Wolverhampton,
has previously developed analogue cursor control using thoughts.
Researchers at the University of Twente in the Netherlands have been conducting
research on using BCIs for non-disabled individuals, proposing that BCIs could improve error
handling, task performance, and user experience and that they could broaden the user
spectrum. They particularly focused on BCI games, suggesting that BCI games could provide
challenge, fantasy and sociality to game players and could, thus, improve player experience.
The Emotiv Company has been selling a commercial video game controller, known as
The Epoc, since December 2009. The Epoc uses electromagnetic sensors.
The first BCI session with 100% accuracy (based on 80 right-hand and 80 left-hand
movement imaginations) was recorded in 1998 by Christoph Guger. The BCI system used 27
electrodes overlying the sensorimotor cortex, weighted the channels with Common
Spatial Patterns, calculated the running variance, and classified the result with linear
discriminant analysis.
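That pipeline of spatial filtering with Common Spatial Patterns, variance features, and linear discriminant analysis remains a standard recipe for motor-imagery BCIs. The sketch below runs the textbook version on simulated two-class trials; it is not Guger's original system, and the trial counts, data, and filter choices are assumptions.

```python
# Minimal sketch of a CSP + log-variance + LDA pipeline of the kind described
# above, run on simulated two-class trials (channels x samples). Illustrative only.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 80, 27, 512

def make_trials(scale_first_channel):
    # Toy data: one class has more variance on the first channel.
    x = rng.normal(size=(n_trials, n_channels, n_samples))
    x[:, 0, :] *= scale_first_channel
    return x

left, right = make_trials(2.0), make_trials(0.5)

def mean_cov(trials):
    return np.mean([np.cov(tr) for tr in trials], axis=0)   # channel covariance

c_left, c_right = mean_cov(left), mean_cov(right)
# CSP: generalized eigen-decomposition of the two class covariances.
eigvals, eigvecs = eigh(c_left, c_left + c_right)
order = np.argsort(eigvals)
filters = eigvecs[:, np.concatenate([order[:3], order[-3:]])]  # 6 CSP filters

def features(trials):
    projected = np.einsum("cf,tcs->tfs", filters, trials)
    return np.log(projected.var(axis=2))          # log-variance features

X = np.vstack([features(left), features(right)])
y = np.array([0] * n_trials + [1] * n_trials)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])    # train on even trials
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))  # test on odd trials
```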
Research is ongoing into military use of BCIs and since the 1970s DARPA has been
funding research on this topic. The current focus of research is user-to-user communication
through analysis of neural signals. The project "Silent Talk" aims to detect and analyze the
word-specific neural signals, using EEG, which occur before speech is vocalized, and to see if
the patterns are generalizable.
A VEP (visually evoked potential) is an electrical potential recorded after a subject is
presented with a visual stimulus. There are several types of VEPs.
Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting
the retina, using visual stimuli modulated at certain frequencies. SSVEP’s stimuli are often
formed from alternating checkerboard patterns and at times simply use flashing images. The
frequency of the phase reversal of the stimulus used can be clearly distinguished in the
spectrum of an EEG; this makes detection of SSVEP stimuli relatively easy. SSVEP has proved
to be successful within many BCI systems. This is due to several factors: the signal elicited is
measurable in as large a population as the transient VEP, and blink and electrocardiographic
artefacts do not affect the frequencies monitored. In addition, the SSVEP signal is exceptionally
robust; the topographic organization of the primary visual cortex is such that a broad area
receives afferents from the central, or foveal, region of the visual field. SSVEP does, however,
have several problems. Because SSVEPs use flashing stimuli to infer a user’s intent, the user
must gaze at one of the flashing or iterating symbols in order to interact with the system. The
symbols are therefore likely to become irritating and uncomfortable to use during longer play
sessions, which can often last more than an hour, making them less than ideal for gameplay.
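Because the stimulation frequency shows up directly in the EEG spectrum, a minimal SSVEP detector can simply compare spectral power at the candidate flicker frequencies. The sketch below does that on a simulated occipital channel; the frequencies and the 4-second window are assumptions, and practical systems usually also examine harmonics or use methods such as canonical correlation analysis.

```python
# Sketch of the simple spectral detection described above: compare EEG power
# at each candidate flicker frequency and pick the strongest one.
import numpy as np

fs = 250
t = np.arange(0, 4, 1 / fs)                    # 4-second analysis window
stimulus_freqs = [8.0, 10.0, 12.0, 15.0]       # Hz, one per on-screen target

# Simulate a user attending the 12 Hz target.
eeg = np.sin(2 * np.pi * 12.0 * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def band_power(f0, half_width=0.5):
    mask = np.abs(freqs - f0) <= half_width
    return spectrum[mask].sum()

powers = {f: band_power(f) for f in stimulus_freqs}
print("detected target frequency:", max(powers, key=powers.get), "Hz")
```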
Another type of VEP used with applications is the P300 potential. The P300 event-
related potential is a positive peak in the EEG that occurs at roughly 300 ms after the
appearance of a target stimulus (a stimulus for which the user is waiting or seeking) or an
oddball stimulus. The P300 amplitude decreases as the target stimuli and the ignored stimuli
grow more similar. The P300 is thought to be related to a higher-level attention process or
an orienting response. Using P300 as a control scheme has the advantage that the participant
only has to attend limited training sessions. The first application to use the P300 model
was the P300 matrix. Within this system, a subject would choose a letter from a grid of 6 by
6 letters and numbers. The rows and columns of the grid flashed sequentially and every
time the selected “choice letter” was illuminated the user’s P300 was (potentially) elicited.
However, the communication process, at approximately 17 characters per minute, was
quite slow. The P300 is a BCI that offers a discrete selection rather than a continuous control
mechanism. The advantage of P300 use within games is that the player does not have to
teach himself/herself how to use a completely new control system and so only has to
undertake short training instances, to learn the gameplay mechanics and basic use of the
BCI paradigm.
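The row/column logic of the P300 matrix can be illustrated in a few lines: average the epochs following each row and column flash and pick the pair with the strongest response around 300 ms. The sketch below uses simulated epochs and a raw amplitude measure; a real speller would classify epochs with a trained model.

```python
# Sketch of P300 matrix selection on simulated data: the attended row and
# column evoke a P300-like bump near 300 ms, and the speller picks the pair
# of flashes with the largest averaged response.
import numpy as np

rng = np.random.default_rng(2)
fs = 256
epoch = np.arange(0, 0.6, 1 / fs)              # 600 ms epoch after each flash
grid = np.array(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")).reshape(6, 6)
target_row, target_col = 2, 4                  # the user attends grid[2, 4]

def simulate_epoch(is_target):
    eeg = 0.3 * rng.normal(size=epoch.size)
    if is_target:                              # add a P300-like bump near 300 ms
        eeg += np.exp(-((epoch - 0.3) ** 2) / (2 * 0.05 ** 2))
    return eeg

def average_response(index, is_row, n_flashes=15):
    hit = (index == target_row) if is_row else (index == target_col)
    avg = np.mean([simulate_epoch(hit) for _ in range(n_flashes)], axis=0)
    window = (epoch >= 0.25) & (epoch <= 0.45)  # amplitude around 300 ms
    return avg[window].mean()

best_row = max(range(6), key=lambda r: average_response(r, is_row=True))
best_col = max(range(6), key=lambda c: average_response(c, is_row=False))
print("selected letter:", grid[best_row, best_col])
```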
BCIs can provide discrete or proportional output. A simple discrete output could be
“yes” or “no”, or it could be a particular value out of N possible values. Proportional output
could be a continuous value within the range of a certain minimum and maximum.
Depending on the mental strategy and on the brain patterns used, some BCIs are more
suitable for providing discrete output values, while others are more suitable for allowing
proportional control. A P300 BCI, for instance, is particularly appropriate for selection
applications. SMR-based BCIs have been used for discrete control, but are best suited to
proportional control applications such as 2-dimensional cursor control. In fact, the range of
possible BCI applications is very broad – from very simple to complex. BCIs have been
validated with many applications, including spelling devices, simple computer games,
environmental control, navigation in virtual reality, and generic cursor control applications.
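The distinction between discrete and proportional output can be made concrete with a small sketch. The class probabilities and SMR-power feature below are assumed stand-ins for the output of a trained BCI, not part of any particular system.

```python
# Sketch of discrete vs. proportional BCI output. The inputs stand in for the
# output of a trained classifier / feature extractor (assumed, illustrative).

def discrete_output(class_probabilities, labels=("yes", "no")):
    # Discrete: pick one of N possible values (here N = 2).
    best = max(range(len(labels)), key=lambda i: class_probabilities[i])
    return labels[best]

def proportional_output(smr_power, rest_power=1.0, gain=2.0,
                        v_min=-1.0, v_max=1.0):
    # Proportional: map a continuous feature onto a bounded cursor velocity.
    velocity = gain * (smr_power - rest_power)
    return max(v_min, min(v_max, velocity))

print(discrete_output([0.8, 0.2]))       # -> "yes"
print(proportional_output(1.3))          # -> 0.6 (cursor moves slowly to the right)
```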
Most of these applications run on conventional computers that host the BCI system
and the application as well. Typically, the application is specifically tailored for a particular
type of BCI, and often the application is an integral part of the BCI system. BCIs that can
connect to and effectively control a range of already existing assistive devices, software, and
appliances are rare. An increasing number of systems allow control of more sophisticated
devices, including orthoses, prostheses, robotic arms, and mobile robots. There is little
argument that such an interface would be a boon to BCI research. BCIs can control any
application that other interfaces can control, provided these applications can function
effectively with the low information throughput of BCIs. On the other hand, BCIs are
normally not well suited to controlling more demanding and complex applications, because
they lack the necessary information transfer rate. Consider, for example, a patient using a
BCI spelling application. The patient is thirsty and wants to convey that she wants to drink
some water now. She might perform this task by selecting each individual letter and writing
the message “water, please” or just “water”. Since this is a wish the patient may have quite
often, it would be useful to have a special symbol or command for this message. In this way,
the patient can convey this particular message much faster, ideally with just one mental task.
Many more short cuts might allow other tasks, but these short cuts lack the flexibility
of writing individual messages. Therefore, an ideal BCI would allow a combination of simple
commands to convey information flexibly and short cuts that allow specific, common,
complex commands.
Examples of BCI applications
In other words, the BCI should allow a combination of process-oriented (or low-level)
control and goal-oriented (or high-level) control. Low-level control means the user has to
manage all the intricate interactions involved in achieving a task or goal, such as spelling the
individual letters of a message. In contrast, goal-oriented or high-level control means the
users simply communicate their goal to the application. Such applications need to be
sufficiently intelligent to autonomously perform all necessary sub-tasks to achieve the goal.
In any interface, users should not be required to control unnecessary low-level details of
system operation. This is especially important with BCIs. Allowing low-level control of a
wheelchair or robot arm, for example, would not only be slow and frustrating but
potentially dangerous.
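The contrast between low-level and high-level control can be sketched in a few lines; the shortcut names below are hypothetical examples rather than commands from any real system.

```python
# Sketch contrasting low-level (letter-by-letter) and high-level (goal-oriented)
# control. The command names are hypothetical examples.

def spell_message(selected_letters):
    # Low-level: the user selects every letter individually, so the message
    # costs one BCI selection per character.
    return "".join(selected_letters)

SHORTCUTS = {
    "WATER": "Please bring me a glass of water.",
    "TV_ON": "Turn on the television.",
}

def high_level_command(symbol):
    # High-level: a single selection triggers a complete, common request;
    # the application handles the details autonomously.
    return SHORTCUTS.get(symbol, "unknown command")

print(spell_message(list("water")))   # five selections
print(high_level_command("WATER"))    # one selection
```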
The semi-autonomous wheelchair Rolland III can deal with different input modalities,
such as low-level joystick control or high-level discrete control. Autonomous and
semi-autonomous navigation is supported.
Semi-autonomous assistive devices
The rehabilitation robot FRIEND II is a semi-autonomous system designed to assist
disabled people in activities of daily living. It is a system based on a conventional wheelchair
equipped with a stereo camera system, a robot arm with 7 degrees-of-freedom, a gripper
with force/torque sensor, a smart tray with tactile surface and weight sensors, and a
computing unit consisting of three independent industrial PCs. FRIEND II can perform
certain operations completely autonomously. An example of such an operation is a “pour in
beverage” scenario. In this scenario, the system detects the bottle and the glass (both
located at arbitrary positions on the tray), grabs the bottle, moves the bottle to the glass
while automatically avoiding any obstacles on the tray, fills the glass with liquid from the
bottle while avoiding pouring too much, and finally puts the bottle back in its original
position – again avoiding any possible collisions.
These assistive devices offload much of the work from the user onto the system. The
wheelchair provides safety and high-level control by continuous path planning and obstacle
avoidance. The rehabilitation robot offers a collection of tasks which are performed
autonomously and can be initiated by single commands. Without this device intelligence,
the user would need to directly control many aspects of device operation. Consequently,
controlling a wheelchair, a robot arm, or any complex device with a BCI would be almost
impossible, or at least very difficult, time consuming, frustrating, and in many cases even
dangerous. Such complex BCI applications are not broadly available, but are still topics of
research and are being evaluated in research laboratories. The success of these applications,
or actually of any BCI application, will depend on their reliability and on their acceptance by
users.
Another factor is whether these applications provide a clear advantage over
conventional assistive technologies. In the case of completely locked-in patients, alternative
control and communication methodologies do not exist. BCI control and communication is
usually the only possible practical option. However, the situation is different with less
severely disabled or healthy users, since they may be able to communicate through natural
means like speech and gesture, and alternative control and communication technologies
based on movement are available to them such as keyboards or eye tracking systems. Until
recently, it was assumed that users would only use a BCI if other means of communication
were unavailable. More recent work showed a user who preferred a BCI over an eye
tracking system. Although BCIs are gaining acceptance with broader user groups, there are
many scenarios where BCIs remain too slow and unreliable for effective control. For
example, most prostheses cannot be effectively controlled with a BCI.
Typically, prostheses for the upper extremities are controlled by electromyographic
(myoelectric) signals recorded from residual muscles of the amputation stump. In the case
of transradial amputation (forearm amputation), the muscle activity recorded by electrodes
over the residual flexor and extensor muscles is used to open, close, and rotate a hand
prosthesis. Controlling such a device with a BCI is not practical. For higher amputations,
however, as the number of degrees-of-freedom of the prosthesis increases, conventional
myoelectric control of the prosthetic arm and hand becomes very difficult. Controlling such a device by a
BCI may seem to be an option. In fact, several approaches have been investigated to control
prostheses with invasive and non-invasive BCIs. Ideally, the control of prostheses should
provide highly reliable, intuitive, simultaneous, and proportional control of many degrees-
of-freedom. In order to provide sufficient flexibility, low-level control is required.
Proportional control in this case means the user can modulate speed and force of the
actuators in the prosthesis. “Simultaneous” means that several degrees-of-freedom (joints)
can be controlled at the same time. That is, for instance, the prosthetic hand can be closed
while the wrist of the hand is rotated at the same time. “Intuitive” means that learning to
control the prosthesis should be easy. None of the BCI approaches that have been currently
suggested for controlling prostheses meets these criteria. Non-invasive approaches suffer
from limited bandwidth and will not be able to provide complex, high-bandwidth control
in the near future. Invasive approaches show considerably more promise for such control in
the near future. However, these approaches will then need to demonstrate that they have
clear advantages over other methodologies such as myoelectric control combined with
targeted muscle reinnervation (TMR).
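Conventional proportional myoelectric control of the kind described above boils down to estimating an EMG amplitude envelope and mapping it to actuator speed. The sketch below shows that idea on a simulated EMG burst; the filter settings, threshold, and mapping constants are arbitrary illustrative choices.

```python
# Sketch of proportional myoelectric control: rectify and low-pass filter the
# EMG to obtain an amplitude envelope, then map the envelope to the speed of
# a prosthetic hand actuator. Signal and constants are simulated/arbitrary.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                        # EMG sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
burst = (t > 0.5) & (t < 1.5)                    # muscle active for 1 second
emg = np.random.randn(t.size) * (0.1 + 0.9 * burst)

rectified = np.abs(emg)
b, a = butter(4, 5 / (fs / 2))                   # 5 Hz low-pass -> envelope
envelope = filtfilt(b, a, rectified)

# Proportional mapping: envelope above a resting threshold drives hand speed.
threshold, max_speed = 0.2, 1.0
speed = np.clip((envelope - threshold) / (envelope.max() - threshold), 0, 1) * max_speed
print("peak commanded hand-closing speed:", round(speed.max(), 2))
```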
TMR is a surgical technique that transfers residual arm nerves to alternative muscle
sites. After reinnervation, these target muscles produce myoelectric signals
(electromyographic signals) on the surface of the skin that can be measured and used to
control prosthetic devices. For example, in persons who have had their arm removed at the
shoulder (called “shoulder disarticulation amputees”), residual peripheral nerves of arm and
hand are transferred to separate regions of the pectoralis muscles.
Prototype of prosthesis
Today, there is no BCI that can allow independent control of 7 different degrees of
freedom, which is necessary to duplicate all the movements that a natural arm could make.
On the other hand, sufficiently independent control signals can be derived from the
myoelectric signals recorded from the pectoralis. Moreover, control is largely intuitive, since
users invoke muscle activity in the pectoralis in a similar way as they did to invoke
movement of their healthy hand and arm. For instance, the users’ intent to open the hand
of their “phantom limb” results in particular myoelectric activity patterns that can be
recorded from the pectoralis, and can be translated into control commands that open the
prosthetic hand correspondingly. Because of this intuitive control feature, TMR-based
prosthetic devices can also be seen as thought-controlled neuroprostheses. Clearly, TMR
holds the promise to improve the operation of complex prosthetic systems. BCI approaches
(non-invasive and invasive) will need to demonstrate clinical and commercial advantages
over TMR approaches in order to be viable.
The example with prostheses underscores a problem and an opportunity for BCI
research. The problem is that BCIs cannot provide effective control because they cannot
provide sufficient reliability and bandwidth (information per second). Similarly, the
bandwidth and reliability of modern BCIs is far too low for many other goals that are fairly
easy with conventional interfaces. Rapid communication, most computer games, detailed
wheelchair navigation, and cell phone dialing are only a few examples of goals that require a
regular interface.
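One common way to make "information per second" concrete is the Wolpaw information transfer rate, a standard convention in the BCI literature although it is not named in the text above. The short sketch below computes it for an assumed 36-symbol speller at 90% accuracy; the numbers are illustrative only.

```python
# Wolpaw information transfer rate (a standard BCI metric, used here for
# illustration): bits per selection = log2(N) + P*log2(P)
#                                     + (1-P)*log2((1-P)/(N-1)).
import math

def bits_per_selection(n_classes, accuracy):
    p, n = accuracy, n_classes
    if p >= 1.0:
        return math.log2(n)
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

# Example: a 36-symbol P300 speller at 90% accuracy, 5 selections per minute.
bits = bits_per_selection(36, 0.90)
print(f"{bits:.2f} bits/selection, {bits * 5:.1f} bits/minute")
```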
Does this mean that BCIs will remain limited to severely disabled users? We think
not, for several reasons. First, as noted above, there are many ways to increase the
“effective bandwidth” of a BCI through intelligent interfaces and high level selection.
Second, BCIs are advancing very rapidly. We don’t think a BCI that is as fast as a keyboard is
imminent, but substantial improvements in bandwidth are feasible. Third, some people may
use BCIs even though they are slow because they are attracted to the novel and futuristic
aspect of BCIs. Many research labs have demonstrated that computer games such as Pong,
Tetris, or Pacman can be controlled by BCIs and that rather complex computer applications
like Google Earth can be navigated by BCI. Users could control these systems more
effectively with a keyboard, but may consider a BCI more fun or engaging. Motivated by the
advances in BCI research over the last years, companies have started to consider BCIs as a
possibility to augment human–computer interaction. This interest is underlined by a
number of patents and new products, which are further discussed in the concluding chapter
of this book.
We are especially optimistic about BCIs for new user groups for two reasons. First,
BCIs are becoming more reliable and easier to apply. New users will need a BCI that is
robust, practical, and flexible. All applications should function outside the lab, using only a
minimal number of EEG channels (ideally only one channel), a simple and easy to setup BCI
system, and a stable EEG pattern suitable for online detection. The Graz BCI lab developed
an example of such a system. It uses a specific motor imagery-based BCI designed to detect
the short-lasting ERS pattern in the beta band after imagination of brisk foot movement in a
single EEG channel.
Second, we are optimistic about a new technology called a “hybrid” system, which is
composed of two BCIs, or at least one BCI and another system. One example of such a hybrid
BCI relies on a simple, one-channel ERD BCI to activate the flickering lights of a 4-step SSVEP-
based hand orthosis only when the SSVEP system is needed for control.
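The gating logic of such a hybrid can be sketched as a tiny state machine: the ERD "brain switch" toggles the flicker, and SSVEP commands are only honoured while the flicker is on. The detector outputs and step names below are assumed placeholders, not the actual implementation.

```python
# Sketch of the gating idea behind the hybrid BCI described above: an ERD
# "brain switch" decides whether the SSVEP flicker (and hence SSVEP control
# of the orthosis) is active. Detector outputs are assumed booleans.

ORTHOSIS_STEPS = ["open", "close", "supinate", "pronate"]   # 4-step control

class HybridBCI:
    def __init__(self):
        self.flicker_on = False

    def update(self, erd_detected, ssvep_step=None):
        if erd_detected:
            # The one-channel ERD BCI toggles the flickering lights.
            self.flicker_on = not self.flicker_on
            return f"flicker {'on' if self.flicker_on else 'off'}"
        if self.flicker_on and ssvep_step is not None:
            # SSVEP commands are only accepted while the flicker is active.
            return f"orthosis: {ORTHOSIS_STEPS[ssvep_step]}"
        return "idle"

bci = HybridBCI()
print(bci.update(erd_detected=True))                 # switch flicker on
print(bci.update(erd_detected=False, ssvep_step=1))  # orthosis: close
print(bci.update(erd_detected=True))                 # switch flicker off
```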
9. THE BCI AWARD
The Annual BCI Research Award, endowed with 3,000 USD, is awarded in recognition
of outstanding and innovative research in the field of Brain-Computer Interfaces. Each year,
a renowned research laboratory is asked to judge the submitted projects and to award the
prize. The jury consists of world-leading BCI experts recruited by the awarding laboratory.
The following list contains the winners of the BCI Award:
 2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang, (A*STAR,
Singapore) Motor imagery-based Brain-Computer Interface robotic rehabilitation for
stroke.
 2011: Moritz Grosse-Wentrup and Bernhard Schölkopf, (Max Planck Institute for
Intelligent Systems, Germany) What are the neuro-physiological causes of
performance variations in brain-computer interfacing?
 2012: Surjo R. Soekadar and Niels Birbaumer, (Applied Neurotechnology Lab,
University Hospital Tübingen and Institute of Medical Psychology and Behavioral
Neurobiology, Eberhard Karls University, Tübingen, Germany) Improving Efficacy of
Ipsilesional Brain-Computer Interface Training in Neurorehabilitation of Chronic
Stroke.
 2013: M. C. Dadarlat, J. E. O’Doherty, P. N. Sabes (Department of Physiology,
Center for Integrative Neuroscience, and UC Berkeley-UCSF Bioengineering Graduate
Program, University of California, San Francisco, CA, US),
A learning-based approach to artificial sensory feedback: intracortical
microstimulation replaces and augments vision.
10. LOW-COST BCI-BASED INTERFACES
Recently a number of companies have scaled back medical-grade EEG technology (and in one
case, NeuroSky, rebuilt the technology from the ground up) to create inexpensive BCIs. This
technology has been built into toys and gaming devices; some of these toys have been extremely
commercially successful, like the NeuroSky and Mattel MindFlex.
 In 2006 Sony patented a neural interface system allowing radio waves to affect signals in the
neural cortex.
 In 2007 NeuroSky released the first affordable consumer based EEG along with the game
NeuroBoy. This was also the first large scale EEG device to use dry sensor technology.
 In 2008 OCZ Technology developed a device for use in video games relying primarily on
electromyography.
 In 2008 the Final Fantasy developer Square Enix announced that it was partnering with
NeuroSky to create a game, Judecca.
 In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG
to steer a ball through an obstacle course; it is by far the best-selling consumer-based EEG to
date.
 In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force
Trainer, a game designed to create the illusion of possessing the Force.
 In 2009 Emotiv Systems released the EPOC, a 14-channel EEG device that can read 4 mental
states, 13 conscious states, facial expressions, and head movements. The EPOC is the first
commercial BCI to use dry sensor technology, which can be dampened with a saline solution
for a better connection.
 In November 2011 Time Magazine selected "necomimi" produced by Neurowear as one of
the best inventions of the year. The company announced that it expected to launch a
consumer version of the garment, consisting of cat-like ears controlled by a brain-wave
reader produced by NeuroSky, in spring 2012.
 In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI
system for home use, which can be used to control computer games and apps. It can detect
different brain signals with an accuracy of 99%.[104] g.tec has hosted several workshop
tours to demonstrate the intendiX system and other hardware and software to the public,
such as a g.tec workshop tour of the US West Coast during September 2012.
 In January 2013 Hasaca National University (HNU) announced the first Master's program in
Virtual Reality Brain-Computer Interface application design.
 In February 2014 They Shall Walk (a nonprofit organization fixed on constructing
exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with
James W. Shakarji on the development of a wireless BCI via the production and localization
of a Schumann resonance cavity.
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 

The Evolution of Brain-Computer Interfacing

  • 1. 1. INTRODUCTION For generations, humans have fantasized about the ability to communicate and inter- act with machines through thought alone or to create devices that can peer into per- son’s mind and thoughts.The idea of interfacing minds with machines has long captured the human imagination in the form of ancient myths and modern science fiction stories. Recent advances in neuroscience and engineering are making this a reality, opening the door to restoring and potentially augmenting human physical and mental capabilities. Medical applications such as cochlear implants for the deaf and deep brain stimulation for Parkinson's disease are becoming increasingly commonplace. Brain- computer interfaces (BCIs) are now being explored in applications as diverse as security, lie detection, alertness monitoring, telepresence, gaming, education, art, and human augmentation. Research on BCIs began in the 1970s at the University of California Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s. However, it is only recently that advances in cognitive neuroscience and brain imaging technologies have started to provide us with the ability to interface directly with the human brain. This ability is made possible through the use of sensors that can monitor some of the physical processes that occur within the brain that correspond with certain forms of thought. Primarily driven by growing societal recognition for the needs of people with physical disabilities, researchers have used these technologies to build brain- computer interfaces (BCIs), communication systems that do not depend on the brain’s normal output pathways of peripheral nerves and muscles. In these systems, users explicitly manipulate their brain activity instead of using motor movements to produce signals that can be used to control computers or communication devices. The impact of this work is extremely high, especially to those who suffer from devastating neuromuscular injuries and neurodegenerative diseases such as amy- otrophic lateral sclerosis, which eventually strips individuals of voluntary muscular activity while leaving cognitive function intact. Meanwhile, and largely independent of these efforts, Human-Computer Interac- tion (HCI) researchers continually work to increase the communication bandwidth and quality between humans and computers. They have explored visualizations and multimodal presentations so that computers may use as many sensory channels as possible to send information to a human. Similarly, they have devised hardware and software innovations to
  • 2. increase the information a human can quickly input into the computer. Since we have traditionally interacted with the external world only through our physical bodies, these input mechanisms have mostly required perform- ing some form of motor activity, be it moving a mouse, hitting buttons, using hand gestures, or speaking.Additionally, these researchers have started to consider implicit forms of input, that is, input that is not explicitly performed to direct a computer to do some- thing. In an area of exploration referred to by names such as perceptual com- puting or contextual computing, researchers attempt to infer information about user state and intent by observing their physiology, behavior, or even the envi- ronment in which they operate. Using this information, systems can dynamically adapt themselves in useful ways in order to better support the user in the task at hand. We believe that there exists a large opportunity to bridge the burgeoning research in Brain-Computer Interfaces and Human Computer Interaction, and this book at- tempts to do just that. We believe that BCI researchers would benefit greatly from the body of expertise built in the HCI field as they construct systems that rely solely on interfacing with the brain as the control mechanism. Likewise, BCIs are now mature enough that HCI researchers must add them to our tool belt when designing novel input techniques (especially in environments with constraints on normal motor movement), when measuring traditionally elusive cognitive or emotional phenomena in evaluating our interfaces, or when trying to infer user state to build adaptive systems. Each chapter in this book was selected to present the novice reader with an overview of some aspect of BCI or HCI, and in many cases the union of the two, so that they not only get a flavor of work that currently exists, but are hopefully inspired by the opportunities that remain.
  • 3. 2. THE EVOLUTION OF BRAIN COMPUTER INTERFACING
The evolution of any technology can generally be broken into three phases. The initial phase, or proof-of-concept, demonstrates the basic functionality of a technology. In this phase, even trivially functional systems are impressive and stimulate imagination. They are also sometimes misunderstood and doubted. As an example, when moving pictures were first developed, people were amazed by simple footage shot with stationary cameras of flowers blowing in the wind or waves crashing on the beach. Similarly, when the computer mouse was first invented, people were intrigued by the ability to move a physical device small distances on a tabletop in order to control a pointer in two dimensions on a computer screen. In brain sensing work, this represents the ability to extract any bit of information directly from the brain without utilizing normal muscular channels.
In the second phase, or emulation, the technology is used to mimic existing technologies. The first movies were simply recorded stage plays, and computer mice were used to select from lists of items much as they would have been with the numeric pad on a keyboard. Similarly, early brain-computer interfaces have aimed to emulate the functionality of mice and keyboards, with very few fundamental changes to the interfaces on which they operated. It is in this phase that the technology starts to be driven less by its novelty and starts to interest a wider audience in the science of understanding and developing it more deeply.
Finally, the technology hits the third phase, in which it attains maturity in its own right. In this phase, designers understand and exploit the intricacies of the new technology to build unique experiences that provide us with capabilities never before available. For example, the flashback and crosscut, as well as the "bullet-time" effect introduced more recently by the movie The Matrix, have become well-acknowledged idioms of the medium of film. Although most researchers accept the term "BCI" and its definition, other terms have been used to describe this special form of human–machine interface. Similarly, the mouse has become so well integrated into our notions of computing that it is extremely hard to imagine using current interfaces without such a device attached. It should be noted that in both these cases, more than forty years passed between the introduction of the technology and the widespread development and usage of these methods.
The human-computer interaction field continues to work toward expanding the effective information bandwidth between human and machine, and more importantly to design technologies that integrate seamlessly into our everyday tasks. Specifically, we believe there are several opportunities, though we believe our views are necessarily constrained and hope that this book inspires further crossover and discussion. For example:
- While the BCI community has largely focused on the very difficult mechanics of acquiring data from the brain, HCI researchers could add experience designing interfaces that make the most out of the scanty bits of information they have about the user and their intent. They also bring in a slightly different viewpoint, which may result in interesting innovation on the existing applications of interest. For example, while BCI researchers maintain an admirable focus on providing patients who have lost muscular control with an alternate input device, HCI researchers might complement these efforts by considering the entire locked-in experience, including such factors as preparation, communication, isolation, and awareness.
  • 4. - Beyond the traditional definition of Brain-Computer Interfaces, HCI researchers have already started to push the boundaries of what we can do if we can peer into the user's brain, even if only roughly. Considering how these devices apply to healthy users in addition to the physically disabled, and how adaptive systems may take advantage of them, could push analysis methods as well as application areas.
- The HCI community has also been particularly successful at systematically exploring and creating whole new application areas. In addition to thinking about using technology to fix existing pain points, or to alleviate difficult work, this community has sought scenarios in which technology can augment everyday human life in some way.
  • 5. 3. HISTORY OF BRAIN COMPUTER INTERFACING
Back in the 1960s, controlling devices with brain waves was considered pure science fiction, as wild and fantastic as warp drive and transporters. Although recording brain signals from the human scalp gained some attention in 1929, when the German scientist Hans Berger recorded the electrical brain activity from the human scalp, the required technologies for measuring and processing brain signals, as well as our understanding of brain function, were still too limited. Nowadays, the situation has changed. Neuroscience research over the last decades has led to a much better understanding of the brain. Signal processing algorithms and computing power have advanced so rapidly that complex real-time processing of brain signals does not require expensive or bulky equipment anymore.
The first BCI was described by Dr. Grey Walter in 1964. Ironically, this was shortly before the first Star Trek episode aired. Dr. Walter connected electrodes directly to the motor areas of a patient's brain. (The patient was undergoing surgery for other reasons.) The patient was asked to press a button to advance a slide projector while Dr. Walter recorded the relevant brain activity. Then, Dr. Walter connected the system to the slide projector so that the slide projector advanced whenever the patient's brain activity indicated that he wanted to press the button. Interestingly, Dr. Walter found that he had to introduce a delay from the detection of the brain activity until the slide projector advanced because the slide projector would otherwise advance before the patient pressed the button! Control before the actual movement happens, that is, control without movement – the first BCI!
Unfortunately, Dr. Walter did not publish this major breakthrough. He only presented a talk about it to a group called the Ostler Society in London. There was little progress in BCI research for most of the time since then; BCI research advanced slowly for many more years. By the turn of the century, there were only one or two dozen labs doing serious BCI research. However, BCI research developed quickly after that, particularly during the last few years. Every year, there are more BCI-related papers, conference talks, products, and media articles. There are at least 100 BCI research groups active today, and this number is growing. More importantly, BCI research has succeeded in its initial goal: proving that BCIs can work with patients who need a BCI to communicate. Indeed, BCI researchers have used many different kinds of BCIs with several different patients. Furthermore, BCIs are moving beyond communication tools for people who cannot otherwise communicate. BCIs are gaining attention for healthy users and new goals such as rehabilitation or hands-free gaming. BCIs are not science fiction anymore.
On the other hand, BCIs are far from mainstream tools. Most people today still do not know that BCIs are even possible. There are still many practical challenges before a typical person can use a BCI without expert help. There is a long way to go from providing communication for some specific patients, with considerable expert help, to providing a range of functions for any user without help.
The history of the field starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG.
  • 6. Berger was able to identify oscillatory activity in the brain by analyzing EEG traces. One wave he identified was the alpha wave (8–13 Hz), also known as Berger's wave.
Berger's first recording device was very rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patients' heads by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. More sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten-thousandth of a volt, led to success. Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. EEGs opened up completely new possibilities for research on human brain activity.
  • 7. 4. BRAIN COMPUTER INTERFACING VERSUS NEUROPROSTHETICS
Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, with using artificial devices to replace the function of impaired parts of the nervous system or of sensory organs, or to address brain-related problems. The most widely used neuroprosthetic device is the cochlear implant which, as of December 2010, had been implanted in approximately 220,000 people worldwide. There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.
The difference between BCIs and neuroprosthetics is mostly in how the terms are used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs usually connect the brain (or nervous system) with a computer system. A BCI is an artificial output channel, a direct interface from the brain to a computer or machine, which can accept voluntary commands directly from the brain without requiring physical movements. Practical neuroprosthetics can be linked to any part of the nervous system, for example to peripheral nerves, while the term "BCI" usually designates a narrower class of systems which interface with the central nervous system. The terms are sometimes, however, used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, the ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.
Any natural form of communication or control requires peripheral nerves and muscles. The process begins with the user's intent. This intent triggers a complex process in which certain brain areas are activated, and hence signals are sent via the peripheral nervous system (specifically, the motor pathways) to the corresponding muscles, which in turn perform the movement necessary for the communication or control task. The activity resulting from this process is often called motor output or efferent output. Efferent means conveying impulses from the central to the peripheral nervous system and further to an effector (muscle). Afferent, in contrast, describes communication in the other direction, from the sensory receptors to the central nervous system. For motion control, the motor (efferent) pathway is essential. The sensory (afferent) pathway is particularly important for learning motor skills and dexterous tasks, such as typing or playing a musical instrument.
A BCI offers an alternative to natural communication and control. A BCI is an artificial system that bypasses the body's normal efferent pathways, which are the neuromuscular output channels. Instead of depending on peripheral nerves and muscles, a BCI directly measures brain activity associated with the user's intent and translates the recorded brain activity into corresponding control signals for BCI applications. This translation involves signal processing and pattern recognition, which is typically done by a computer. Since the measured activity originates directly from the brain and not from the peripheral systems or muscles, the system is called a Brain–Computer Interface.
A BCI must have four components. It must record activity directly from the brain (invasively or non-invasively). It must provide feedback to the user, and it must do so in real time. Finally, the system must rely on intentional control. That is, the user must choose to perform a mental task whenever s/he wants to accomplish a goal with the BCI.
  • 8. Devices that only passively detect changes in brain activity that occur without any intent, such as EEG activity associated with workload, arousal, or sleep, are not BCIs.
"Neuroprosthesis," however, is a more general term. Neuroprostheses (also called neural prostheses) are devices that can not only receive output from the nervous system, but can also provide input. Moreover, they can interact with the peripheral and the central nervous systems. BCIs are a special category of neuroprostheses.
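Read together, these components describe a closed loop: record brain activity, translate it into a command in real time, act on it only when it reflects an intentional mental task, and feed the result back to the user. The following Python sketch is only a schematic illustration of that loop; the sampling rate, the simulated signals, and the acquire_window/decode_intent/present_feedback functions are hypothetical placeholders rather than any real BCI API.

```python
import time
import numpy as np

FS = 250            # assumed sampling rate in Hz
WINDOW_SEC = 1.0    # length of each analysis window in seconds
N_CHANNELS = 8      # assumed number of electrodes

def acquire_window(rng):
    """Stand-in for an amplifier driver: simulate one window of multi-channel EEG."""
    n_samples = int(FS * WINDOW_SEC)
    return rng.normal(0.0, 10e-6, size=(N_CHANNELS, n_samples))   # ~10 uV noise

def decode_intent(signals, rng):
    """Stand-in for signal processing + pattern recognition; returns a command or None."""
    commands = ["left", "right", "up", "down", "select"]
    if rng.random() < 0.3:                     # no recognizable intentional pattern
        return None
    return commands[rng.integers(len(commands))]

def present_feedback(command):
    """Stand-in for the application/feedback stage (cursor, speller, wheelchair...)."""
    print("decoded command:", command)

def bci_loop(n_iterations=5, seed=0):
    """Record -> translate -> act -> feed back, repeated in (roughly) real time."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iterations):
        start = time.time()
        signals = acquire_window(rng)          # record activity directly from the brain
        command = decode_intent(signals, rng)  # detect/classify the intended mental task
        if command is not None:                # act only on recognized intentional patterns
            present_feedback(command)          # feedback closes the loop for the user
        time.sleep(max(0.0, WINDOW_SEC - (time.time() - start)))

if __name__ == "__main__":
    bci_loop()
```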
  • 9. 5. BCI PERFORMANCE
The performance of a BCI can be measured in various ways. A simple measure is classification performance (also termed classification accuracy or classification rate). It is the ratio of the number of correctly classified trials (successful attempts to perform the required mental tasks) and the total number of trials. The error rate is also easy to calculate, since it is just the ratio of incorrectly classified trials and the total number of trials.
Although classification or error rates are easy to calculate, application-dependent measures are often more meaningful. For instance, in a mental typewriting application the user is supposed to write a particular sentence by performing a sequence of mental tasks. Again, classification performance could be calculated, but the number of letters per minute the users can convey is a more appropriate measure. Letters per minute is an application-dependent measure that assesses (indirectly) not only the classification performance but also the time that was necessary to perform the required tasks.
A more general performance measure is the so-called information transfer rate (ITR). It depends on the number of different brain patterns (classes) used, the time the BCI needs to classify these brain patterns, and the classification accuracy. ITR is measured in bits per minute. Since ITR depends on the number of brain patterns that can be reliably and quickly detected and classified by a BCI, the information transfer rate depends on the mental strategy employed. Typically, BCIs with selective attention strategies have higher ITRs than those using, for instance, motor imagery. A major reason is that BCIs based on selective attention usually provide a larger number of classes (e.g. the number of light sources). Motor imagery, for instance, is typically restricted to four or fewer motor imagery tasks. More imagery tasks are possible, but often only at the expense of decreased classification accuracy, which in turn would decrease the information transfer rate as well.
There are a few papers that report BCIs with a high ITR, ranging from 30 bits/min to slightly above 60 bits/min and, most recently, over 90 bits per minute. Such performance, however, is not typical for most users in real-world settings. In fact, these record values are often obtained under laboratory conditions by individual healthy subjects who are the top performing subjects in a lab. In addition, high ITRs are usually reported when people only use a BCI for short periods. Of course, it is interesting to push the limits and learn the best possible performance of current BCI technology, but it is no less important to estimate realistic performance in more practical settings. Unfortunately, there is currently no study available that investigates the average information transfer rate for various BCI systems over a larger user population and over a longer time period so that a general estimate of average BCI performance can be derived. The closest such study is the excellent work by Kübler and Birbaumer. Furthermore, a minority of subjects exhibit little or no control. The reason is not clear, but even long sustained training cannot improve performance for those subjects. In any case, a BCI provides an alternative communication channel, but this channel is slow.
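To make these measures concrete, the sketch below computes bits per selection from the number of classes and the classification accuracy using the widely cited Wolpaw formula, and converts it to bits per minute using the time needed per selection. The example figures (a 4-class system at 80% accuracy, a 36-target speller at 90%) are purely illustrative and are not taken from the studies mentioned above.

```python
import math

def classification_accuracy(n_correct, n_trials):
    """Classification accuracy; the error rate is simply 1 - accuracy."""
    return n_correct / n_trials

def bits_per_selection(n_classes, accuracy):
    """Wolpaw estimate of information per selection (in bits).

    Assumes equally likely classes and errors spread evenly over the
    remaining classes.
    """
    n, p = n_classes, accuracy
    if p >= 1.0:
        return math.log2(n)
    if p <= 1.0 / n:
        return 0.0
    return math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))

def itr_bits_per_minute(n_classes, accuracy, seconds_per_selection):
    """ITR in bits/min: bits per selection times selections per minute."""
    return bits_per_selection(n_classes, accuracy) * (60.0 / seconds_per_selection)

# Purely illustrative numbers:
acc = classification_accuracy(160, 200)                # 160 of 200 trials correct -> 0.8
print(round(itr_bits_per_minute(4, acc, 4.0), 1))      # 4 classes, 80%, 4 s  -> ~14.4 bits/min
print(round(itr_bits_per_minute(36, 0.90, 5.0), 1))    # 36 targets, 90%, 5 s -> ~50.3 bits/min
```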
  • 10. It certainly does not provide high-speed interaction. It cannot compete with natural communication (such as speaking or writing) or traditional man-machine interfaces in terms of ITR. However, it has important applications for the most severely disabled. There are also new emerging applications for less severely disabled or even healthy people, as detailed in the next section.
BCIs measure brain activity, process it, and produce control signals that reflect the user's intent. To understand BCI operation better, one has to understand how brain activity can be measured and which brain signals can be utilized. Brain activity produces electrical and magnetic activity. Therefore, sensors can detect different types of changes in electrical or magnetic activity, at different times over different areas of the brain, to study brain activity. Most BCIs rely on electrical measures of brain activity, and rely on sensors placed over the head to measure this activity.
Electroencephalography (EEG) refers to recording electrical activity from the scalp with electrodes. It is a very well established method, which has been used in clinical and research settings for decades. Temporal resolution, meaning the ability to detect changes within a certain time interval, is very good. However, the EEG is not without disadvantages: the spatial (topographic) resolution and the frequency range are limited. The EEG is susceptible to so-called artifacts, which are contaminations in the EEG caused by other electrical activities. Examples are bioelectrical activities caused by eye movements or eye blinks (electrooculographic activity, EOG) and from muscles (electromyographic activity, EMG) close to the recording sites. External electromagnetic sources such as the power line can also contaminate the EEG.
Furthermore, although the EEG is not very technically demanding, the setup procedure can be cumbersome. To achieve adequate signal quality, the skin areas that are contacted by the electrodes have to be carefully prepared with special abrasive electrode gel. Because gel is required, these electrodes are also called wet electrodes. The number of electrodes required by current BCI systems ranges from only a few to more than 100 electrodes. Most groups try to minimize the number of electrodes to reduce setup time and hassle. Since electrode gel can dry out and wearing the EEG cap with electrodes is not convenient or fashionable, the setting-up procedure usually has to be repeated before each session of BCI use. From a practical viewpoint, this is one of the largest drawbacks of EEG-based BCIs. A possible solution is a technology called dry electrodes. Dry electrodes require neither skin preparation nor electrode gel. This technology is currently being researched, but a practical solution that can provide signal quality comparable to wet electrodes is not in sight at the moment.
A BCI analyzes ongoing brain activity for brain patterns that originate from specific brain areas. To get consistent recordings from specific regions of the head, scientists rely on a standard system for accurately placing electrodes, which is called the International 10–20 System. It is widely used in clinical EEG recording and EEG research as well as BCI research.
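As a small illustration of how the contaminations mentioned above are commonly attenuated before further processing, the sketch below applies a notch filter at the power-line frequency and a band-pass filter to a simulated single-channel EEG trace using SciPy. The sampling rate, cut-off frequencies, and simulated signal are assumptions chosen for the example, not recommended settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

FS = 250.0  # assumed sampling rate in Hz

def clean_eeg(raw, fs=FS, line_freq=50.0, band=(1.0, 40.0)):
    """Suppress power-line interference, slow drifts, and high-frequency noise."""
    # Notch filter around the power-line frequency (use 60 Hz where applicable).
    b_notch, a_notch = iirnotch(w0=line_freq, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    # Zero-phase Butterworth band-pass keeping the EEG bands of interest.
    b_bp, a_bp = butter(4, band, btype="bandpass", fs=fs)
    return filtfilt(b_bp, a_bp, x)

if __name__ == "__main__":
    t = np.arange(0, 5.0, 1.0 / FS)
    alpha = 10e-6 * np.sin(2 * np.pi * 10.0 * t)    # 10 Hz "alpha" component
    mains = 30e-6 * np.sin(2 * np.pi * 50.0 * t)    # simulated power-line interference
    drift = 50e-6 * t                               # simulated slow electrode drift
    noisy = alpha + mains + drift + 5e-6 * np.random.randn(t.size)
    cleaned = clean_eeg(noisy)
    print("std before: %.1f uV, after: %.1f uV" % (noisy.std() * 1e6, cleaned.std() * 1e6))
```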
  • 11. Contrary to popular simplifications, the brain is not a general-purpose computer with a unified central processor. Rather, it is a complex assemblage of competing sub-systems, each highly specialized for particular tasks (Carey 2002). By studying the effects of brain injuries and, more recently, by using new brain imaging technologies, neuroscientists have built detailed topographical maps associating different parts of the physical brain with distinct cognitive functions.
The brain can be roughly divided into two main parts: the cerebral cortex and sub-cortical regions. Sub-cortical regions are phylogenetically older and include areas associated with controlling basic functions, including vital functions such as respiration, heart rate, and temperature regulation, basic emotional and instinctive responses such as fear and reward, reflexes, as well as learning and memory. The cerebral cortex is evolutionarily much newer. Since this is the largest and most complex part of the human brain, this is usually the part of the brain people notice in pictures. The cortex supports most sensory and motor processing as well as "higher" level functions including reasoning, planning, language processing, and pattern recognition. This is the region that current BCI work has largely focused on.
GEOGRAPHY OF THOUGHT
The cerebral cortex is split into two hemispheres that often have very different functions. For instance, most language functions lie primarily in the left hemisphere, while the right hemisphere controls many abstract and spatial reasoning skills. Also, most motor and sensory signals to and from the brain cross hemispheres, meaning that the right brain senses and controls the left side of the body and vice versa. The brain can be further divided into separate regions specialized for different functions. For example, occipital regions at the very back of the head are largely devoted to processing of visual information. Areas in the temporal regions, roughly along the sides and lower areas of the cortex, are involved in memory, pattern matching, language processing and auditory processing. Still other areas of the cortex are devoted to diverse functions such as spatial representation and processing, attention orienting, arithmetic, voluntary muscle movement, planning, reasoning and even enigmatic aspects of human behavior such as moral sense and ambition. We should emphasize that our understanding of brain structure and activity is still fairly shallow. These topographical maps are not definitive assignments of location to function. In fact, some areas process multiple functions, and many functions are processed in more than one area.
BRAIN IMAGING TECHNOLOGIES
There are two general classes of brain imaging technologies: invasive technologies, in which sensors are implanted directly on or in the brain, and non-invasive technologies, which measure brain activity using external sensors. Although invasive technologies provide high temporal and spatial resolution, they usually cover only very small regions of the brain.
  • 12. Additionally, these techniques require surgical procedures that often lead to medical complications as the body adapts, or does not adapt, to the implants.
(Figure: the International 10–20 system)
Furthermore, once implanted, these technologies cannot be moved to measure different regions of the brain. While many researchers are experimenting with such implants, we will not review this research in detail as we believe these techniques are unsuitable for human-computer interaction work and general consumer use.
MEASURING BRAIN ACTIVITY
The techniques discussed in the last section are all non-invasive recording techniques. That is, there is no need to perform surgery or even break the skin. In contrast, invasive recording methods require surgery to implant the necessary sensors.
  • 13. This surgery includes opening the skull through a surgical procedure called a craniotomy and cutting the membranes that cover the brain. When the electrodes are placed on the surface of the cortex, the signal recorded from these electrodes is called the electrocorticogram (ECoG). ECoG does not damage any neurons because no electrodes penetrate the brain. The signal recorded from electrodes that penetrate brain tissue is called intracortical recording.
(Figure: EEG, ECoG, and intracortical recordings as three different ways to detect the brain's electrical activity)
Invasive recording techniques combine excellent signal quality, very good spatial resolution, and a higher frequency range. Artifacts are less problematic with invasive recordings. Further, the cumbersome application and re-application of electrodes as described above is unnecessary for invasive approaches. Intracortical electrodes can record the neural activity of a single brain cell or small assemblies of brain cells. The ECoG records the integrated activity of a much larger number of neurons that are in the proximity of the ECoG electrodes. However, any invasive technique has better spatial resolution than the EEG.
Clearly, invasive methods have some advantages over non-invasive methods. However, these advantages come with the serious drawback of requiring surgery. Ethical, financial, and other considerations make neurosurgery impractical except for some users who need a BCI to communicate. Even then, some of these users may find that a noninvasive BCI meets their needs. It is also unclear whether both ECoG and intracortical recordings can provide safe and stable recording over years. Long-term stability may be especially problematic in the case of intracortical recordings. Electrodes implanted into the cortical tissue can cause tissue reactions that lead to deteriorating signal quality or even complete electrode failure. Research on invasive BCIs is difficult because of the cost and risk of neurosurgery. For ethical reasons, some invasive research efforts rely on patients who undergo neurosurgery for other reasons, such as treatment of epilepsy. Studies with these patients can be very informative, but it is impossible to study the effects of training and long-term use because these patients typically have an ECoG system for only a few days before it is removed.
  • 14. ELECTROENCEPHALOGRAPHY (EEG)
EEG uses electrodes placed directly on the scalp to measure the weak (5–100 μV) electrical potentials generated by activity in the brain. Because of the fluid, bone, and skin that separate the electrodes from the actual electrical activity, signals tend to be smoothed and rather noisy. Hence, while EEG measurements have good temporal resolution with delays in the tens of milliseconds, spatial resolution tends to be poor, ranging about 2–3 cm accuracy at best, but usually worse. Two centimeters on the cerebral cortex could be the difference between inferring that the user is listening to music and inferring that they are in fact moving their hands.
Electroencephalography (EEG) is the most studied potential non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost. The technology is highly susceptible to noise, however. Another substantial barrier to using EEG as a brain–computer interface is the extensive training required before users can work the technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained severely paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor. The experiment saw ten patients trained to move a computer cursor by controlling their brainwaves. The process was slow, requiring more than an hour for patients to write 100 characters with the cursor, while training often took many months.
(Figure: electroencephalogram)
Another research parameter is the type of oscillatory activity that is measured. Birbaumer's later research with Jonathan Wolpaw at New York State University has focused on developing technology that would allow users to choose the brain signals they found easiest to operate a BCI, including mu and beta rhythms.
  • 15. A further parameter is the method of feedback used, and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily when people see something they recognize, and may allow BCIs to decode categories of thoughts without training patients first. By contrast, the biofeedback methods described above require learning to control brainwaves so the resulting brain activity can be detected.
Lawrence Farwell and Emanuel Donchin developed an EEG-based brain–computer interface in the 1980s. Their "mental prosthesis" used the P300 brainwave response to allow subjects, including one paralyzed Locked-In syndrome patient, to communicate words, letters and simple commands to a computer and thereby to speak through a speech synthesizer driven by the computer. A number of similar devices have been developed since then. In 2000, for example, research by Jessica Bayliss at the University of Rochester showed that volunteers wearing virtual reality helmets could control elements in a virtual world using their P300 EEG readings, including turning lights on and off and bringing a mock-up car to a stop.
While an EEG-based brain-computer interface has been pursued extensively by a number of research labs, recent advancements made by Bin He and his team at the University of Minnesota suggest the potential of an EEG-based brain-computer interface to accomplish tasks close to those of an invasive brain-computer interface. Using advanced functional neuroimaging including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified the co-variation and co-localization of electrophysiological and hemodynamic signals induced by motor imagination. Refined by a neuroimaging approach and by a training protocol, Bin He and co-workers demonstrated the ability of a non-invasive EEG-based brain-computer interface to control the flight of a virtual helicopter in 3-dimensional space, based upon motor imagination. In June 2013 it was announced that Bin He had developed the technique to enable a remote-control helicopter to be guided through an obstacle course. In addition to a brain-computer interface based on brain waves, as recorded from scalp EEG electrodes, Bin He and co-workers explored a virtual EEG signal-based brain-computer interface by first solving the EEG inverse problem and then using the resulting virtual EEG for brain-computer interface tasks. Well-controlled studies suggested the merits of such a source-analysis-based brain-computer interface.
DRY ACTIVE ELECTRODE ARRAYS
In the early 1990s Babak Taheri, at the University of California, Davis, demonstrated the first single and also multichannel dry active electrode arrays using micro-machining. The single-channel dry EEG electrode construction and results were published in 1994. The arrayed electrode was also demonstrated to perform well compared to Silver/Silver Chloride electrodes. The device consisted of four sites of sensors with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are:
I. No electrolyte used,
II. No skin preparation,
III. Significantly reduced sensor size, and
IV. Compatibility with EEG monitoring systems.
  • 16. The active electrode array is an integrated system made of an array of capacitive sensors with local integrated circuitry housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode. The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely:
I. Spontaneous EEG,
II. Sensory event-related potentials,
III. Brain stem potentials, and
IV. Cognitive event-related potentials.
The performance of the dry electrode compared favorably with that of the standard wet electrodes in terms of skin preparation, no gel requirements, and a higher signal-to-noise ratio.
In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was set to on, below-average to off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
PROSTHESIS CONTROL
Non-invasive BCIs have also been applied to enable brain-control of prosthetic upper and lower extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper extremity movements in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that it is possible to use BCI technology to restore brain-controlled walking after spinal cord injury. In their study, a person with paraplegia due to spinal cord injury was able to operate a BCI-robotic gait orthosis to regain basic brain-controlled ambulation.
  • 17. MENTAL STRATEGIES AND BRAIN PATTERNS
Measuring brain activity effectively is a critical first step for brain–computer communication. However, measuring activity is not enough, because a BCI cannot read the mind or decipher thoughts in general. A BCI can only detect and classify specific patterns of activity in the ongoing brain signals that are associated with specific tasks or events. What the BCI user has to do to produce these patterns is determined by the mental strategy the BCI system employs. The mental strategy is the foundation of any brain–computer communication. The mental strategy determines what the user has to do to volitionally produce brain patterns that the BCI can interpret. The mental strategy also sets certain constraints on the hardware and software of a BCI, such as the signal processing techniques to be employed later. The amount of training required to successfully use a BCI also depends on the mental strategy. The most common mental strategies are selective attention and motor imagery. In the following, we briefly explain these different BCIs. More detailed information about different BCI approaches and associated brain signals can be found in the chapter "Brain Signals for Brain–Computer Interfaces".
SELECTIVE ATTENTION
BCIs based on selective attention require external stimuli provided by the BCI system. The stimuli can be auditory or somatosensory. Most BCIs, however, are based on visual stimuli. That is, the stimuli could be different tones, different tactile stimulations, or flashing lights with different frequencies. In a typical BCI setting, each stimulus is associated with a command that controls the BCI application. In order to select a command, the users have to focus their attention on the corresponding stimulus. Let's consider an example of a navigation/selection application, in which we want to move a cursor to items on a computer screen and then we want to select them. A BCI based on selective attention could rely on five stimuli. Four stimuli are associated with the commands for cursor movement: left, right, up, and down. The fifth stimulus is for the select command. This system would allow two-dimensional navigation and selection on a computer screen. Users operate this BCI by focusing their attention on the stimulus that is associated with the intended command. Assume the user wants to select an item on the computer screen which is one position above and left of the current cursor position. The user would first need to focus on the stimulus that is associated with the up command, then on the one for the left command, then select the item by focusing on the stimulus associated with the select command. The items could represent a wide variety of desired messages or commands, such as letters, words, and instructions to move a wheelchair or robot arm, or signals to control a smart home. A 5-choice BCI like this could be based on visual stimuli. In fact, visual attention can be implemented with two different BCI approaches, which rely on somewhat different stimuli, mental strategies, and signal processing. These approaches are named after the brain patterns they produce, which are called P300 potentials and steady-state visual evoked potentials (SSVEP).
  • 18. The BCIs employing these brain patterns are called P300 BCI and SSVEP BCI, respectively.
A P300-based BCI relies on stimuli that flash in succession. These stimuli are usually letters, but can be symbols that represent other goals, such as controlling a cursor, robot arm, or mobile robot. Selective attention to a specific flashing symbol/letter elicits a brain pattern called P300, which develops in centro-parietal brain areas about 300 ms after the presentation of the stimulus. The BCI can detect this P300. The BCI can then determine the symbol/letter that the user intends to select.
Like a P300 BCI, an SSVEP-based BCI requires a number of visual stimuli. Each stimulus is associated with a specific command, which is associated with an output the BCI can produce. In contrast to the P300 approach, however, these stimuli do not flash successively, but flicker continuously with different frequencies in the range of about 6–30 Hz. Paying attention to one of the flickering stimuli elicits an SSVEP in the visual cortex that has the same frequency as the target flicker. That is, if the targeted stimulus flickers at 16 Hz, the resulting SSVEP will also flicker at 16 Hz. Therefore, an SSVEP BCI can determine which stimulus occupies the user's attention by looking for SSVEP activity in the visual cortex at a specific frequency. The BCI knows the flickering frequencies of all light sources, and when an SSVEP is detected, it can determine the corresponding light source and its associated command.
BCI approaches using selective attention are quite reliable across different users and usage sessions, and can allow fairly rapid communication. Moreover, these approaches do not require significant training. Users can produce P300s and SSVEPs without any training at all. Almost all subjects can learn the simple task of focusing on a target letter or symbol within a few minutes. Many types of P300 and SSVEP BCIs have been developed. For example, the task described above, in which users move a cursor to a target and then select it, has been validated with both P300 and SSVEP BCIs. One drawback of both P300 and SSVEP BCIs is that they may require the user to shift gaze. This is relevant because completely locked-in patients are not able to shift gaze anymore. Although P300 and SSVEP BCIs without gaze shifting are possible as well, BCIs that rely on visual attention seem to work best when users can shift gaze. Another concern is that some people may dislike the external stimuli that are necessary to elicit P300 or SSVEP activity.
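A rough sketch of the SSVEP principle described above: estimate the spectral power of an (assumed occipital) EEG segment at each candidate flicker frequency and pick the strongest one as the attended target. The frequencies, the frequency-to-command mapping, and the simulated signal are illustrative assumptions only; practical systems typically use more robust detectors.

```python
import numpy as np

FS = 250.0  # assumed sampling rate in Hz
# Hypothetical mapping from flicker frequency (Hz) to command.
TARGETS = {8.0: "left", 10.0: "right", 12.0: "up", 15.0: "down"}

def band_power_at(signal, fs, freq, bw=0.5):
    """Average FFT power of a 1-D signal in a narrow band (freq +/- bw)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= freq - bw) & (freqs <= freq + bw)
    return spectrum[mask].mean()

def classify_ssvep(signal, fs=FS, targets=TARGETS):
    """Return the command whose flicker frequency shows the most power."""
    scores = {cmd: band_power_at(signal, fs, f) for f, cmd in targets.items()}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Simulate 4 s of occipital EEG while the user attends the 12 Hz stimulus.
    t = np.arange(0, 4.0, 1.0 / FS)
    eeg = 2e-6 * np.sin(2 * np.pi * 12.0 * t) + 1e-6 * np.random.randn(t.size)
    print(classify_ssvep(eeg))  # expected: "up"
```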
  • 19. MOTOR IMAGERY
Moving a limb or even contracting a single muscle changes brain activity in the cortex. In fact, even the preparation of movement or the imagination of movement changes the so-called sensorimotor rhythms. Sensorimotor rhythms (SMR) refer to oscillations in brain activity recorded from somatosensory and motor areas. Brain oscillations are typically categorized according to specific frequency bands which are named after Greek letters (delta: < 4 Hz, theta: 4–7 Hz, alpha: 8–12 Hz, beta: 12–30 Hz, gamma: > 30 Hz). Alpha activity recorded from sensorimotor areas is also called mu activity. The decrease of oscillatory activity in a specific frequency band is called event-related desynchronization (ERD). Correspondingly, the increase of oscillatory activity in a specific frequency band is called event-related synchronization (ERS). ERD/ERS patterns can be volitionally produced by motor imagery, which is the imagination of movement without actually performing the movement. The frequency bands that are most important for motor imagery are mu and beta in EEG signals. Invasive BCIs often also use gamma activity, which is hard to detect with electrodes mounted outside the head.
In contrast to BCIs based on selective attention, BCIs based on motor imagery do not depend on external stimuli. However, motor imagery is a skill that has to be learned. BCIs based on motor imagery usually do not work very well during the first session. Instead, unlike BCIs based on selective attention, some training is necessary. While performance and training time vary across subjects, most subjects can attain good control in a 2-choice task with 1–4 h of training. However, longer training is often necessary to achieve sufficient control. Therefore, training is an important component of many BCIs. Users learn through a process called operant conditioning, which is a fundamental term in psychology. In operant conditioning, people learn to associate a certain action with a response or effect. For example, people learn that touching a hot stove is painful, and never do it again. In a BCI, a user who wants to move the cursor up may learn that mentally visualizing a certain motor task such as clenching one's fist is less effective than thinking about the kinaesthetic experience of such a movement.
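To make the ERD/ERS measures introduced above concrete: they are commonly expressed as the percentage change of band power during a task interval relative to a preceding rest (reference) interval, with negative values indicating ERD and positive values ERS. The sketch below follows that convention with an assumed 8–12 Hz (mu) band and simulated data; the band edges and signals are illustrative assumptions only.

```python
import numpy as np

FS = 250.0  # assumed sampling rate in Hz

def band_power(signal, fs, band):
    """FFT-based power of a 1-D signal within a frequency band (low, high) in Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def erd_percent(rest, task, fs=FS, band=(8.0, 12.0)):
    """Relative band-power change: (A - R) / R * 100.

    Negative values = event-related desynchronization (ERD),
    positive values = event-related synchronization (ERS).
    """
    r = band_power(rest, fs, band)
    a = band_power(task, fs, band)
    return (a - r) / r * 100.0

if __name__ == "__main__":
    t = np.arange(0, 2.0, 1.0 / FS)
    rng = np.random.default_rng(1)
    # Simulated mu rhythm that is strong at rest and attenuated during motor imagery.
    rest = 5e-6 * np.sin(2 * np.pi * 10.0 * t) + 1e-6 * rng.standard_normal(t.size)
    task = 2e-6 * np.sin(2 * np.pi * 10.0 * t) + 1e-6 * rng.standard_normal(t.size)
    print("%.0f%% change in the 8-12 Hz band" % erd_percent(rest, task))  # roughly -84%
```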
  • 20. BCI learning is a special case of operant conditioning because the user is not performing an action in the classical sense, since s/he does not move. Nonetheless, if imagined actions produce effects, then conditioning can still occur. During BCI use, operant conditioning involves training with feedback that is usually presented on a computer screen. Positive feedback indicates that the brain signals are modulated in a desired way. Negative or no feedback is given when the user was not able to perform the desired task. This type of feedback is called neurofeedback. The feedback indicates whether the user performed the mental task well or failed to achieve the desired goal through the BCI. Users can utilize this feedback to optimize their mental tasks and improve BCI performance. The feedback can be tactile or auditory, but most often it is visual. The chapter "Neurofeedback Training for BCI Control" in this book presents more details about neurofeedback and its importance in BCI research.
SIGNAL PROCESSING
A BCI measures brain signals and processes them in real time to detect certain patterns that reflect the user's intent. This signal processing can have three stages: preprocessing, feature extraction, and detection and classification.
Preprocessing aims at simplifying subsequent processing operations without losing relevant information. An important goal of preprocessing is to improve signal quality by improving the so-called signal-to-noise ratio (SNR). A bad or small SNR means that the brain patterns are buried in the rest of the signal, which makes relevant patterns hard to detect. A good or large SNR, on the other hand, simplifies the BCI's detection and classification task. Transformations combined with filtering techniques are often employed during preprocessing in a BCI. Scientists use these techniques to transform the signals so unwanted signal components can be eliminated or at least reduced. These techniques can improve the SNR.
The brain patterns used in BCIs are characterized by certain features or properties. For instance, amplitudes and frequencies are essential features of sensorimotor rhythms and SSVEPs. The firing rate of individual neurons is an important feature of invasive BCIs using intracortical recordings.
  • 21. The feature extraction algorithms of a BCI calculate (extract) these features. Feature extraction can be seen as another step in preparing the signals to facilitate the subsequent and last signal processing stage, detection and classification.
Detection and classification of brain patterns is the core signal processing task in BCIs. The user elicits certain brain patterns by performing mental tasks according to mental strategies, and the BCI detects and classifies these patterns and translates them into appropriate commands for BCI applications.
This detection and classification process can be simplified when the user communicates with the BCI only in well-defined time frames. Such a time frame is indicated by the BCI by visual or acoustic cues. For example, a beep informs the user that s/he could send a command during the upcoming time frame, which might last 2–6 s. During this time, the user is supposed to perform a specific mental task. The BCI tries to classify the brain signals recorded in this time frame. This type of BCI does not consider the possibility that the user does not wish to communicate anything during one of these time frames, or that s/he wants to communicate outside of a specified time frame. This mode of operation is called synchronous or cue-paced. Correspondingly, a BCI employing this mode of operation is called a synchronous BCI or a cue-paced BCI. Although these BCIs are relatively easy to develop and use, they are impractical in many real-world settings. A cue-paced BCI is somewhat like a keyboard that can only be used at certain times.
In an asynchronous or self-paced BCI, users can interact with a BCI at their leisure, without worrying about well-defined time frames. Users may send a signal, or choose not to use a BCI, whenever they want. Therefore, asynchronous BCIs or self-paced BCIs have to analyse the brain signals continuously. This mode of operation is technically more demanding, but it offers a more natural and convenient form of interaction with a BCI.
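The sketch below ties the three stages together for a toy two-class motor imagery problem: log band-power features in the mu and beta bands are extracted from simulated two-channel trials, and a linear discriminant classifier (scikit-learn's LinearDiscriminantAnalysis, chosen here only as one common option) performs the detection and classification step. All data are simulated, so this illustrates the shape of the pipeline rather than a working BCI.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 250.0
BANDS = [(8.0, 12.0), (12.0, 30.0)]  # mu and beta bands

def log_band_powers(trial, fs=FS, bands=BANDS):
    """Feature extraction: log band power per channel and band for one trial (channels x samples)."""
    freqs = np.fft.rfftfreq(trial.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(trial, axis=1)) ** 2
    feats = []
    for low, high in bands:
        mask = (freqs >= low) & (freqs <= high)
        feats.extend(np.log(spectrum[:, mask].mean(axis=1)))
    return np.array(feats)

def simulate_trial(klass, rng, n_channels=2, seconds=2.0):
    """Toy data: class 1 attenuates the 10 Hz rhythm on channel 0 (an ERD-like effect)."""
    t = np.arange(0, seconds, 1.0 / FS)
    x = 1e-6 * rng.standard_normal((n_channels, t.size))
    amp = 2e-6 if klass == 1 else 6e-6
    x[0] += amp * np.sin(2 * np.pi * 10.0 * t)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.array([0, 1] * 40)
    X = np.array([log_band_powers(simulate_trial(k, rng)) for k in y])
    clf = LinearDiscriminantAnalysis().fit(X[:60], y[:60])   # training trials
    print("held-out accuracy:", clf.score(X[60:], y[60:]))   # detection/classification stage
```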
6. ANIMAL BCI RESEARCH

Several laboratories have managed to record signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have navigated computer cursors on screen and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the visual feedback, but without any motor output. In May 2008, photographs showing a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of well-known science journals and magazines. Other research on cats has decoded their neural visual signals.

In 1969, the operant conditioning studies of Fetz and colleagues, at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.

Studies that developed algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms. He also found that dispersed groups of neurons, in different areas of the monkeys' brains, collectively controlled motor commands, but was able to record the firings of neurons in only one area at a time because of the technical limitations imposed by his equipment.
There has been rapid development in BCIs since the mid-1990s. Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices. Phillip Kennedy (who founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.

In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats, recorded with a BCI implanted in the lateral geniculate nucleus. The team used an array of electrodes embedded in the thalamus, which integrates all of the brain's sensory input, of sharp-eyed cats. The researchers targeted 177 brain cells in the thalamus' lateral geniculate nucleus, the area which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. Similar results in humans have since been achieved by researchers in Japan.

Miguel Nicolelis, a professor at Duke University in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI. Such neural ensembles are said to reduce the variability in output produced by single electrodes, which could make it difficult to operate a BCI. After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work. By 2000 the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely over Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.
Diagram of the BCI developed by Miguel Nicolelis and colleagues for use on rhesus monkeys

Later experiments by Nicolelis using rhesus monkeys succeeded in closing the feedback loop and reproduced monkey reaching and grasping movements in a robot arm. With their deeply cleft and furrowed brains, rhesus monkeys are considered to be better models for human neurophysiology than owl monkeys. The monkeys were trained to reach and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden. The monkeys were later shown the robot directly and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted hand-gripping force.

Other laboratories which have developed BCIs and algorithms that decode neuron signals include those run by John Donoghue at Brown University, Andrew Schwartz at the University of Pittsburgh, and Richard Andersen at Caltech. These researchers have been able to produce working BCIs, even using recorded signals from far fewer neurons than Nicolelis did. Donoghue's group reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (a closed-loop BCI) with or without the assistance of a joystick. Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also reproduced BCI control in a robotic arm. The same group also created headlines when they demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's own brain signals.
Andersen's group used recordings of premovement activity from the posterior parietal cortex in their BCI, including signals created when experimental animals anticipated receiving a reward.

In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict the electromyographic (electrical) activity of the muscles of primates are being developed. Such BCIs could be used to restore mobility in paralyzed limbs by electrically stimulating muscles. Miguel Nicolelis and colleagues demonstrated that the activity of large neural ensembles can predict arm position. This work made possible the creation of BCIs that read arm movement intentions and translate them into movements of artificial actuators. Carmena and colleagues programmed the neural coding in a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.

The biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate, and robust access to brain signals. It is conceivable or even likely, however, that such a sensor will be developed within the next twenty years. The use of such a sensor should greatly expand the range of communication functions that can be provided using a BCI.

Development and implementation of a BCI system is complex and time-consuming. In response to this problem, Gerwin Schalk has been developing a general-purpose system for BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led by the Brain–Computer Interface R&D Program at the Wadsworth Center of the New York State Department of Health in Albany, New York, USA.

A new 'wireless' approach uses light-gated ion channels such as channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced the decision-making process of freely moving mice.

The use of BMIs has also led to a deeper understanding of neural networks and the central nervous system. Research has shown that, despite the inclination of neuroscientists to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BMIs to fire in a pattern that allows primates to control motor outputs. The use of BMIs has led to the development of the single-neuron insufficiency principle, which states that even with a well-tuned firing rate a single neuron can only carry a narrow amount of information, and therefore the highest level of accuracy is achieved by recording the firings of the collective ensemble. Other principles discovered with the use of BMIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.
7. HUMAN BCI RESEARCH

INVASIVE BCIs

Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest quality signals of all BCI devices but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.

In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle. Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers later made his artificial eye more portable and enabled him to perform simple tasks unassisted.

In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second-generation implant, marking one of the earliest commercial uses of BCIs. The second-generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Unfortunately, Dobelle died in 2004 before his processes and developments were documented. Subsequently, when Mr. Naumann and the other patients in the program began having problems with their vision, there was no relief
and they eventually lost their "sight" again. Naumann wrote about his experience with Dobelle's work in Search for Paradise: A Patient's Account of the Artificial Vision Experiment and has returned to his farm in southeast Ontario, Canada, to resume his normal activities.

BCIs focusing on motor neuroprosthetics aim either to restore movement in individuals with paralysis or to provide devices to assist them, such as interfaces with computers or robot arms. Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay, were the first to install a brain implant in a human that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), developed locked-in syndrome after suffering a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with the implant, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.

Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005, as part of the first nine-month human trial of Cyberkinetics' BrainGate chip implant. Implanted in Nagle's right precentral gyrus (the area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights, and a TV. One year later, Professor Jonathan Wolpaw received the Altran Foundation for Innovation prize for developing a brain–computer interface with electrodes located on the surface of the skull, instead of directly in the brain.
More recently, research teams led by the BrainGate group at Brown University and a group at the University of Pittsburgh Medical Center, both in collaboration with the United States Department of Veterans Affairs, have demonstrated further success in direct control of robotic prosthetic limbs with many degrees of freedom, using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.

PARTIALLY INVASIVE BCIs

Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better resolution signals than non-invasive BCIs, where the bone tissue of the cranium deflects and deforms signals, and have a lower risk of forming scar tissue in the brain than fully invasive BCIs.

Electrocorticography (ECoG) measures the electrical activity of the brain taken from beneath the skull in a similar way to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed above the cortex, beneath the dura mater. ECoG technologies were first trialed in humans in 2004 by Eric Leuthardt and Daniel Moran from Washington University in St Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders using his ECoG implant. This research indicates that control is rapid, requires minimal training, and may be an ideal tradeoff between signal fidelity and level of invasiveness. Signals can be either subdural or epidural, but are not taken from within the brain parenchyma itself. ECoG had not been studied extensively until recently because of limited access to subjects; currently, the only way to acquire the signal for study is through patients requiring invasive monitoring for localization and resection of an epileptogenic focus.

ECoG is a very promising intermediate BCI modality because it has higher spatial resolution, a better signal-to-noise ratio, a wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time lower technical difficulty, lower clinical risk, and probably superior long-term stability compared with intracortical single-neuron recording. This feature profile, and recent evidence of a high level of control with minimal training requirements, shows potential for real-world application for people with motor disabilities.

Light-reactive imaging BCI devices are still in the realm of theory. These would involve implanting a laser inside the skull. The laser would be trained on a single neuron and the neuron's reflectance measured by a separate sensor. When the neuron fires, the laser light pattern and the wavelengths it reflects would change slightly. This would allow researchers to monitor single neurons while requiring less contact with tissue and reducing the risk of scar-tissue build-up.

NON-INVASIVE BCIs
As well as invasive experiments, there have also been experiments in humans using non-invasive neuroimaging technologies as interfaces. Signals recorded in this way have been used to power muscle implants and restore partial movement in an experimental volunteer. Although they are easy to wear, non-invasive devices produce poor signal resolution because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. Although the waves can still be detected, it is more difficult to determine the area of the brain that created them or the actions of individual neurons.

NEUROGAMING

Currently, there is a new field of gaming called neurogaming, which uses non-invasive BCIs in order to improve gameplay so that users can interact with a console without the use of a traditional controller. Some neurogaming software uses a player's brain waves, heart rate, expressions, pupil dilation, and even emotions to complete tasks or affect the mood of the game. For example, game developers at Emotiv have created a non-invasive BCI that will determine the mood of a player and adjust music or scenery accordingly. This new form of interaction between player and software will enable a player to have a more realistic gaming experience. Because there will be less disconnect between player and console, neurogaming will allow individuals to utilize their "psychological state" and have their reactions transfer to games in real time. However, since neurogaming is still in its first stages, not much is written about the new industry. The first NeuroGaming Conference was held in San Francisco on May 1–2, 2013.

Biofeedback is used to monitor a subject's mental relaxation. In some cases, biofeedback does not monitor electroencephalography (EEG), but instead bodily parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV). Many biofeedback systems are used to treat certain disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four different bands (theta: 4–7 Hz, alpha: 8–12 Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them (see the band-power sketch below).

Passive BCI involves using a BCI to enrich human–machine interaction with implicit information on the user's actual state, for example detecting from driving simulations when users intend to hit the brakes during an emergency stop. Game developers using passive BCIs need to acknowledge that, through repetition of game levels, the user's cognitive state will change or adapt. During the first play of a level, the user will react to things differently from during the second play: for example, the user will be less surprised at an event in the game if he or she is expecting it.

SYNTHETIC TELEPATHY/SILENT COMMUNICATION

In a $6.3 million Army initiative to invent devices for telepathic communication, Gerwin Schalk, supported by a $2.2 million grant, found that it is possible to use ECoG signals to discriminate the vowels and consonants embedded in spoken and in imagined
words. The results shed light on the distinct mechanisms associated with the production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech. Research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. The first such communication took place in the 1960s using EEG to create Morse code using brain alpha waves. Using EEG to communicate imagined speech is less accurate than the invasive method of placing an electrode between the skull and the brain. On February 27, 2013, Duke University researchers successfully connected the brains of two rats with electronic interfaces that allowed them to directly share information, in the first-ever direct brain-to-brain interface.
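As a concrete illustration of the EEG frequency bands listed in the biofeedback discussion above, the following minimal sketch estimates power in the theta, alpha, SMR, and beta bands from a single channel. The sampling rate, window length, and the idea of rewarding a rise in SMR power are assumptions for illustration, not a description of any particular product.

```python
# Minimal sketch of how an EEG biofeedback system might estimate power in the
# four bands mentioned above (theta, alpha, SMR, beta). The sampling rate and
# the single-channel input are illustrative assumptions.
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 7), "alpha": (8, 12), "smr": (12, 15), "beta": (15, 18)}

def band_powers(eeg, fs=250):
    """Return the mean spectral power of one EEG channel in each feedback band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)   # 2-second analysis windows
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs <= hi)
        powers[name] = psd[mask].mean()
    return powers

# Example: feedback could reward the user whenever SMR power rises.
# eeg = np.random.randn(10 * 250)   # hypothetical 10 s of single-channel EEG
# print(band_powers(eeg))
```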
8. OTHER RESEARCH

Electronic neural networks have been deployed which shift the learning phase from the user to the computer. Experiments by scientists at the Fraunhofer Society in 2004 using neural networks led to noticeable improvements within 30 minutes of training.
Experiments by Eduardo Miranda, at the University of Plymouth in the UK, have aimed to use EEG recordings of mental activity associated with music to allow the disabled to express themselves musically through an encephalophone. Ramaswamy Palaniappan has pioneered the development of BCI for use in biometrics to identify or authenticate a person. The method has also been suggested for use as a PIN generation device (for example in ATM and internet banking transactions). The group, which is now at the University of Wolverhampton, had previously developed analogue cursor control using thoughts.

Researchers at the University of Twente in the Netherlands have been conducting research on using BCIs for non-disabled individuals, proposing that BCIs could improve error handling, task performance, and user experience and that they could broaden the user spectrum. They particularly focused on BCI games, suggesting that BCI games could provide challenge, fantasy, and sociality to game players and could thus improve player experience.

The Emotiv company has been selling a commercial video game controller, known as the EPOC, since December 2009. The EPOC uses electromagnetic sensors.

The first BCI session with 100% accuracy (based on 80 right-hand and 80 left-hand movement imaginations) was recorded in 1998 by Christoph Guger. The BCI system used 27 electrodes overlaying the sensorimotor cortex, weighted the electrodes with common spatial patterns, calculated the running variance, and used linear discriminant analysis.

Research is ongoing into military use of BCIs, and since the 1970s DARPA has been funding research on this topic. The current focus of research is user-to-user communication through analysis of neural signals. The project "Silent Talk" aims to detect and analyze the word-specific neural signals, using EEG, which occur before speech is vocalized, and to see if the patterns are generalizable.

A VEP is an electrical potential recorded after a subject is presented with a type of visual stimulus. There are several types of VEPs. Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at certain frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times simply use flashing images. The frequency of the phase reversal of the stimulus can be clearly distinguished in the spectrum of an EEG; this makes detection of SSVEP stimuli relatively easy. SSVEP has proved successful in many BCI systems. This is due to several factors: the elicited signal is measurable in as large a population as the transient VEP, and blink and electrocardiographic artefacts do not affect the frequencies monitored. In addition, the SSVEP signal is exceptionally robust; the topographic organization of the primary visual cortex is
such that a broader area obtains afferents from the central or foveal region of the visual field. SSVEP does have several problems, however. As SSVEPs use flashing stimuli to infer a user's intent, the user must gaze at one of the flashing or iterating symbols in order to interact with the system. It is therefore likely that the symbols could become irritating and uncomfortable to use during longer play sessions, which can often last more than an hour and which may not make for ideal gameplay.

Another type of VEP used in applications is the P300 potential. The P300 event-related potential is a positive peak in the EEG that occurs roughly 300 ms after the appearance of a target stimulus (a stimulus for which the user is waiting or seeking) or oddball stimulus. The P300 amplitude decreases as the target stimuli and the ignored stimuli grow more similar. The P300 is thought to be related to a higher-level attention process or an orienting response. Using P300 as a control scheme has the advantage that the participant only has to attend limited training sessions. The first application to use the P300 model was the P300 matrix. Within this system, a subject would choose a letter from a grid of 6 by 6 letters and numbers. The rows and columns of the grid flashed sequentially, and every time the selected "choice letter" was illuminated the user's P300 was (potentially) elicited. However, the communication process, at approximately 17 characters per minute, was quite slow. A P300-based BCI offers a discrete selection rather than a continuous control mechanism. The advantage of P300 use within games is that the player does not have to teach himself or herself how to use a completely new control system, and so only has to undertake short training instances to learn the gameplay mechanics and basic use of the BCI paradigm.

BCIs can provide discrete or proportional output. A simple discrete output could be "yes" or "no", or it could be a particular value out of N possible values. Proportional output could be a continuous value within the range of a certain minimum and maximum. Depending on the mental strategy and on the brain patterns used, some BCIs are more suitable for providing discrete output values, while others are more suitable for allowing proportional control. A P300 BCI, for instance, is particularly appropriate for selection applications. SMR-based BCIs have been used for discrete control, but are best suited to proportional control applications such as 2-dimensional cursor control. In fact, the range of possible BCI applications is very broad, from very simple to complex. BCIs have been validated with many applications, including spelling devices, simple computer games, environmental control, navigation in virtual reality, and generic cursor control applications.

Most of these applications run on conventional computers that host both the BCI system and the application. Typically, the application is specifically tailored for a particular type of BCI, and often the application is an integral part of the BCI system. BCIs that can connect to and effectively control a range of already existing assistive devices, software, and appliances are rare. An increasing number of systems allow control of more sophisticated devices, including orthoses, prostheses, robotic arms, and mobile robots. There is little
argument that such an interface would be a boon to BCI research. BCIs can control any application that other interfaces can control, provided these applications can function effectively with the low information throughput of BCIs. On the other hand, BCIs are normally not well suited to controlling more demanding and complex applications, because they lack the necessary information transfer rate.

Consider a patient who is thirsty and wants to convey that she wants to drink some water now. She might perform this task by selecting each individual letter and writing the message "water, please" or just "water". Since this is a wish the patient may have quite often, it would be useful to have a special symbol or command for this message. In this way, the patient can convey this particular message much faster, ideally with just one mental task. Many more shortcuts might allow other tasks, but these shortcuts lack the flexibility of writing individual messages. Therefore, an ideal BCI would allow a combination of simple commands to convey information flexibly and shortcuts that allow specific, common, complex commands; the speller sketch below illustrates both ideas.

Examples of BCI applications
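As an illustration of the P300 matrix speller described earlier, and of how a frequent message can be given its own shortcut cell, the following sketch selects the symbol at the intersection of the row and column that evoke the strongest P300-like response. The grid layout, sampling rate, scoring window, and the raw amplitude score are all assumptions for illustration; a practical speller would use a trained classifier on the epochs rather than a raw average.

```python
import numpy as np

# Hypothetical 6 x 6 speller grid; the last cell is a shortcut for a frequent
# message, as discussed above ("water, please" with a single selection).
GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), ["5", "6", "7", "8", "9", "WATER"]]
FS = 250                                          # assumed EEG sampling rate in Hz
WINDOW = slice(int(0.25 * FS), int(0.45 * FS))    # samples around ~300 ms post-flash

def p300_score(epochs):
    """Average the epochs (n_flashes x n_samples) recorded after flashes of one
    row or column and return the mean amplitude in the P300 window."""
    return epochs.mean(axis=0)[WINDOW].mean()

def select_symbol(row_epochs, col_epochs):
    """row_epochs[i] / col_epochs[j] hold the epochs recorded after flashes of
    row i / column j; the chosen symbol lies where the two strongest
    P300-like responses intersect."""
    best_row = int(np.argmax([p300_score(e) for e in row_epochs]))
    best_col = int(np.argmax([p300_score(e) for e in col_epochs]))
    return GRID[best_row][best_col]
```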
In other words, the BCI should allow a combination of process-oriented (or low-level) control and goal-oriented (or high-level) control. Low-level control means the user has to manage all the intricate interactions involved in achieving a task or goal, such as spelling the individual letters of a message. In contrast, goal-oriented or high-level control means the users simply communicate their goal to the application. Such applications need to be sufficiently intelligent to autonomously perform all necessary sub-tasks to achieve the goal. In any interface, users should not be required to control unnecessary low-level details of system operation. This is especially important with BCIs. Allowing low-level control of a wheelchair or robot arm, for example, would not only be slow and frustrating but potentially dangerous.

The semi-autonomous wheelchair Rolland III can deal with different input modalities, such as low-level joystick control or high-level discrete control. Autonomous and semi-autonomous navigation is supported.

Semi-autonomous assistive devices

The rehabilitation robot FRIEND II is a semi-autonomous system designed to assist disabled people in activities of daily living. It is a system based on a conventional wheelchair equipped with a stereo camera system, a robot arm with 7 degrees of freedom, a gripper with a force/torque sensor, a smart tray with a tactile surface and weight sensors, and a computing unit consisting of three independent industrial PCs. FRIEND II can perform certain operations completely autonomously. An example of such an operation is a "pour in beverage" scenario. In this scenario, the system detects the bottle and the glass (both located at arbitrary positions on the tray), grabs the bottle, moves the bottle to the glass
while automatically avoiding any obstacles on the tray, fills the glass with liquid from the bottle while avoiding pouring too much, and finally puts the bottle back in its original position, again avoiding any possible collisions.

These assistive devices offload much of the work from the user onto the system. The wheelchair provides safety and high-level control by continuous path planning and obstacle avoidance. The rehabilitation robot offers a collection of tasks which are performed autonomously and can be initiated by single commands. Without this device intelligence, the user would need to directly control many aspects of device operation. Consequently, controlling a wheelchair, a robot arm, or any complex device with a BCI would be almost impossible, or at least very difficult, time-consuming, frustrating, and in many cases even dangerous. Such complex BCI applications are not broadly available, but are still topics of research and are being evaluated in research laboratories.

The success of these applications, or indeed of any BCI application, will depend on their reliability and on their acceptance by users. Another factor is whether these applications provide a clear advantage over conventional assistive technologies. In the case of completely locked-in patients, alternative control and communication methodologies do not exist; BCI control and communication is usually the only practical option. However, the situation is different with less severely disabled or healthy users, since they may be able to communicate through natural means like speech and gesture, and alternative control and communication technologies based on movement, such as keyboards or eye tracking systems, are available to them. Until recently, it was assumed that users would only use a BCI if other means of communication were unavailable. More recent work showed a user who preferred a BCI over an eye tracking system. Although BCIs are gaining acceptance with broader user groups, there are many scenarios where BCIs remain too slow and unreliable for effective control. For example, most prostheses cannot be effectively controlled with a BCI.

Typically, prostheses for the upper extremities are controlled by electromyographic (myoelectric) signals recorded from residual muscles of the amputation stump. In the case of transradial amputation (forearm amputation), the muscle activity recorded by electrodes over the residual flexor and extensor muscles is used to open, close, and rotate a hand prosthesis. Controlling such a device with a BCI is not practical. For higher amputations, however, the prosthesis has more degrees of freedom, and conventional myoelectric control of the prosthetic arm and hand becomes very difficult. Controlling such a device with a BCI may seem to be an option, and in fact several approaches have been investigated to control prostheses with invasive and non-invasive BCIs. Ideally, the control of prostheses should provide highly reliable, intuitive, simultaneous, and proportional control of many degrees of freedom. In order to provide sufficient flexibility, low-level control is required. Proportional control in this case means the user can modulate the speed and force of the actuators in the prosthesis. "Simultaneous" means that several degrees of freedom (joints)
can be controlled at the same time. That is, for instance, the prosthetic hand can be closed while the wrist of the hand is rotated at the same time. "Intuitive" means that learning to control the prosthesis should be easy.

None of the BCI approaches that have currently been suggested for controlling prostheses meets these criteria. Non-invasive approaches suffer from limited bandwidth and will not be able to provide complex, high-bandwidth control in the near future. Invasive approaches show considerably more promise for such control in the near future. However, these approaches will need to demonstrate that they have clear advantages over other methodologies such as myoelectric control combined with targeted muscle reinnervation (TMR). TMR is a surgical technique that transfers residual arm nerves to alternative muscle sites. After reinnervation, these target muscles produce myoelectric (electromyographic) signals on the surface of the skin that can be measured and used to control prosthetic devices. For example, in persons who have had their arm removed at the shoulder (called "shoulder disarticulation amputees"), residual peripheral nerves of the arm and hand are transferred to separate regions of the pectoralis muscles.

Prototype of prosthesis

Today, there is no BCI that can allow independent control of 7 different degrees of freedom, which is necessary to duplicate all the movements that a natural arm could make.
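To make the terms "proportional" and "simultaneous" more concrete, the toy sketch below maps two independently decoded control values (however they are obtained, for example from myoelectric or BCI decoding) onto the velocities of two joints within the same control cycle. The value ranges, joint names, and scaling factors are illustrative assumptions, not an actual prosthesis interface.

```python
# Toy illustration of simultaneous, proportional control of two degrees of
# freedom: each decoded command in [-1, 1] sets the speed of its joint, and
# both joints are updated in the same time step.
from dataclasses import dataclass

MAX_HAND_SPEED = 0.5    # assumed maximum hand open/close speed (arbitrary units)
MAX_WRIST_SPEED = 1.0   # assumed maximum wrist rotation speed (arbitrary units)

@dataclass
class ProsthesisState:
    hand_aperture: float = 1.0   # 1.0 = fully open, 0.0 = fully closed
    wrist_angle: float = 0.0     # rotation in radians

def update(state, hand_cmd, wrist_cmd, dt=0.05):
    """Proportional: the magnitude of each command sets the speed of its joint.
    Simultaneous: both joints move within the same control cycle."""
    state.hand_aperture = min(1.0, max(0.0, state.hand_aperture + hand_cmd * MAX_HAND_SPEED * dt))
    state.wrist_angle += wrist_cmd * MAX_WRIST_SPEED * dt
    return state

# Example: close the hand at half speed while slowly rotating the wrist.
# state = update(ProsthesisState(), hand_cmd=-0.5, wrist_cmd=0.2)
```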
Sufficiently independent control signals, on the other hand, can be derived from the myoelectric signals recorded from the pectoralis after TMR. Moreover, control is largely intuitive, since users invoke muscle activity in the pectoralis in a similar way as they did to invoke movement of their healthy hand and arm. For instance, the users' intent to open the hand of their "phantom limb" results in particular myoelectric activity patterns that can be recorded from the pectoralis and translated into control commands that open the prosthetic hand correspondingly. Because of this intuitive control feature, TMR-based prosthetic devices can also be seen as thought-controlled neuroprostheses. Clearly, TMR holds the promise of improving the operation of complex prosthetic systems. BCI approaches (non-invasive and invasive) will need to demonstrate clinical and commercial advantages over TMR approaches in order to be viable.

The example with prostheses underscores a problem and an opportunity for BCI research. The problem is that BCIs cannot provide effective control here because they cannot provide sufficient reliability and bandwidth (information per second). Similarly, the bandwidth and reliability of modern BCIs is far too low for many other goals that are fairly easy with conventional interfaces. Rapid communication, most computer games, detailed wheelchair navigation, and cell phone dialing are only a few examples of goals that require a regular interface.

Does this mean that BCIs will remain limited to severely disabled users? We think not, for several reasons. First, as noted above, there are many ways to increase the "effective bandwidth" of a BCI through intelligent interfaces and high-level selection. Second, BCIs are advancing very rapidly. We do not think a BCI that is as fast as a keyboard is imminent, but substantial improvements in bandwidth are feasible. Third, some people may use BCIs even though they are slow, because they are attracted to the novel and futuristic aspect of BCIs. Many research labs have demonstrated that computer games such as Pong, Tetris, or Pacman can be controlled by BCIs and that rather complex computer applications like Google Earth can be navigated by BCI. Users could control these systems more effectively with a keyboard, but may consider a BCI more fun or engaging. Motivated by the advances in BCI research over recent years, companies have started to consider BCIs as a possibility to augment human–computer interaction. This interest is underlined by a number of patents and new products, which are further discussed in the concluding chapter of this book.

We are especially optimistic about BCIs for new user groups, for two reasons. First, BCIs are becoming more reliable and easier to apply. New users will need a BCI that is robust, practical, and flexible. All applications should function outside the lab, using only a minimal number of EEG channels (ideally only one channel), a simple and easy-to-set-up BCI system, and a stable EEG pattern suitable for online detection. The Graz BCI lab developed an example of such a system: it uses a specific motor imagery-based BCI designed to detect
the short-lasting ERS pattern in the beta band after imagination of brisk foot movement in a single EEG channel. Second, we are optimistic about a new technology called a "hybrid" system, which is composed of two BCIs or at least one BCI and another system. One example of such a hybrid BCI relies on a simple, one-channel ERD BCI to activate the flickering lights of a 4-step SSVEP-based hand orthosis only when the SSVEP system was needed for control.
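A rough sketch of this hybrid idea follows: an ERD detector on a motor channel acts as a brain switch that toggles the flicker stimulation, and an SSVEP classifier on a visual channel selects one of the four orthosis steps only while the stimulation is active. The flicker frequencies, bands, thresholds, and helper functions are hypothetical placeholders, not the actual Graz implementation.

```python
# Minimal sketch of a hybrid BCI: a one-channel ERD "brain switch" toggles the
# SSVEP flicker lights, and SSVEP classification selects one of four orthosis
# steps only while the lights are active. All values here are assumed.
import numpy as np

SSVEP_FREQS = [8.0, 10.0, 12.0, 15.0]  # hypothetical flicker frequencies, one per step
ERD_RATIO = 0.6                        # band power below 60% of baseline counts as ERD
FS = 250                               # assumed sampling rate in Hz

def band_power(window, band, fs=FS):
    """Spectral power of one channel within a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), 1 / fs)
    return spectrum[(freqs >= band[0]) & (freqs <= band[1])].mean()

def erd_detected(motor_window, baseline, band=(16, 24)):
    """Brain switch: beta-band power dropping well below baseline signals intent."""
    return band_power(motor_window, band) < ERD_RATIO * baseline

def classify_ssvep(visual_window, fs=FS):
    """Pick the flicker frequency with the most power -> index of orthosis step."""
    spectrum = np.abs(np.fft.rfft(visual_window)) ** 2
    freqs = np.fft.rfftfreq(len(visual_window), 1 / fs)
    scores = [spectrum[np.argmin(np.abs(freqs - f))] for f in SSVEP_FREQS]
    return int(np.argmax(scores))

def hybrid_step(motor_window, visual_window, baseline, lights_on):
    """One control cycle: toggle the flicker lights with the ERD switch and only
    issue an orthosis command while the lights are on."""
    if erd_detected(motor_window, baseline):
        lights_on = not lights_on
    command = classify_ssvep(visual_window) if lights_on else None
    return lights_on, command
```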
9. THE BCI AWARD

The Annual BCI Research Award, endowed with 3,000 USD, is awarded in recognition of outstanding and innovative research in the field of brain-computer interfaces. Each year, a renowned research laboratory is asked to judge the submitted projects and to award the prize. The jury consists of world-leading BCI experts recruited by the awarding laboratory. The winners of the BCI Award to date are:

 2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang (A*STAR, Singapore): Motor imagery-based Brain-Computer Interface robotic rehabilitation for stroke.

 2011: Moritz Grosse-Wentrup and Bernhard Schölkopf (Max Planck Institute for Intelligent Systems, Germany): What are the neuro-physiological causes of performance variations in brain-computer interfacing?

 2012: Surjo R. Soekadar and Niels Birbaumer (Applied Neurotechnology Lab, University Hospital Tübingen and Institute of Medical Psychology and Behavioral Neurobiology, Eberhard Karls University, Tübingen, Germany): Improving Efficacy of Ipsilesional Brain-Computer Interface Training in Neurorehabilitation of Chronic Stroke.

 2013: M. C. Dadarlat, J. E. O'Doherty and P. N. Sabes (Department of Physiology, Center for Integrative Neuroscience, University of California, San Francisco, and UC Berkeley-UCSF Bioengineering Graduate Program, San Francisco, CA, US): A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision.
10. LOW-COST BCI-BASED INTERFACES

Recently a number of companies have scaled back medical-grade EEG technology (and in one case, NeuroSky, rebuilt the technology from the ground up) to create inexpensive BCIs. This technology has been built into toys and gaming devices; some of these toys, such as the NeuroSky and Mattel MindFlex, have been extremely commercially successful.

 In 2006 Sony patented a neural interface system allowing radio waves to affect signals in the neural cortex.

 In 2007 NeuroSky released the first affordable consumer-based EEG along with the game NeuroBoy. This was also the first large-scale EEG device to use dry sensor technology.

 In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography.

 In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca.

 In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course. It is by far the best-selling consumer-based EEG to date.

 In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force.

 In 2009 Emotiv Systems released the EPOC, a 14-channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection.

 In November 2011 Time Magazine selected "necomimi", produced by Neurowear, as one of the best inventions of the year. The company announced that it expected to launch a consumer version of the garment, consisting of cat-like ears controlled by a brain-wave reader produced by NeuroSky, in spring 2012.

 In March 2012 g.tec introduced the intendiX-SPELLER, the first commercially available BCI system for home use, which can be used to control computer games and apps. It can detect different brain signals with an accuracy of 99%. g.tec has hosted several workshop tours to demonstrate the intendiX system and other hardware and software to the public, such as a g.tec workshop tour of the US West Coast during September 2012.
 In January 2013 Hasaca National University (HNU) announced the first Master's program in Virtual Reality Brain-Computer Interface application design.

 In February 2014 They Shall Walk (a nonprofit organization focused on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI via the production and localization of a Schumann resonance cavity.