Brain Access
Jay Lohokare
Rohan Karhadkar
Akshay Satam
Dr. Xiaojun Bi
State University of New York at Stony Brook
Spring 2018
Human-Computer Interaction course
Motivation
• Non-invasive interface that allows users to interact with devices through new 'modes' – a multi-modal system
• Physically disabled users – access without physical motion
• End-to-end system allowing seamless control
• No limit on the number of supported activities – control any smartphone, any application
• Authentication along with control to ensure privacy and security
Why alternative interface modes?
• Virtual reality – head-mounted gear
• Disability
• Private interface
Existing works
EEG
• "Electroencephalography (EEG) is an electrophysiological monitoring method to record electrical activity of the brain." Actions, thoughts, and emotions generate electrical signals in the brain, which are captured by electrodes, resulting in the EEG
• Existing works focus on enabling brain-computer access for specific applications
– No universal interface developed.
• Brain waves can be used to classify as many as 10 actions with accuracy above
90% [1]
• "NeuroPhone" – an iPhone interface that uses flashing images the user needs to focus on [2]
• Samsung demoed tablet control using a flashing-image-based BCI [4]
• Many systems use BCI to perform various actions, such as controlling robot motion [3], controlling drones [5], and typing applications [6][7]. However, all of these systems rely on visual stimulation (like P300), which limits their use for building a generic BCI interface.
• Some studies use EEG directly for action classification. For example, [8] uses raw EEG data with visual stimulation to let users play a rock-paper-scissors game.
• Many projects achieved higher classification accuracy, but required invasive surgery.
• The system keeps flashing the numbers
• Visual stimuli activate certain brain waves
• These readings are recorded and mapped to the image flashed at that moment to determine the exact action the user intends
• Such a system can't be used as part of existing systems (apps, games, etc.) – new interfaces need to be designed for compatibility
• Our system provides a solution
Existing works
EEG
[Figures: MIT researchers playing a game using BCI; "NeuroPhone" – smartphone BCI using flashing images; Samsung researchers controlling a tablet; commercial BCI systems]
Existing works
EMG
• Electromyography (EMG) is an electrodiagnostic medicine technique for
evaluating and recording the electrical activity produced by skeletal
muscles.
• EMG provides more direct signals than EEG for action classification.
• EMG detects the electric potential generated by muscles when a person tries to move or perform some action.
• "AlterEgo" achieves 92% median accuracy for a silent conversational interface [9]
• Microsoft's muCIs (muscle-computer interfaces) is another project using EMG to enable human-computer interaction [10]
• Another study uses EMG to control a mouse cursor, achieving 70% accuracy [11]
Our contribution
• All existing work focuses on creating alternative interfaces to applications
• These interfaces expose only limited functionality of the underlying platform
• Existing systems with conventional displays and interfaces should be able to work with EEG/EMG-based applications
• There is a need for a plug-and-play system that can make all applications compatible with BCI/muscle-control interfaces
• We present a system that can be integrated with any existing display to interface it with brain- or muscle-based control
• No need to re-develop applications to interface them with the brain
Approach to the solution
• Overlay numbers on the usual display interface
• No special application screens with limited features
• The system contains 3 parts (a sketch of the dispatch logic follows this list):
1. A background service that detects all possible interactions on the system and renders numbers at those locations. In our prototype, we used Android's Accessibility service, which can access the UI elements of the smartphone screen in real time. The service gives us the coordinates of all 'interaction elements' on the screen (elements that allow interactions like touch, long touch, and swipe).
2. We then used these coordinates to render numbers on an empty Android activity and kept updating this overlay screen.
3. We implemented a Bluetooth Low Energy (BLE) service that listens for messages containing numbers (with a defined encoding). When this service receives a number, it forwards it to the accessibility service, which maps numbers to the coordinates of elements on the screen. The service then performs the touch/swipe interaction at the corresponding coordinate.
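The mapping logic shared by these three parts can be sketched compactly. The Python below is only an illustration of the number-to-coordinate dispatch; the names (InteractionElement, NumberDispatcher, perform) are hypothetical, and the real prototype implements this inside an Android AccessibilityService and a BLE listener.

```python
# Hypothetical sketch: the real prototype implements this logic inside an
# Android AccessibilityService; Python is used here only to show the
# number-to-coordinate mapping and dispatch.
from dataclasses import dataclass

@dataclass
class InteractionElement:
    x: int        # screen x-coordinate of the element
    y: int        # screen y-coordinate of the element
    action: str   # "touch", "long_touch", or "swipe"

def perform(action: str, x: int, y: int) -> None:
    # Placeholder: on Android this would call dispatchGesture() with a
    # GestureDescription at (x, y) from the accessibility service.
    print(f"{action} at ({x}, {y})")

class NumberDispatcher:
    def __init__(self) -> None:
        self.mapping: dict[int, InteractionElement] = {}

    def refresh_overlay(self, elements: list[InteractionElement]) -> None:
        """Parts 1 and 2: assign a number to every interactive element and
        (conceptually) render that number at the element's coordinates."""
        self.mapping = dict(enumerate(elements))

    def on_ble_message(self, number: int) -> None:
        """Part 3: a decoded number arrives over BLE; look up the element
        it labels and inject the corresponding interaction."""
        el = self.mapping.get(number)
        if el is not None:
            perform(el.action, el.x, el.y)

# Example: two elements on screen; the headset sends "1".
d = NumberDispatcher()
d.refresh_overlay([InteractionElement(120, 480, "touch"),
                   InteractionElement(540, 960, "swipe")])
d.on_ble_message(1)   # -> swipe at (540, 960)
```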
Approach to the solution
OpenBCI Ganglion (4-channel interface) → Bluetooth Low Energy → Smartphone application
Approach to the solution
1. Accessibility service recognizes all possible interactions
2. Accessibility service overlays numbers on screen
3. Action performed
Implementation
OpenBCI Ganglion Board
• 6 pins (2 ground)
• Powered by a 3.3 V to 12 V DC battery
• Current draw: 14 mA idle, 15 mA connected and streaming data
• Simblee BLE radio module (Arduino-compatible)
• MCP3912 analog front end
• LIS2DH 3-axis accelerometer
• MicroSD card slot
• Board dimensions: 2.41" x 2.41" (octagon with 1" edges)
Data capture
• We ran various experiments to find optimal data-capture configurations for EEG
• 0.3 to 50 Hz band-pass filter – removes noise related to breathing and background activity (a sketch follows this list)
• Existing EEG data sets have high background noise because the subject was not in a calm state when the experiment was conducted
• Our initial approach involved building a CNN classifier. We observed that the signals differ from person to person, so the same classifier cannot be used for different people and retraining the model becomes imperative. Existing works support this claim.
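As an illustration of the filtering step, here is a minimal Python sketch of the 0.3–50 Hz band-pass filter, assuming the 200 Hz sampling rate reported in the results. The project's exact filter design is not stated, so a 4th-order Butterworth with zero-phase filtering is an assumption.

```python
# Minimal sketch of the 0.3-50 Hz band-pass filtering step. The 4th-order
# Butterworth design and zero-phase filtering are assumptions; the original
# project does not specify its filter implementation.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0            # Ganglion sampling rate (Hz), per the results section
LOW, HIGH = 0.3, 50.0 # band edges (Hz)

def bandpass(eeg: np.ndarray) -> np.ndarray:
    """Filter one EEG channel to 0.3-50 Hz to suppress drift and noise."""
    b, a = butter(4, [LOW / (FS / 2), HIGH / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg)   # zero-phase filtering avoids time shift

# Example: filter 30 s of a single channel (6000 samples).
raw = np.random.randn(int(30 * FS))
clean = bandpass(raw)
```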
Deep learning implemented
• The literature provides strong support for using CNNs for classification
• We used a 3-layer CNN (inspired by the MIT AlterEgo architecture) to build the classifier; a sketch follows this list
• We generated training data using the Ganglion kit and validated the model using a dataset available on Kaggle
• Implemented architecture – a CNN classifier built with TensorFlow. The model has to be trained for every user and then imported into the application. The drawback is that CNNs cannot be trained on smartphones due to their limited computing capability.
• Proposed enhancement – a CNN feature extractor that is trained once and works for any user. The feature extractor extracts features for any user and passes them to an SVM classifier, which can easily be trained on any device (including smartphones).
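The sketch below shows what a 3-layer CNN classifier of this kind could look like in TensorFlow (Keras). The window length, filter counts, and kernel sizes are illustrative assumptions; only the overall shape (three convolutional layers, 4-channel input, 10 digit classes) follows the description above.

```python
# A minimal sketch of a 3-layer CNN digit classifier, assuming 1 s windows
# of 4-channel EEG at 200 Hz (200 x 4 input) and 10 classes. Layer sizes
# are illustrative; the project's exact architecture is not published.
import tensorflow as tf

def build_cnn(window=200, channels=4, classes=10):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(window, channels)),
        tf.keras.layers.Conv1D(32, 7, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling1D(),  # 128-d feature vector
        tf.keras.layers.Dense(classes, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=30, validation_data=(x_val, y_val))
```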
Deep learning implemented
[Figure: CNN architecture, inspired by the MIT AlterEgo implementation]

4 channel (implemented):
Training stage: Data processing → Train the classifier model
Classification stage: Data processing → Pre-trained classifier → Number (0-9)

16 channel (proposed enhancement):
Training stage: Data processing → Train the feature extractor
Classification stage: Data processing → Pre-trained feature extractor → Features extracted → SVM → Number (0-9)
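A minimal sketch of the proposed enhancement, reusing the CNN above (minus its softmax head) as a shared feature extractor and training a lightweight per-user SVM with scikit-learn. The weights file name is hypothetical.

```python
# Sketch of the proposed pipeline: the CNN (trained once, offline) produces
# features; a small SVM is calibrated per user. Assumes build_cnn() from
# the earlier sketch; "feature_extractor.h5" is a hypothetical weights file.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

cnn = build_cnn()                            # pretrained once for all users
# cnn.load_weights("feature_extractor.h5")   # hypothetical weights file
extractor = tf.keras.Model(inputs=cnn.inputs,
                           outputs=cnn.layers[-2].output)  # 128-d features

def train_user_svm(x_user: np.ndarray, y_user: np.ndarray) -> SVC:
    """Per-user calibration: cheap enough to run on a smartphone."""
    feats = extractor.predict(x_user)        # (n, 128) feature vectors
    clf = SVC(kernel="rbf")
    clf.fit(feats, y_user)
    return clf

def classify(clf: SVC, x: np.ndarray) -> np.ndarray:
    return clf.predict(extractor.predict(x))  # digits 0-9
```

The design rationale: only the small SVM needs per-user training, while the expensive CNN training happens once, offline, which is what makes on-device retraining feasible.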
Results
• Our primary achievement – creating a system that provides a plug-and-play interface to control any smartphone
• EEG signal classification model – a weak model, because only 4 input channels were available
• We still achieved significant classification accuracies
• However, the hardware we had could not support building an end-to-end working system
• The classification accuracies achieved are reported on the next slide. With 16-channel hardware, we could potentially build a system that achieves high (90%+) accuracy for 10-digit classification (as already demonstrated in previous research)
• We also validated an EEG-based authentication system as proposed by [12] – this ensures privacy and end-to-end security in the system
• We validated that EEG data captured from the right side of the brain gives better accuracy in predicting which number a person is thinking of
Results – Electrode positions
[Figures: electrode position labels; default electrode position; right hemisphere (intraparietal sulcus) – improved results]
Results – Classification accuracy
We extracted data for each digit in two batches of 30 seconds each, at a sampling rate of 200 Hz. During this interval, the subject tried to stay in a passive state and thought only about the particular digit. (A windowing sketch follows the table below.)
2 classes = 0, 1
3 classes = 0, 1, 2
5 classes = 0, 1, 2, 3, 4
Number of classes    Data from default position    Data from right hemisphere
2                    92.2%                         94.7%
3                    76.4%                         84.2%
5                    41.6%                         51.4%
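For concreteness, here is a sketch of how the two 30-second recordings per digit could be segmented into labeled examples. The 1-second non-overlapping window is an assumption; the project's actual windowing is not stated.

```python
# Sketch of segmenting a 30 s, 4-channel recording into labeled windows.
# The 1 s non-overlapping window length is an assumption.
import numpy as np

FS = 200     # sampling rate (Hz)
WIN = FS     # 1 s window = 200 samples

def epochs(recording: np.ndarray, label: int):
    """Split a (samples, channels) recording into fixed-length windows."""
    n = recording.shape[0] // WIN
    x = recording[: n * WIN].reshape(n, WIN, recording.shape[1])
    return x, np.full(n, label)

# Example: batch 1 of digit 3 (30 s, 4 channels) -> 30 training windows;
# batch 2 could serve as the held-out test set.
batch1 = np.random.randn(30 * FS, 4)
x_train, y_train = epochs(batch1, label=3)
```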
Ubiquitous computing & IoT
• Instead of the existing approach of creating a new custom interface for each BCI application, our system allows universal control of any device and any existing application
• Users with EEG headsets can connect to any device through BLE and control it wirelessly
• A communication standard can be defined to enable control of any appliance using codes (sketched below). For example –
1 = On
2 = Off
• The benefit of such a standard is that a user can control IoT-enabled devices as well as complex systems with GUIs using brain/EMG interfaces
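A hypothetical sketch of such a code table as it might look on the receiving appliance; the command names and the fallback behavior are assumptions, not part of any existing standard.

```python
# Hypothetical appliance-side code table for the proposed standard:
# decoded numbers arriving over BLE map to device commands.
COMMANDS = {
    1: "ON",
    2: "OFF",
    # further codes could map to brightness, channel, etc.
}

def handle_code(code: int) -> str:
    """Translate a classified digit into an appliance command."""
    return COMMANDS.get(code, "IGNORE")   # unknown codes are ignored

assert handle_code(1) == "ON"
```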
Future work
• Developing the plug-and-play interface was the primary goal of this project
• Future work can involve better data processing and deep learning models to achieve higher accuracies
• Evaluating the performance of EEG classification under various human conditions (stress, physical activity, thought, etc.)
• We aim to build an end-to-end system using 16-channel EEG devices (instead of the current 4-channel device) to evaluate the actual integration of the plug-and-play interface with a real deep learning model
• Validating "BrainSense" with EMG integration
Thank You!
References
1. https://www.independent.co.uk/news/science/read-your-mind-brain-waves-thoughts-locked-in-syndrome-toyohashi-japan-a7687471.html
2. Andrew Campbell, Tanzeem Choudhury, Shaohan Hu, Hong Lu, Matthew K. Mukerjee, Mashfiqui Rabbi, and Rajeev D.S. Raizada. 2010. NeuroPhone: brain-mobile phone interface using a wireless EEG headset. In Proceedings of the Second ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds (MobiHeld '10). ACM, New York, NY, USA, 3-8. DOI: http://dx.doi.org/10.1145/1851322.1851326
3. D. Wijayasekara and M. Manic, "Human machine interaction via brain activity monitoring," 2013 6th International Conference on Human System Interactions (HSI), Sopot, 2013, pp. 103-109. DOI: 10.1109/HSI.2013.6577809. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6577809&isnumber=6577789
4. https://www.technologyreview.com/s/513861/samsung-demos-a-tablet-controlled-by-your-brain/
5. http://www.cs.zju.edu.cn/~gpan/publication/2012-ubicomp-flyingbuddy2.pdf
6. https://www.sciencedirect.com/science/article/pii/S0010482516303092
7. Q. T. Obeidat, T. A. Campbell and J. Kong, "Spelling With a Small Mobile Brain-Computer Interface in a Moving Wheelchair," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 11, pp. 2169-2179, Nov. 2017. DOI: 10.1109/TNSRE.2017.2700025
8. https://www.seeker.com/mind-reading-computer-knows-what-youre-about-to-say-1770704180.html
9. Arnav Kapur, Shreyas Kapur, and Pattie Maes. 2018. AlterEgo: A Personalized Wearable Silent Speech Interface.
In 23rd International Conference on Intelligent User Interfaces (IUI '18). ACM, New York, NY, USA, 43-53. DOI:
https://doi.org/10.1145/3172944.3172977
10. https://www.microsoft.com/en-us/research/project/muscle-computer-interfaces-mucis/
11. https://ieeexplore.ieee.org/abstract/document/1020453/
12. https://ieeexplore.ieee.org/document/6889569/
