Research Work
Gender-based ECoG decoding
NAFIZ ISHTIAQUE AHMED
9 Sept 2019
UOU – UNIVERSITY OF ULSAN
Auditory Neuroscience
Male and female voice-based electrocorticography
decoding model
• The previous highest gender identification accuracy is 70%.
• We decode voice gender with an acceptable accuracy of 86% and identify the brain regions activated by each gender's voice.
• In addition, this study demonstrates a gender-based hierarchical model.
[Pipeline figure: words spoken by male and female → electrodes → cortical surface field potential → event-related spectral perturbation → deep learning algorithm → gender classification → gender decoding accuracy]
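A rough, self-contained sketch of the pipeline above, using toy data in place of real ECoG recordings; the channel counts, window sizes, and the small MLP standing in for the deep learning model are all illustrative assumptions, not the study's actual code:

```python
# Minimal sketch of the slide's pipeline, not the actual study code.
# Shapes, channel counts, and the toy MLP are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

FS = 1000  # ECoG sampling rate (Hz) -- assumed

def ersp_features(trials, fs=FS):
    """Event-related spectral perturbation features:
    per-trial spectrogram power, baseline-normalized, flattened."""
    feats = []
    for trial in trials:                      # trial: (n_channels, n_samples)
        ch_feats = []
        for ch in trial:
            f, t, Sxx = spectrogram(ch, fs=fs, nperseg=128)
            baseline = Sxx[:, :2].mean(axis=1, keepdims=True) + 1e-12
            ch_feats.append(10 * np.log10(Sxx / baseline).ravel())
        feats.append(np.concatenate(ch_feats))
    return np.array(feats)

# Toy data standing in for ECoG trials of male/female spoken words
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 4, 1000))   # 40 trials, 4 channels, 1 s
y = rng.integers(0, 2, size=40)              # 0 = male voice, 1 = female voice

X = ersp_features(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("gender decoding accuracy:", clf.score(X_te, y_te))
```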
A Brain–Robot Interaction System by
Fusing Human and Machine Intelligence
Xiaoqian Mao, Wei Li, Chengwei Lei, Jing Jin,
Feng Duan, and Sherry Chen
March 22, 2019
Presented By
Nafiz Ishtiaque Ahmed
UOU- UNIVERSITY OF ULSAN
Introduction
 So far, BCI has been successfully applied to brain–robot interaction (BRI).
 However, in real-time scenarios the low information transfer rate (ITR) makes such systems impractical (the standard ITR formula is given below for context).
 For better usability, BRI systems generally adopt a one-way control mode, such as picking up an object and placing it in a box.
 Using the P300 component, the average response time is 6.6 s (six-class P300 model), and for steady-state visual evoked potential (SSVEP) it is 3.65 s.
 This makes it difficult to directly apply these paradigms to control a robot like NAO.
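As background for the ITR figures above, the standard Wolpaw formula (not given in the original slides) relates class count, accuracy, and selection time:

```latex
% Wolpaw ITR (bits per selection), standard in the BCI literature;
% N = number of classes, P = classification accuracy, T = time per selection.
\[
  B = \log_2 N + P \log_2 P + (1 - P)\,\log_2\!\frac{1 - P}{N - 1},
  \qquad
  \mathrm{ITR} = \frac{B}{T}
\]
% e.g. with N = 6 and T = 6.6 s for the six-class P300 model,
% even perfect accuracy (P = 1) yields only log2(6)/6.6 ~ 0.39 bits/s.
```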
BRI System
 Machine learning is combined with the BCI system to mitigate these drawbacks.
 Machine intelligence assists the robot in accomplishing tasks by analyzing sensor data, while the human supervises the system at a high level.
 Here, the P300 pattern selects the object-of-interest node (where to go).
 If the machine learning algorithm fails to avoid an obstacle on the path, the steady-state visual evoked potential (SSVEP) pattern is triggered to avoid intermediate nodes or obstacles, as sketched below.
FIG: Cerebot platform for the BRI system.
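A structural sketch of this fused control scheme, with toy stubs standing in for the real P300/SSVEP classifiers, planner, and NAO interface (all names here are hypothetical, not the paper's API):

```python
# Structural sketch (my reading of the slide), with toy stubs so it runs;
# detect_p300 / detect_ssvep stand in for the real EEG classifiers.
import random

def detect_p300(eeg):  return "blue box"          # human picks the goal object
def detect_ssvep(eeg): return random.choice(["turn left", "cross barrier", "turn right"])

class Planner:
    def plan(self, pose, goal):  return ["node1", "node2", goal]
    def can_avoid(self, node):   return random.random() > 0.3

class Robot:
    pose = "start"
    def obstacle_ahead(self):    return random.random() < 0.2
    def move_to(self, node):     print("moving to", node)
    def execute(self, command):  print("SSVEP override:", command)

def bri_control_loop(robot, eeg, planner):
    goal = detect_p300(eeg)                    # P300: where to go
    for node in planner.plan(robot.pose, goal):
        if robot.obstacle_ahead() and not planner.can_avoid(node):
            robot.execute(detect_ssvep(eeg))   # human resolves the obstacle
        else:
            robot.move_to(node)                # machine handles the rest

bri_control_loop(Robot(), eeg=None, planner=Planner())
```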
Construction of BRI System
Flow Chart of The BRI System
Experiment
 The P300 experimental protocol displays a 3 × 3 stimulus matrix on the UI.
 Nine target objects (an orange desk, a green washbasin, a purple cushion, a red stool, a blue plastic stool, a blue box, a yellow box, and spare white and black targets) were presented, as shown in Fig. (a).
 One display cycle is 1.8 s; each offline trial repeats the cycle 6 times and each online trial 3 times, as shown in Fig. (b).
 The turning-left, crossing-the-barrier, and turning-right behaviors for controlling the robot via SSVEP are presented at the bottom of the UI.
Experiment
 32-channel EEG cap.
 Ten subjects.
 P300 experiment
 6 rounds for each classification model.
 In each round, the subject focuses on target images 1 to 9, each flashing 12 times.
 SSVEP experiment
 The subject gazes at targets 1 to 9 for 5 s each.
 30 rounds in total.
 P300 and SSVEP recognition algorithm (a minimal sketch follows this list)
 Data pre-processing: downsampling from 1000 Hz to 20 Hz.
 Feature vector extraction: discard extra non-target features.
 Classification: FLDA (Fisher's linear discriminant analysis).
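A minimal sketch of these three steps on toy epochs; the epoch shapes and labels are assumptions, and sklearn's LinearDiscriminantAnalysis stands in for the FLDA classifier:

```python
# Minimal sketch of the recognition steps listed above; epoch shapes
# and the toy labels are assumptions, not the paper's exact setup.
import numpy as np
from scipy.signal import decimate
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
epochs = rng.standard_normal((108, 32, 800))   # trials x channels x samples @ 1000 Hz
labels = rng.integers(0, 2, size=108)          # 1 = target flash, 0 = non-target

# 1) Pre-processing: downsample 1000 Hz -> 20 Hz (factor 50, in two stages)
x = decimate(epochs, 10, axis=-1)
x = decimate(x, 5, axis=-1)                    # now 16 samples per channel

# 2) Feature vectors: concatenate channels into one vector per trial
X = x.reshape(len(x), -1)

# 3) Classification: Fisher LDA
flda = LinearDiscriminantAnalysis()
flda.fit(X[:90], labels[:90])
print("held-out accuracy:", flda.score(X[90:], labels[90:]))
```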
Dataset for Object Extraction
 Before the experiment, all object attributes need to be stored in the dataset.
 Objects are taken from a cluttered environment for best performance.
 Using fuzzification and defuzzification, the IFCE is trained to detect an object from any angle (a generic fuzzy-logic sketch follows).
 When the subject selects an object with P300, the object parameters go to the IFCE to complete the process.
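A generic toy example of fuzzification and defuzzification on a single hue value; this only illustrates the general technique, not the paper's IFCE itself, and the color classes are made up:

```python
# Toy illustration of fuzzification/defuzzification on a hue value;
# a generic fuzzy-logic sketch, not the paper's IFCE implementation.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

hue = 28.0                                   # e.g. hue of a pixel (degrees)
classes = {"red": (0, 10, 25), "orange": (15, 30, 45), "yellow": (40, 55, 70)}

# Fuzzification: degree of membership in each color class
memberships = {name: tri(hue, *abc) for name, abc in classes.items()}

# Defuzzification: centroid of the class peaks, weighted by membership
num = sum(m * classes[n][1] for n, m in memberships.items())
den = sum(memberships.values()) or 1.0
print(memberships, "-> crisp hue estimate:", num / den)
```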
FLDA classifier accuracy
 6-fold cross-validation is used for the FLDA classifier, as shown in Fig. (a) and (b).
 Five folds are used for training and the remaining one for testing (Acc1 ~ Acc6).
 The model from the best-accuracy fold is chosen for each subject (see the sketch after the figure captions).
 For example, in Fig. (a), subject S6 uses the Acc1 fold's classification model, as its accuracy (55.56%) is the highest among the folds.
(a) P300 OFFLINE MODEL (b) SSVEP OFFLINE MODEL
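A minimal sketch of this per-subject best-fold selection on toy data, again with sklearn's LDA standing in for FLDA; the feature and class counts are assumptions:

```python
# Sketch of the per-subject best-fold selection described above
# (toy data; the real features come from the P300/SSVEP epochs).
import numpy as np
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.standard_normal((54, 20))            # 54 trials x 20 features (assumed)
y = rng.integers(0, 9, size=54)              # 9 target classes

best_acc, best_model = -1.0, None
for k, (train, test) in enumerate(KFold(n_splits=6).split(X), start=1):
    model = LinearDiscriminantAnalysis().fit(X[train], y[train])
    acc = model.score(X[test], y[test])      # Acc1 ... Acc6
    if acc > best_acc:                       # keep the highest-accuracy fold
        best_acc, best_model = acc, model
    print(f"Acc{k}: {acc:.2%}")
print("selected fold accuracy:", f"{best_acc:.2%}")
```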
Automatic Object Approaching
 NAO is guided to the destination by a central vision tracking strategy (CVTS) that uses the IFCE result.
 The principal purpose of CVTS is to keep NAO moving toward the object while keeping the object in the middle of its vision, as shown in Fig. (a) and sketched below.
(a)
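A toy sketch of the centering idea behind CVTS; the image width, gain, and the suggestion of feeding the output to NAO's walk command are stand-in assumptions, not the paper's controller:

```python
# Toy sketch of the central-vision-tracking idea: turn so the detected
# object's centroid stays near the image centre while walking forward.
IMAGE_WIDTH = 320          # px, assumed camera resolution
K_TURN = 0.002             # rad per px of horizontal offset (assumed gain)

def cvts_step(object_cx):
    """One control step given the object centroid's x-coordinate (px)."""
    offset = object_cx - IMAGE_WIDTH / 2     # > 0: object is to the right
    turn = -K_TURN * offset                  # rotate to re-centre the object
    forward = 0.05                           # constant step forward (m)
    return forward, turn                     # e.g. feed to a walk command

print(cvts_step(object_cx=210))              # object right of centre -> turn right
```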
Experiment Environment
 The real experiment environment is shown in Fig. (a).
 Fig. (b) maps the plan of the experiment environment.
Result
 By combining the subject's high-level supervision and decisions with the robot's machine-learning-based decision making, this system can control the robot efficiently.
 Where a traditional BRI system takes 15.6 commands on average, the proposed system can finish the task with only 4 commands, as shown in Fig. (a).
 The combination of human and machine intelligence keeps the response time at about 1-2 s.
(a) RESULTS OF TEN SUBJECTS FOR TASK 1