
Full Body Spatial Vibrotactile Brain Computer Interface Paradigm

T. Kodama, “Full Body Spatial Vibrotactile Brain Computer Interface Paradigm,” Master’s Thesis Defense, Department of Computer Science - Graduate School of Systems and Information Engineering, University of Tsukuba, Jan. 2017.


1. 1. Full Body Spatial Vibrotactile Brain Computer Interface Paradigm. Takumi Kodama, Department of Computer Science, Graduate School of Systems and Information Engineering. Supervisor: Shoji Makino
2. 2. Introduction - What is a BCI? ● Brain Computer Interface (BCI) ○ Detects user intentions using ONLY brain responses
3. 3. Introduction - ALS Patients ● Amyotrophic lateral sclerosis (ALS) patients ○ Have difficulty moving their muscles voluntarily ○ BCI could serve as a communication tool for them
4. 4. Introduction - Research Approach ● Tactile (touch-based) P300-based BCI paradigm ○ Predicts user intentions by decoding P300 responses ○ P300 responses are evoked by external (tactile) stimuli ○ Procedure: 1) stimulate the sense of touch, 2) classify the brain response (target vs. non-target P300), 3) predict the user's choice
5. 5. Introduction - Previous Research ● Previous tactile P300-based BCI paradigms ○ Chest tactile BCI (stimulus positions around the chest) [1] ○ Tactile and auditory BCI (head positions) [2] [1] H. Mori, S. Makino and T. M. Rutkowski, "Multi-command chest tactile brain computer interface for small vehicle robot navigation," 2013. [2] H. Mori et al., "Multi-command tactile and auditory brain computer interface based on head position stimulation," 2013.
6. 6. Introduction - Previous Research ● Previous tactile P300-based BCI paradigms ○ Chest tactile BCI (stimulus positions around the chest) [1] ○ Tactile and auditory BCI (head positions) [2] ● Problems 1. Discrimination of the individual stimulus patterns 2. Application to actual ALS patients [1] H. Mori, S. Makino and T. M. Rutkowski, "Multi-command chest tactile brain computer interface for small vehicle robot navigation," 2013. [2] H. Mori et al., "Multi-command tactile and auditory brain computer interface based on head position stimulation," 2013.
7. 7. Introduction - Research Purpose 1. Propose a new touch-based BCI paradigm intended for communication with ALS patients 2. Confirm the effectiveness of the modality by improving stimulus pattern classification accuracies
8. 8. Method - Our Approach ● Full-body Tactile P300-based BCI (fbBCI) ○ Applies six vibrotactile stimulus patterns to the user's back ○ Users can take part in the experiment while lying down
9. 9. Method - Four fbBCI experiments Ⅰ. Psychophysical (pre-experiment, without ERP calculation) Ⅱ. EEG online (online experiment) Ⅲ. SWLDA&SVM (offline experiment, training one by one) Ⅳ. CNN (offline experiment, training altogether)
10. 10. Method - Four fbBCI experiments Ⅰ. Psychophysical (pre-experiment, without ERP calculation) Ⅱ. EEG online (online experiment) Ⅲ. SWLDA&SVM (offline experiment, training one by one) Ⅳ. CNN (offline experiment, training altogether)
11. 11. Experiment Ⅰ - Psychophysical ● Main objective ○ To evaluate the feasibility of the fbBCI stimulus patterns ● How? ○ Users selected the target stimulus by pressing a button ○ No EEG electrodes were attached to the user's scalp (setup: button press, no EEG cap, exciters, targets presented on screen)
12. 12. Experiment Ⅰ - Psychophysical ● Experimental conditions:
Number of users (mean age): 10 (21.9 years old)
Stimulus frequency of exciters: 40 Hz
Vibration stimulus length: 100 ms
Inter-stimulus interval (ISI): 400-430 ms
Number of trials: 1 trial
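To make these timing parameters concrete, here is a minimal NumPy sketch that builds one 40 Hz, 100 ms vibration burst and jitters the ISI within the 400-430 ms range (the 8 kHz drive-signal rate is an assumption, not from the thesis):

```python
import numpy as np

fs_drive = 8000                          # assumed drive-signal sample rate [Hz]
f_vib, burst_ms = 40, 100                # stimulus frequency and length (slide 12)
rng = np.random.default_rng()

t = np.arange(int(fs_drive * burst_ms / 1000)) / fs_drive
burst = np.sin(2 * np.pi * f_vib * t)    # one 100 ms, 40 Hz vibrotactile burst

isi_s = rng.uniform(0.400, 0.430)        # inter-stimulus interval, jittered
```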
13. 13. Result Ⅰ - Psychophysical ● The correct rate exceeded 95 % for each stimulus pattern
14. 14. Method - Four fbBCI experiments Ⅰ. Psychophysical (pre-experiment, without ERP calculation) Ⅱ. EEG online (online experiment) Ⅲ. SWLDA&SVM (offline experiment, training one by one) Ⅳ. CNN (offline experiment, training altogether)
15. 15. Experiment Ⅱ - EEG online ● Main objective ○ To measure the fbBCI classification accuracies ● How? ○ Target stimuli were selected by classifying ERP intervals ○ Are P300 responses present in the ERPs? (setup: EEG cap, EEG amplifier, exciters, targets and results presented on screen)
16. 16. Experiment Ⅱ - EEG online ● Experimental conditions:
Number of users (mean age): 10 (21.9 years old)
Stimulus frequency of exciters: 40 Hz
Vibration stimulus length: 100 ms
Inter-stimulus interval (ISI): 400-430 ms
Number of trials: 1 training + 5 tests
EEG sampling rate: 512 Hz
Electrode channels: Cz, Pz, C3, C4, P3, P4, CP5, CP6
Classification algorithm: SWLDA with BCI2000
17. 17. Result Ⅱ - EEG online ● Grand mean ERP intervals for each electrode channel (*gray-shaded area: significant difference, p < 0.01, between targets and non-targets)
18. 18. Result Ⅱ - EEG online ● Classification accuracy with SWLDA:
User 1: 23.33 %   User 2: 50.00 %   User 3: 43.33 %   User 4: 66.67 %   User 5: 66.67 %
User 6: 53.33 %   User 7: 30.00 %   User 8: 33.33 %   User 9: 93.33 %   User 10: 76.67 %
Average: 53.67 %
19. 19. Method - Four fbBCI experiments Ⅰ. Psychophysical (pre-experiment, without ERP calculation) Ⅱ. EEG online (online experiment) Ⅲ. SWLDA&SVM (offline experiment, training one by one) Ⅳ. CNN (offline experiment, training altogether)
20. 20. Exp. Ⅲ - Accuracy Refinement ● Main objective ○ Improve the classification accuracies ● How? ○ Compare accuracies across ■ down-sampling factors (nd = 1, 4 and 16) ■ epoch averaging (ne = 1, 5 and 10) ■ machine learning algorithms (SWLDA and SVM)
21. 21. Result Ⅲ - Accuracy Refinement ● SWLDA classification accuracies ○ BEST: 57.48 % (nd = 4, ne = 1) (figure: accuracy vs. signal decimation nd)
22. 22. Result Ⅲ - Accuracy Refinement ● Linear SVM classification accuracies ○ BEST: 58.50 % (nd = 16, ne = 10) (figure: accuracy vs. signal decimation nd)
23. 23. Result Ⅲ - Accuracy Refinement ● Non-linear SVM classification accuracies ○ BEST: 59.83 % (nd = 4, ne = 1) (figure: accuracy vs. signal decimation nd)
24. 24. Method - Four fbBCI experiments Ⅰ. Psychophysical (pre-experiment, without ERP calculation) Ⅱ. EEG online (online experiment) Ⅲ. SWLDA&SVM (offline experiment, training one by one) Ⅳ. CNN (offline experiment, training altogether)
25. 25. Experiment Ⅳ - CNN application ● Main objective ○ Further improve the classification accuracies ○ Achieve training-free ERP classification for a new user ● How? ○ Feature vectors were transformed into squared input volume matrices (60 × 60) ⇒ next page ○ Each user was evaluated with a classifier model trained on the other nine participants (here: user 1 classified by a model trained on users 2-10)
26. 26. Experiment Ⅳ - CNN application ● Main objective ○ Further improve the classification accuracies ○ Achieve training-free ERP classification for a new user ● How? ○ Feature vectors were transformed into squared input volume matrices (60 × 60) ⇒ next page ○ Each user was evaluated with a classifier model trained on the other nine participants (here: user 10 classified by a model trained on users 1-9); a code sketch of this scheme follows
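The cross-user scheme on these two slides is leave-one-user-out validation: train on nine users, test on the tenth. A minimal sketch, with hypothetical train/test callables standing in for the actual pipeline:

```python
def leave_one_user_out(users, train_fn, test_fn):
    """Accuracy per held-out user; train_fn and test_fn are hypothetical stand-ins."""
    accuracies = {}
    for held_out in users:
        others = [u for u in users if u != held_out]
        model = train_fn(others)              # classifier trained on nine users
        accuracies[held_out] = test_fn(model, held_out)
    return accuracies

# e.g. leave_one_user_out(range(1, 11), train_fn, test_fn)
```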
27. 27. Experiment Ⅳ - CNN application ● Transforming feature vectors into input volumes 1. The ERP interval elements were arranged into a 20 × 20 square matrix 2. The matrices generated for each electrode channel, plus the mean over all electrodes, were concatenated into a 3 × 3 grid (sketch below)
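A NumPy sketch of this transformation (channel count and ERP length follow slides 16 and 63; the tile ordering within the 3 × 3 grid is an assumption):

```python
import numpy as np

def to_input_volume(erp):
    """erp: (8, 410) array of 8 EEG channels x 800 ms at 512 Hz."""
    erp = erp[:, 10:]                               # drop first 10 samples -> 400
    tiles = [ch.reshape(20, 20) for ch in erp]      # one 20x20 matrix per channel
    tiles.append(erp.mean(axis=0).reshape(20, 20))  # 9th tile: all-channel mean
    rows = [np.hstack(tiles[3 * i:3 * i + 3]) for i in range(3)]
    return np.vstack(rows)                          # 3x3 grid -> (60, 60) volume

assert to_input_volume(np.random.randn(8, 410)).shape == (60, 60)
```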
28. 28. Experiment Ⅳ - CNN application ● Overview of the CNN architecture in fbBCI ○ CONV > POOL > CONV > POOL (LeNet-style), followed by an MLP ○ (Ix, Iy) … size of the input volume ○ (Ax, Ay) … size of the activation maps
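A PyTorch sketch of such a LeNet-style stack; the 5 × 5 kernels and the 20/50 filter counts are assumptions, chosen so the flattened size matches the 7200-unit MLP input reported on slide 64:

```python
import torch.nn as nn

class FbBCINet(nn.Module):
    """CONV > POOL > CONV > POOL followed by a one-hidden-layer MLP (sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, 5), nn.MaxPool2d(2),   # 60x60 -> 56x56 -> 28x28
            nn.Conv2d(20, 50, 5), nn.MaxPool2d(2),  # 28x28 -> 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                      # 50 * 12 * 12 = 7200 features
            nn.Linear(7200, 500), nn.Tanh(),   # hidden layer of 500 units (slide 64)
            nn.Linear(500, 2),                 # target vs. non-target
        )

    def forward(self, x):                      # x: (batch, 1, 60, 60)
        return self.classifier(self.features(x))
```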
29. 29. Result Ⅳ - CNN application ● Classification accuracy (non-averaging, ne = 1 / with SMA):
User 1: 97.22 % / 100 %   User 2: 30.00 % / 100 %   User 3: 72.22 % / 100 %   User 4: 86.11 % / 100 %   User 5: 94.44 % / 100 %
User 6: 88.89 % / 100 %   User 7: 86.11 % / 100 %   User 8: 100.00 % / 100 %   User 9: 100.00 % / 100 %   User 10: 41.67 % / 100 %
Average: 79.66 % / 100 %
30. 30. Conclusions ● The validity of the fbBCI paradigm was confirmed ○ Ⅰ. Stimulus pattern correct rate > 95 % with manual (button-press) responses ○ Ⅱ. Classification accuracy of 53.67 % with SWLDA ○ Ⅲ. 59.83 % with non-linear SVM (nd = 4, ne = 1) ○ Ⅳ. 100 % with a CNN classifier model trained across all users ● To improve QoL for ALS patients with fbBCI in the future ○ Conduct experiments under practical conditions ○ Implement the offline methods in online ERP classification environments ● We hope this series of experimental results will contribute to the development of tactile P300-based BCI paradigms
31. 31. Journal Article (Lead; 1) 1. T. Kodama, K. Shimizu, S. Makino and T.M. Rutkowski, "Comparison of P300-based Brain-computer Interface Classification Accuracy Refinement Methods using Full-body Tactile Paradigm," Journal of Bionic Engineering, (invited; submitting), 2017.
32. 32. Book Chapter (Co; 1) 1. T.M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica and A. Cichocki, "Brain-robot Interfaces Using Spatial Tactile BCI Paradigms - Symbiotic Brain-robot Applications," in Symbiotic Interaction (vol. 9359 of Lecture Notes in Computer Science), B. Blankertz, G. Jacucci, L. Gamberini, A. Spagnolli and J. Freeman, Eds., Springer International Publishing, pp. 132-137, Oct. 2015. doi: 10.1007/978-3-319-24917-9_14
  33. 33. 1. T. Kodama, S. Makino and T.M. Rutkowski, "Spatial Tactile Brain-Computer Interface Paradigm Applying Vibration Stimuli to Large Areas of User’s Back," in Proc. the 6th International Brain-Computer Interface Conference, Graz University of Technology Publishing House, pp. Article ID: 032-1-4, Sep. 2014. doi:10.3217/978-3-85125-378-8-32 2. T. Kodama, S. Makino and T.M. Rutkowski, "Spatial Tactile Brain-Computer Interface by Applying Vibration to User’s Shoulders and Waist," in Proc. the 10th AEARU Workshop on Computer Science and Web Technologies (CSWT-2015), University of Tsukuba, pp. 41-42, Feb. 2015. Best Poster Award 3. T. Kodama, K. Shimizu and T.M. Rutkowski, "Full Body Spatial Tactile BCI for Direct Brain-robot Control," in Proc. the Sixth International Brain-Computer Interface Meeting: BCI Past, Present, and Future, Verlag der Technischen Universitaet Graz, pp. 68, May 2016. doi:10.3217/978-3-85125-467-9-68 Student Travel Award Conference Papers (Lead; 1) 33
34. 34. Conference Papers (Lead; 2) 4. T. Kodama, S. Makino and T.M. Rutkowski, "Toward a QoL improvement of ALS patients: Development of the Full-body P300-based Tactile Brain-Computer Interface," in Proc. the 2016 AEARU Young Researchers International Conference (AEARU YRIC-2016), University of Tsukuba, pp. 5-8, Sep. 2016. 5. T. Kodama, K. Shimizu, S. Makino and T.M. Rutkowski, "Full-body Tactile P300-based Brain-computer Interface Accuracy Refinement," in Proc. the International Conference on Bio-engineering for Smart Technologies (BioSMART 2016), IEEE Press, pp. 20-23, Dec. 2016. (Extended version invited to the Journal of Bionic Engineering) Best Paper Award Nomination 6. T. Kodama, S. Makino and T.M. Rutkowski, "Tactile Brain-Computer Interface Using Classification of P300 Responses Evoked by Full Body Spatial Vibrotactile Stimuli," in Proc. the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC 2016), IEEE Press, pp. Article ID: 176, Dec. 2016.
35. 35. Conference Papers (Lead; 3) 7. T. Kodama and S. Makino, "Analysis of the brain activated distributions in response to full-body spatial vibrotactile stimuli using a tactile P300-based BCI paradigm," in Proc. the IEEE International Conference on Biomedical and Health Informatics 2017 (BHI-2017), IEEE Engineering in Medicine and Biology Society, pp. (accepted, in press), Feb. 2017. 8. T. Kodama and S. Makino, "Convolutional Neural Network Architecture and Input Volume Design for Analyzing Somatosensory ERP Signals Evoked by a Tactile P300-based Brain-Computer Interface," in Proc. the 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2017), IEEE Engineering in Medicine and Biology Society, pp. (scheduled), Jul. 2017.
36. 36. Conference Papers (Co; 1) 1. T.M. Rutkowski, H. Mori, T. Kodama and H. Shinoda, "Airborne Ultrasonic Tactile Display Brain-computer Interface - A Small Robotic Arm Online Control Study," in Proc. the 10th AEARU Workshop on Computer Science and Web Technologies (CSWT-2015), University of Tsukuba, pp. 7-8, Feb. 2015. 2. K. Shimizu, T. Kodama, P. Jurica, A. Cichocki and T.M. Rutkowski, "Tactile BCI Paradigms for Robots' Control," in Proc. the 6th Conference on Systems Neuroscience and Rehabilitation (SNR 2015), National Rehabilitation Center for Persons with Disabilities, pp. 28, Mar. 2015. 3. T.M. Rutkowski, K. Shimizu, T. Kodama, P. Jurica and A. Cichocki and H. Shinoda, "Controlling a Robot with Tactile Brain-computer Interfaces," in Proc. the 38th Annual Meeting of the Japan Neuroscience Society (Neuroscience 2015), Japan Neuroscience Society, pp. 2P332, July 2015. 4. K. Shimizu, D. Aminaka, T. Kodama, C. Nakaizumi, P. Jurica, A. Cichocki, S. Makino and T.M. Rutkowski, "Brain-robot Interfaces Using Spatial Tactile and Visual BCI Paradigms - Brains Connecting to the Internet of Things Approach," in Proc. the International Conference on Brain Informatics & Health (BIH 2015), Imperial College London, pp. 9-10, Sep. 2015.
  37. 37. Conference Papers (Co; 2) 37 5. K. Shimizu, T. Kodama, S. Makino and T.M. Rutkowski, "Visual Motion Onset Virtual Reality Brain–computer Interface," in Proc. the International Conference on Bio-engineering for Smart Technologies 2016 (BioSMART 2016), IEEE Press, pp. 24-27, Dec. 2016.
  38. 38. 38 Many thanks for your attention!
  39. 39. fbBCI demonstration 39 https://www.youtube.com/watch?v=sn6OEBBKsPQ
  40. 40. Result Ⅰ - Psychophysical ● Response time differences for each stimulus pattern 40
41. 41. Experiment Ⅱ - EEG online (slides 41-46) ● How was the P300-based BCI classifier trained? ○ In each of the six sessions, every stimulus pattern was presented 10 times in random order (60 stimuli per session, 360 in total) ○ Epochs of the session's target pattern were labeled ω1 (Target), all others ω2 (Non-Target) ○ The training set accumulated across sessions 1/6 to 6/6: 10 target / 50 non-target epochs after session 1, growing to 60 target / 300 non-target epochs after session 6
47. 47. Experiment Ⅱ - EEG online ● How is the user's intention predicted with the trained classifier? ○ Correct example (target 1, session 1/6): the classifier's target-class scores for patterns 1-6 were 72.6 %, 24.4 %, 56.3 %, 44.1 %, 62.9 % and 39.8 %, so pattern 1, the true target, was selected
48. 48. Experiment Ⅱ - EEG online ● How is the user's intention predicted with the trained classifier? ○ Wrong example (target 6, session 6/6): the scores were 35.1 %, 48.1 %, 69.2 %, 54.3 %, 50.9 % and 64.3 %, so pattern 3 was selected instead of the true target 6
49. 49. Experiment Ⅱ - EEG online ● Calculating the stimulus pattern classification accuracy ○ How many of a user's sessions were classified with the correct target? ○ Example (1 trial, sessions 1/6-6/6 with targets 1-6): Correct, Correct, Wrong, Correct, Correct, Wrong ⇒ classification accuracy 4/6 = 0.667 ⇒ 66.7 %
50. 50. Experiment Ⅱ - EEG online ● Event related potential (ERP) interval ○ Captures the 800 ms following each vibrotactile stimulus onset ○ Converted into a feature vector xi = (p1, …, pL) of channel potentials, with L = ceil((fs / nd) · tERP), where fs [Hz] is the EEG sampling rate and tERP [sec] the interval length ○ e.g. fs = 512 Hz, nd = 4, tERP = 0.8 s ⇒ L = ceil((512 / 4) · 0.8) = 103
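As a quick check of this formula in code:

```python
import math

def erp_length(fs, nd, t_erp=0.8):
    """Feature-vector length per channel: L = ceil((fs / nd) * t_erp)."""
    return math.ceil((fs / nd) * t_erp)

assert erp_length(512, 4) == 103    # the example on this slide
assert erp_length(512, 1) == 410    # full-rate length used for the CNN input
```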
  51. 51. Result Ⅱ - EEG online 51 ● P300 peaks were shifted to later latencies from #1 to #6 #1 Left arm #2 Right arm #3 Shoulder #4 Waist #5 Left leg #6 Right leg
52. 52. Result Ⅱ - EEG online ● Time series of the Target vs. Non-Target AUC scores
  53. 53. Result Ⅱ - EEG online 53 ● Information Transfer Rate (ITR) ○ Averaged score: 1.31 bit/minute
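The transcript does not state how the ITR was computed; the standard Wolpaw definition is a reasonable assumption, sketched below (the 23.5 s per selection is hypothetical, back-solved purely for illustration):

```python
import math

def wolpaw_itr(p, n, t_sel_s):
    """Wolpaw ITR in bit/min: accuracy p, n commands, t_sel_s seconds per selection."""
    if p <= 1.0 / n:
        return 0.0
    bits = (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / t_sel_s

# grand-mean accuracy 0.5367 over 6 commands; a hypothetical ~23.5 s selection
# time would reproduce the reported ~1.31 bit/minute
print(round(wolpaw_itr(0.5367, 6, 23.5), 2))   # -> 1.31
```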
  54. 54. Result Ⅱ - EEG online 54 ● Grand mean fbBCI classification accuracy: 53.67 %
  55. 55. Exp. Ⅲ - Accuracy Refinement ● Architecture diagram of the off-line ERP classification 55
56. 56. Exp. Ⅲ - Accuracy Refinement ● Down-sampling (nd) ○ ERPs were decimated by 2 (256 Hz), 4 (128 Hz), 8 (64 Hz), 16 (32 Hz), or kept intact (512 Hz) ○ Purpose: reduce the feature vector length L (figure: example channel waveforms at nd = 4, 128 Hz and nd = 16, 32 Hz)
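A SciPy sketch of the decimation step (SciPy's decimate recommends splitting factors above 13, so nd = 16 is applied in two stages here):

```python
import numpy as np
from scipy.signal import decimate

erp = np.random.randn(8, 410)            # stand-in: 8 channels, 800 ms at 512 Hz

erp_nd4 = decimate(erp, 4, axis=1)       # nd = 4  -> 128 Hz, L = 103
erp_nd16 = decimate(erp_nd4, 4, axis=1)  # nd = 16 -> 32 Hz (4 x 4), L = 26
```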
57. 57. Exp. Ⅲ - Accuracy Refinement ● Epoch averaging (ne) ○ ERPs were averaged over 2, 5 or 10 epochs, or left unaveraged ○ Purpose: cancel background EEG noise (figure: example channel waveforms at ne = 1 and ne = 10)
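A sketch of the averaging step, assuming single-trial epochs stacked on the first axis:

```python
import numpy as np

def average_epochs(epochs, ne):
    """Average consecutive groups of ne ERPs; ne = 1 leaves the data unchanged."""
    n = (epochs.shape[0] // ne) * ne                 # drop an incomplete last group
    return epochs[:n].reshape(-1, ne, *epochs.shape[1:]).mean(axis=1)

averaged = average_epochs(np.random.randn(60, 8, 103), ne=10)   # -> (6, 8, 103)
```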
58. 58. Exp. Ⅲ - Accuracy Refinement ● Concatenating all feature vectors ○ The channel vectors x1, …, x8 (length L each, Ch1-Ch8) are concatenated into a single vector V of length Lconcat = 8 · L ○ e.g. nd = 4 (128 Hz): L = ceil(128 · 0.8) = 103, so Lconcat = 8 · 103 = 824
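In NumPy the concatenation is a single reshape, assuming the channels are stacked row-wise in Ch1..Ch8 order:

```python
import numpy as np

erp = np.random.randn(8, 103)   # 8 channel vectors of length L = 103 (nd = 4)
V = erp.reshape(-1)             # Ch1..Ch8 laid end to end: Lconcat = 8 * 103 = 824
```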
59. 59. Exp. Ⅲ - Accuracy Refinement ● Training the classifier (2-class) ○ Target class: NTAR = 60 / ne feature vectors X1, …, XNTAR of length Lconcat ○ Non-target class: epochs were randomly subsampled so that NNTAR = 60 / ne, matching the target count
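A sketch of the class balancing with stand-in feature matrices (the exact subsampling rule on the slide is not fully legible, so uniform random subsampling to the target count is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.random.randn(60, 824)       # NTAR = 60 / ne target vectors (ne = 1)
nontarget = np.random.randn(300, 824)   # all available non-target vectors

keep = rng.choice(len(nontarget), size=len(target), replace=False)
X = np.vstack([target, nontarget[keep]])                  # balanced design matrix
y = np.r_[np.ones(len(target)), np.zeros(len(target))]    # 1 = target, 0 = non-target
```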
60. 60. Exp. Ⅲ - Accuracy Refinement ● Evaluation with the trained classifier ○ The same nd and ne were applied to the test data ○ NERP = 10 / ne test epochs per pattern, each classified as Target or Non-Target
61. 61. Exp. Ⅲ - Accuracy Refinement ● Machine learning algorithms ○ SWLDA ○ Linear SVM ○ Non-linear SVM with a Gaussian (RBF) kernel k(x, x′) = exp(−γ‖x − x′‖²), where γ > 0, C = 1
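The two SVM variants map directly onto scikit-learn, sketched below with the X, y from the balanced-training sketch above (SWLDA has no stock scikit-learn implementation and is omitted; the γ value is arbitrary):

```python
from sklearn.svm import SVC

linear_svm = SVC(kernel="linear", C=1.0)
gaussian_svm = SVC(kernel="rbf", C=1.0, gamma=1e-3)  # k(x,x') = exp(-gamma*||x-x'||^2)

gaussian_svm.fit(X, y)           # in the thesis setting, fit on training epochs only
print(gaussian_svm.score(X, y))  # swap in held-out test epochs for real evaluation
```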
  62. 62. Result Ⅲ - Accuracy Refinement 62
63. 63. Experiment Ⅳ - CNN application ● Transform feature vectors to input volumes ○ At full rate, fs = 512 Hz and tERP = 0.8 s give L = ceil(512 · 0.8) = 410 ○ 1. The feature vector length L was reduced from 410 to 400 (the first 10 ERP elements were removed) so that squared matrices could be formed for filter training
64. 64. Experiment Ⅳ - CNN application ● One-hidden-layer multilayer perceptron ○ Input: 7200 > Hidden: 500 > Output: 2 units
