Improved Target Recognition Response using
Collaborative Brain-Computer Interfaces
Kyongsik Yun
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA, USA
yunks@caltech.edu
Adrian Stoica
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA, USA
adrian.stoica@jpl.nasa.gov
Abstract—The advantage of using collaborative brain-
computer interfaces in improving human response in visual
target recognition tests was investigated. We used a public EEG
dataset created from recordings made using a 32-channel EEG
system by Delorme et al. (2004) to compare the classification
accuracy using one, two, and three EEG signal sets from
different subjects. Fourteen participants performed a go/no-go
categorization task on briefly presented images, with target
images consisting of natural photos of animals and distractor
images of photos that did not contain animals. First,
we compared the EEG responses evoked by the target and
distractor images, and it was determined that the P300 (i.e., a
positive deflection in voltage with a latency of 300 ms) response
evoked by the target images was significantly higher than that
evoked by the distractor images. Second, we calculated and
compared the classification accuracy using one, two, and three
EEG signal sets. We used a linear support vector machine with 5-
fold cross-validation. Compared to the results obtained from
single-brain prediction (79.4%), the overall accuracy of two- and
three-brain prediction was higher (89.3% and 88.7%,
respectively). Furthermore, the time required to achieve 90%
accuracy was significantly less when using EEGs from two and
three brains (100 ms) than when using one brain (230 ms). These
results provide evidence to support the hypothesis that one can
achieve higher levels of perceptual and cognitive performance by
leveraging the power of multiple brains through collaborative
brain-computer interfaces.
Keywords—brain-computer interfaces, collaborative brain-
computer interfaces, multi-brain, EEG, collective intelligence,
visual categorization
I. INTRODUCTION
Collaborative brain-computer interfaces (BCIs) aim to
improve human performance by integrating the neural data
from two or more brains with the help of advanced signal
analytics [1-6]. One inspiration for the collaborative aspect of
BCIs comes from human social behavior, where cooperation is
known to improve decision making, visual perception, and social
cognition [7-11]. Over the past decade, simultaneous EEG
recording from two or more brains (i.e., EEG hyperscanning) has
been widely applied [10, 12-16]. These technological advances
have lowered the barrier for collaborative BCI research.
Previous collaborative BCI studies extracted features such as the
P300, principal components, event-related potentials (ERPs), and
spectral power from the EEG signals to obtain classification
results [3, 7]. Such preprocessing can introduce unnecessary bias
into the data and decrease computational efficiency. In this
study, we used raw EEG data to compute the classification
accuracy in a simple visual categorization task. Moreover, the
temporal dynamics of classification accuracy in collaborative
BCI settings have scarcely been studied. We applied a temporal
analysis to compute the time required to achieve accurate results
and to compare the cognitive performance of single and multiple
brains.
The data used in this study were taken from a public database.
In Section II, we describe the experimental trials and the
associated EEG dataset obtained from the participants. Section
III describes the analysis performed. We applied simple
classification mechanisms because our objective was to
illustrate the improvement in classification achieved by using
data from multiple brains, and not to increase the classification
rates per se; hence, optimization of classifiers was beyond the
scope of this paper. Section IV presents the results, illustrating
the improvement in accuracy and time response obtained by
using data from multiple brains, when compared to that
obtained using data from a single brain. Section V presents the
conclusion.
II. EXPERIMENTS AND DATASET
A. Experiments and EEG Dataset
We used an EEG dataset by Delorme et al. [17]. The EEGs
were recorded with a 32-channel Neuroscan device (Cz
referenced, 1000 Hz sampling frequency), while 14
participants, 7 females and 7 males, performed a go/no-go
visual categorization task on natural photos. The photos were
presented very briefly (20 ms) to remove any potential eye
movement artifacts. The participants performed 13 and 12
series of trials on the first and second days of the experiment,
respectively. One series consisted of 50 target images (animal)
and 50 non-target images (non-animal); an example of such a
series is illustrated in Figure 1. Participants were given 1000
ms to respond. The timing of the image presentation was 2000 ms
plus or minus a random delay of 200 ms to remove any potential
cognitive expectation effect (random jittering).
2016 IEEE International Conference on Systems, Man, and Cybernetics • SMC 2016 | October 9-12, 2016 • Budapest, Hungary
978-1-5090-1897-0/16/$31.00 ©2016 IEEE SMC_2016 002220
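The jittered presentation schedule can be sketched as follows (a minimal illustration; the uniform distribution of the jitter is our assumption, not stated in the dataset description):

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 100                              # one series: 50 targets + 50 distractors
base_soa = 2000.0                           # nominal 2000 ms between image onsets
jitter = rng.uniform(-200, 200, n_images)   # assumed uniform +/-200 ms jitter
onsets = np.cumsum(base_soa + jitter)       # onset time of each image, in ms

# Every inter-stimulus gap stays within 2000 +/- 200 ms,
# so the participant cannot predict the exact moment of onset.
assert np.all(np.diff(onsets) >= 1800) and np.all(np.diff(onsets) <= 2200)
```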
Fig. 1. Examples of target (animal) and distractor (non-animal)
images used in the go/no-go categorization task. Participants
were instructed to press the go button as soon as they
recognized a target image.
III. ANALYSIS
A. Event related potential (ERP) analysis
The EEG signals were averaged separately for each channel and
condition, time-locked to stimulus onset (defined as 0 ms). The
two conditions were the target (animal) and distractor (non-
animal) images. For the statistics, we applied a paired t-test
with false discovery rate (FDR) correction for multiple
comparisons (p < 0.05).
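The per-timepoint statistics can be sketched in Python on synthetic data (a minimal illustration, not the authors' code; the epoch shapes, the channel index, and the use of the Benjamini-Hochberg procedure for FDR are our assumptions):

```python
import numpy as np
from scipy import stats

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR correction: boolean mask of significant tests."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = p.size
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    sig = np.zeros(m, dtype=bool)
    if below.any():
        sig[order[: np.nonzero(below)[0].max() + 1]] = True
    return sig

# Synthetic epochs: 50 trials x 32 channels x 1000 samples (1 s at 1000 Hz)
rng = np.random.default_rng(0)
target = rng.normal(size=(50, 32, 1000))
distractor = rng.normal(size=(50, 32, 1000))
target[:, 20, 300:500] += 1.0   # injected P300-like effect on one channel

# Paired t-test at every time point of that channel, FDR-corrected
t, p = stats.ttest_rel(target[:, 20, :], distractor[:, 20, :], axis=0)
sig = fdr_bh(p, alpha=0.05)
```

With this effect size, most time points inside the injected 300-500 ms window should survive the correction, mirroring the gray significance band of Fig. 3.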
Fig. 2. Experimental procedure of the one-brain and two-brain
EEG analysis.
B. Linear support vector machine
We applied a linear support vector machine (SVM) to classify
the two EEG groups (target and distractor images) (Figure 2).
The 32-channel EEG signals from individual participants were
randomly paired into combined 64-channel signals to compute the
two-brain classification accuracy, and combined across three
participants into 96-channel signals to compute the three-brain
classification accuracy.
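The channel-stacking step can be sketched as follows (shapes are illustrative; the trials x channels x time ordering is our convention):

```python
import numpy as np

def fuse_brains(*epoch_sets):
    """Stack per-subject epochs (trials x channels x time) along the
    channel axis, so trial i combines the simultaneous responses."""
    return np.concatenate(epoch_sets, axis=1)

a = np.zeros((100, 32, 300))   # subject A: 100 trials, 32 channels, 300 ms
b = np.zeros((100, 32, 300))   # subject B
c = np.zeros((100, 32, 300))   # subject C

print(fuse_brains(a, b).shape)     # (100, 64, 300)  -> two brains
print(fuse_brains(a, b, c).shape)  # (100, 96, 300)  -> three brains
```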
An SVM classifies data by finding the best hyperplane that
separates the data points of one group from those of the other
[18, 19]. The best hyperplane is the one whose margin, the slab
parallel to the hyperplane that contains no interior data
points, has the maximum width. Each classification window
contained 50 ms of time-series data (50 ms × 1000 Hz = 50
samples per channel), advanced in increments of 50 ms
(non-overlapping time windows for the SVM calculation). The SVM
was run on the raw
data waveforms. The classification accuracy was calculated
using the MATLAB Statistics and Machine Learning Toolbox. To
prevent overfitting by testing on the training data, we applied
5-fold cross-validation [20]: the data were randomly split into
training and testing sets at a ratio of 80:20, the process was
repeated five times, and the classification accuracies were
averaged.
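The windowed classification pipeline can be sketched with scikit-learn in place of the MATLAB toolbox (synthetic data; the separable-after-100-ms structure is invented purely for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def windowed_accuracy(epochs, labels, fs=1000, win_ms=50):
    """5-fold CV accuracy of a linear SVM in non-overlapping windows
    of raw EEG (epochs: trials x channels x samples)."""
    n_trials, _, n_samp = epochs.shape
    win = int(win_ms * fs / 1000)        # 50 ms x 1000 Hz = 50 samples
    accs = []
    for start in range(0, n_samp - win + 1, win):
        # Flatten each 50 ms raw-waveform window into one feature vector
        X = epochs[:, :, start:start + win].reshape(n_trials, -1)
        accs.append(cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean())
    return np.array(accs)

rng = np.random.default_rng(1)
epochs = rng.normal(size=(100, 32, 300))   # 100 trials over 0-300 ms
labels = np.repeat([0, 1], 50)             # 50 targets, 50 distractors
epochs[labels == 1, :, 100:] += 0.5        # classes separable after 100 ms

acc = windowed_accuracy(epochs, labels)
print(acc.shape)   # (6,) -> six 50 ms windows in 0-300 ms
```

Accuracy stays near chance in the first window and rises once the windows contain the injected class difference, analogous to the temporal curves in Fig. 4.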
C. Evoked responses for target and distractor
We compared the evoked responses to the target (animal) and
distractor (non-animal) images; the task was to make a "go"
response to the target images. The P300 response (in the range
of 300 ms – 500 ms) was significantly higher for the target
images than for the distractor images (Figure 3; FDR multiple
comparisons correction, p < 0.05).
Fig. 3. P300 responses to the animal (target) and non-animal
(distractor) images at the Pz channel. (N = 50 images per
condition; gray area: false discovery rate multiple comparisons
corrected, p < 0.05.)
The P300 wave is an ERP component that occurs during decision
making and is known to reflect the internal cognitive processes
of stimulus evaluation and categorization [21, 22]. The results
indicate that the animal images are more salient than the
non-animal images and generate significant P300 signals. The
measured signal was strongest at the electrodes covering the
parietal cortex (Pz channel).
IV. CLASSIFICATION RESULTS
We calculated and compared the classification accuracy using
EEG signals from one, two, and three brains. We applied
linear SVM with 5-fold cross validation.
Over the 0-300 ms time series, the overall accuracies of
two-brain prediction (89.3%) and three-brain prediction (88.7%)
were higher than that of one-brain prediction (79.4%) (Figures 2
and 4). In the temporal analysis, the time required to achieve
90% accuracy was significantly shorter with two and three brains
(100 ms) than with one brain (230 ms). These results indicate
that combining the EEG signals of two or more people may provide
a complementary and synergistic effect that enhances
classification accuracy.
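The time-to-threshold metric can be sketched as follows (the accuracy curves and the window-end-time convention are hypothetical, chosen only to illustrate the computation, not the paper's measured values):

```python
import numpy as np

def time_to_threshold(acc, win_ms=50, thresh=0.9):
    """End time (ms) of the first analysis window whose cross-validated
    accuracy reaches the threshold; None if it is never reached."""
    hits = np.flatnonzero(np.asarray(acc) >= thresh)
    return None if hits.size == 0 else int((hits[0] + 1) * win_ms)

# Hypothetical accuracy curves over six 50 ms windows (0-300 ms)
acc_one = [0.55, 0.62, 0.70, 0.80, 0.91, 0.93]   # single brain: slower rise
acc_two = [0.60, 0.92, 0.94, 0.95, 0.96, 0.96]   # two brains: earlier rise

print(time_to_threshold(acc_one))  # 250
print(time_to_threshold(acc_two))  # 100
```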
We observed a local minimum of the classification accuracy
in the one-brain EEG signal around 150 ms after the image
was shown. The local minimum may have been caused by the
ERP similarity between the responses to the animal and the
non-animal images (Figure 3).
Fig. 4. Classification accuracy of one, two, and three brains.
The classification accuracies of two and three brains are
significantly higher than that of one brain.
Note the slightly lower overall classification accuracy obtained
using three brains compared to that obtained using two brains.
One possible cause is overfitting to the higher-dimensional
data. Another is that the noisy nature of the EEG signals
saturated the classification performance, resulting in a
decrease in accuracy. EEG artifacts include loose electrode
contacts, head movements, eye movements, and muscle activity,
and the noise level is known to affect linear classification
performance [23].
V. CONCLUSION
Using a simple go/no-go visual categorization task, we found
that combining the EEG signals of two or more brains yielded
higher and more accurate classification of cognitive responses
than the signals of one brain. This result is consistent with
our earlier work reported in [6]. This study offers several
advantages over previous studies, which used secondary or
complex features of the EEG signals, including the P300, ERPs,
and spectral power, to obtain classification results [3, 8]. We
used raw EEG
signals to test whether we can obtain a robust classification
even with unprocessed data so that it can easily be extended to
other BCI applications in a noisy environment. The temporal
dynamics of our classification results showed that two or three
brains could achieve not only an overall higher accuracy, but
also faster decision making than one brain. Moreover, the
results suggest that, at the neural level as well as the
behavioral level, multi-brain EEG signals can be integrated to
obtain faster and more accurate cognitive decision making,
enabling high-performance BCIs.
VI. ACKNOWLEDGMENT
This work was performed at the Jet Propulsion Laboratory,
California Institute of Technology, under a contract with the
National Aeronautics and Space Administration.
REFERENCES
[1] A. Stoica, "Aggregation of bio-signals from multiple individuals to
achieve a collective outcome," ed: Google Patents, 2012.
[2] A. Stoica, "Multimind: Multi-brain signal fusion to exceed the power of
a single brain," in Emerging Security Technologies (EST), 2012 Third
International Conference on, 2012, pp. 94-98.
[3] H. Touyama, "A collaborative BCI system based on P300 signals as a
new tool for life log indexing," in Systems, Man and Cybernetics (SMC),
2014 IEEE International Conference on, 2014, pp. 2843-2846.
[4] Y. Wang, Y.-T. Wang, T.-P. Jung, X. Gao, and S. Gao, "A collaborative
brain-computer interface," in Biomedical Engineering and Informatics
(BMEI), 2011 4th International Conference on, 2011, pp. 580-583.
[5] P. Yuan, Y. Wang, W. Wu, H. Xu, X. Gao, and S. Gao, "Study on an
online collaborative BCI to accelerate response to visual targets," in
Engineering in Medicine and Biology Society (EMBC), 2012 Annual
International Conference of the IEEE, 2012, pp. 1736-1739.
[6] A. Stoica, A. Matran-Fernandez, D. Andreou, R. Poli, C. Cinel, Y.
Iwashita, and C. Padgett, "Multi-brain fusion and applications to
intelligence analysis," in SPIE Defense, Security, and Sensing, 2013, pp.
87560N-87560N-8.
[7] D. Valeriani, R. Poli, and C. Cinel, "A collaborative Brain-Computer
Interface to improve human performance in a visual search task," in
Neural Engineering (NER), 2015 7th International IEEE/EMBS
Conference on, 2015, pp. 218-223.
[8] R. Poli, C. Cinel, F. Sepulveda, and A. Stoica, "Improving decision-
making based on visual perception via a collaborative brain-computer
interface," in Cognitive Methods in Situation Awareness and Decision
Support (CogSIMA), 2013 IEEE International Multi-Disciplinary
Conference on, 2013, pp. 1-8.
[9] K. Yun, D. Chung, B. Jang, J. H. Kim, and J. Jeong, "Mathematically
Gifted Adolescents Have Deficiencies in Social Valuation and
Mentalization," PLoS One, vol. 6, p. e18224, 2011.
[10] K. Yun, K. Watanabe, and S. Shimojo, "Interpersonal body and neural
synchronization as a marker of implicit social interaction," Scientific
Reports, vol. 2, p. 959, 2012.
[11] C. F. Camerer, "Psychology and economics. Strategizing in the brain,"
Science, vol. 300, pp. 1673-5, Jun 13 2003.
[12] K. Yun, D. Chung, and J. Jeong, "Emotional Interactions in Human
Decision Making using EEG Hyperscanning," in Proceedings of the 6th
International Conference on Cognitive Science. vol. 1, C. Lee, Ed., ed
Seoul, South Korea: International Association for Cognitive Science,
2008, pp. 327-330.
[13] J. Jiang, B. Dai, D. Peng, C. Zhu, L. Liu, and C. Lu, "Neural
Synchronization during Face-to-Face Communication," The Journal of
neuroscience, vol. 32, pp. 16064-16069, November 7, 2012 2012.
[14] F. Babiloni, F. Cincotti, D. Mattia, F. De Vico Fallani, A. Tocci, L.
Bianchi, S. Salinari, M. G. Marciani, A. Colosimo, and L. Astolfi, "High
Resolution EEG Hyperscanning During a Card Game," in Engineering
in Medicine and Biology Society 2007, pp. 4957-4960.
[15] D. Chung, K. Yun, and J. Jeong, "Decoding covert motivations of free
riding and cooperation from multi-feature pattern analysis of EEG
signals," Social cognitive and affective neuroscience, p. nsv006, 2015.
[16] K. Yun, "On the same wavelength: Face-to-face communication
increases interpersonal neural synchronization," The Journal of
neuroscience, vol. 33, pp. 5081-5082, 2013.
[17] A. Delorme and S. Makeig, "EEGLAB: an open source toolbox for
analysis of single-trial EEG dynamics including independent component
analysis," Journal of Neuroscience Methods, vol. 134, pp. 9-21, 2004.
[18] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning,
vol. 20, pp. 273-297, 1995.
[19] T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer,
and D. Haussler, "Support vector machine classification and validation
of cancer tissue samples using microarray expression data,"
Bioinformatics, vol. 16, pp. 906-914, 2000.
[20] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, "A practical guide to support
vector classification," 2003.
[21] A. Mazaheri and T. W. Picton, "EEG spectral dynamics during
discrimination of auditory and visual targets," Cognitive Brain Research,
vol. 24, pp. 81-96, 2005.
[22] D. E. Linden, "The P300: where in the brain is it produced and what
does it tell us?," The Neuroscientist, vol. 11, pp. 563-576, 2005.
[23] R. B. Fisher, "An Empirical Model for Saturation and Capacity in
Classifier Spaces," in Pattern Recognition, 2006. ICPR 2006. 18th
International Conference on, 2006, pp. 189-193.
