A Simulation Study of EEG Spatial Super-Resolution
Using Deep Convolutional Networks
2018. 05. 30
Sangjun Han
Gwangju Institute of Science and Technology
School of Electrical Engineering and Computer Science
BioComputing Lab, Prof. Sung Chan Jun
Presentation for Master’s Thesis
• Introduction
- Electroencephalography
- Deep Learning
- Related Work
- Motivation
• Method
- Data Generation
- Source Localization
- Data Preparation
- Deep Convolutional Networks
- Evaluation Metrics
• Results
- Result 1 – Conclusion 1
- Result 2 – Conclusion 2
- Result 3 – Conclusion 3
• Discussion
• Summary
• Publication
• References
Index
2
Introduction
3
Electroencephalography
• Electroencephalography (EEG)
- Measures the electrical potential of the brain on the scalp
- Captures both temporal and spatial dynamics
- Measured non-invasively
- Is a mixture of signals originating from brain sources
EEG systems Sensor and source level
source
sensor
4
Electroencephalography
• Improving spatial resolution of EEG
- High-density EEG hardware can be used, but it is very costly
32 channels 64 channels 128 channels 256 channels
Experimental cost↑
• Resolution of EEG
- High temporal resolution
- But relatively low spatial resolution
5
Electroencephalography
• Low spatial resolution EEG...
- May cause aliasing in spatial frequency [1]
Topographic difference between 16-channel and 64-channel EEG
6
Electroencephalography
• Low spatial resolution EEG...
- Increasing the electrode number helps decrease localization error [2]
Mean source localization error for 5 subjects
7
Deep Learning
• The success of deep learning ...
- Backpropagation appeared (1986) [3]
- Weight initialization by restricted Boltzmann machine (2010) [4]
- High accuracy in speech recognition (2012) [5]
- High accuracy in image classification (2012) [6]
- Image localization, detection, segmentation, ... super-resolution!
Super-resolution (SR)
Recovering a high-resolution image
from a single low-resolution image
• Image super-resolution
8
• Image super-resolution
SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
Related Work
9
• Image super-resolution
SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
How to optimize effectively and efficiently
by redesigning the network structure
Related Work
10
• Image super-resolution
SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
To satisfy human visual perception
with a new type of loss function
Related Work
11
• Image super-resolution
High-resolution (HR)
Original image
Related Work
12
• Image super-resolution
High-resolution (HR)
Blurring, Sub-sampling
Original image
Low-resolution (LR)
Related Work
13
• Image super-resolution
High-resolution (HR)
Train neural networks
min_θ ‖HR − f_θ(LR)‖²
Original image
Low-resolution (LR)
Related Work
14
• Image super-resolution
High-resolution (HR)
Original image
Low-resolution (LR)
Trained model
Super-resolution (SR)
Recovered image
Related Work
15
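The HR → LR pipeline above (blur, then sub-sample) can be sketched in a few lines. This is a toy 1-D version; the moving-average kernel and down-sampling factor are illustrative choices, not the exact settings of the cited SR papers:

```python
import numpy as np

def degrade(hr, factor=2):
    """Blur with a moving average, then sub-sample: the standard way
    to synthesize a low-resolution training input from an HR original."""
    kernel = np.ones(factor) / factor
    blurred = np.convolve(hr, kernel, mode="same")
    return blurred[::factor]

hr = np.sin(np.linspace(0, 2 * np.pi, 32))   # "original image" row
lr = degrade(hr, factor=2)                   # low-resolution version
print(hr.shape, lr.shape)                    # (32,) (16,)
```

The network is then trained to invert this degradation, mapping `lr` back toward `hr`.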
• Audio super-resolution
- V. Kuleshov, 2017 [11]
- Framed as a generative model
- Temporally up-scaled
- Bandwidth extension, i.e., predicting higher frequencies
Related Work
16
• EEG super-resolution
- I. A. Corley, 2018 [12]
- Mental imagery open dataset, 3 classes
- Spatially up-scaled, 16 to 32 channels (2x), 8 to 32 channels (4x)
- Evaluated SR performance by classification results
Related Work
17
Motivation
• Enhancing spatial resolution of EEG using deep learning
- Not merely interpolating a few missing channels
- Rather, scaling the number of channels up several-fold
- High-quality data can then be acquired without high experimental cost
- Observing properties of super-resolved EEG at sensor and source level
Super-resolution (SR)
• Limitation of previous work
- The properties of the super-resolved EEG signal were not examined
18
Motivation
• Questions
1. How does noise type affect the EEG SR process?
2. How does SR deep learning work over various upscaling sizes? (2x, 4x, 8x)
3. Are there any approaches to improve signal quality during SR process?
Sensor and source level
source
sensor white Gaussian noise
real environmental noise
19
Method
20
Data Generation
• Head model and channel information
- 3-shell spherical boundary element method (BEM)
- HydroCel GSN system (Electrical Geodesics, Inc.)
Relative shell radii: 1 (scalp), 0.92 (skull), 0.87 (brain)
Brain σ: 1, Skull σ: 0.0125, Scalp σ: 1
spherical head model
GSN 128 layout
21
Data Generation
noiseless scalp EEG
• Noiseless scalp EEG
- Two dipoles were projected onto the scalp EEG sensors
- Sampled at 250 Hz; each trial lasted 1 second
two dipoles (blue dots)
22
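Since scalp EEG is a linear mixture of dipole sources through a lead field, the projection step can be sketched as a matrix product. The random lead field below is a hypothetical stand-in for the BEM-derived one:

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels, n_sources, n_samples = 128, 2, 250   # 1 s at 250 Hz
# Hypothetical lead-field matrix: maps dipole activity to scalp potentials.
leadfield = rng.standard_normal((n_channels, n_sources))

t = np.arange(n_samples) / 250.0
sources = np.vstack([np.sin(2 * np.pi * 10 * t),        # dipole 1, 10 Hz
                     0.5 * np.sin(2 * np.pi * 10 * t)])  # dipole 2

scalp_eeg = leadfield @ sources   # (128 channels, 250 samples), noiseless
print(scalp_eeg.shape)
```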
Data Generation
+ Simulation EEG
noiseless scalp EEG
white Gaussian noise
real noise
or
• Adding noise to scalp EEG
- White Gaussian noise or real noise was added
- Real noise was measured from one subject's resting state
- SNR adjusted to 10, 5, 1, 0.5, 0.1, 0.05, and 0.01
two dipoles (blue dots)
23
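Adjusting the SNR amounts to rescaling the noise so that the signal-to-noise power ratio hits the target before adding it to the clean trial. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def add_noise(signal, noise, snr):
    """Scale `noise` so that power(signal) / power(scaled noise) == snr,
    then add it to the clean signal."""
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_sig / (snr * p_noise))
    return signal + scale * noise

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 250))
noisy = add_noise(clean, rng.standard_normal(250), snr=0.5)
```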
Source Localization
+ Simulation EEG
noiseless scalp EEG
white Gaussian noise
real noise
or
two dipoles (blue dots)
• Source Localization
- Array-gain minimum variance beamformer [13]
- Beamformer scanning at a 7 mm interval
- Over 10,000 voxels
24
Data Preparation
ex) For super-resolution from 16 to 128 channels
HR (128 channels)
25
Data Preparation
ex) For super-resolution from 16 to 128 channels
LR (16 channels)
HR (128 channels)
select 16 channels
26
Data Preparation
LR (16 channels)
HR (128 channels)
select 16 channels
each missing channel interpolated with
the average of its neighbors
LR (128 channels)
ex) For super-resolution from 16 to 128 channels
27
Data Preparation
LR (16 channels)
HR (128 channels)
select 16 channels
each missing channel interpolated with
the average of its neighbors
LR (128 channels)
train neural networks
min_θ ‖HR − f_θ(LR)‖²
- SR is an ill-posed problem
- The interpolated LR provides a good starting initialization [7]
16 to 32 (2x)
16 to 64 (4x)
16 to 128 (8x)
ex) For super-resolution from 16 to 128 channels
28
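The LR-construction step above (keep a channel subset, fill the rest with neighbor averages so the input matches the HR shape) can be sketched as follows. The neighbor map here is a toy stand-in for the actual electrode geometry:

```python
import numpy as np

def make_lr(hr, keep_idx, neighbors):
    """Keep only `keep_idx` channels of the HR array, then fill each
    missing channel with the average of its (hypothetical) neighbouring
    kept channels, producing an LR input with the same shape as HR."""
    lr = np.zeros_like(hr)
    lr[keep_idx] = hr[keep_idx]
    for ch, nbrs in neighbors.items():
        lr[ch] = hr[nbrs].mean(axis=0)
    return lr

hr = np.arange(8.0).reshape(8, 1)                       # 8 "channels", 1 sample
keep = [0, 4]                                           # retained LR channels
nbrs = {i: [0, 4] for i in range(8) if i not in keep}   # toy neighbour map
lr = make_lr(hr, keep, nbrs)
```

The network then learns the residual between this interpolated LR and the true HR.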
Deep Convolutional Networks
LR → Conv → Conv → Conv → Features → ConvT → ConvT → ConvT → Conv → Conv → HR
13 × 5 kernel, 64 filters
13 × 9 kernel, 64 filters
7 × 1 kernel, 1 filter
training: min_θ ‖HR − f_θ(LR)‖²
• Settings
- Convolution for down-sampling
- Transposed convolution for up-sampling
- Adam optimizer (first-order gradient optimization) [14]
- He initializer [15]
- Linear activation function (y = x) was used
29
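A transposed convolution "stamps" a scaled copy of its kernel onto the output for each input sample, which is how the ConvT layers up-sample along the channel axis. A minimal 1-D sketch (the kernel and stride here are illustrative, not the thesis's 13 × 5 settings):

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    """Minimal 1-D transposed convolution: each input sample adds a
    scaled kernel copy to the output at stride-spaced positions."""
    out = np.zeros(stride * (len(x) - 1) + len(kernel))
    for i, v in enumerate(x):
        out[i * stride:i * stride + len(kernel)] += v * kernel
    return out

x = np.array([1.0, 2.0, 3.0])                         # e.g. 3 "channels"
up = conv_transpose_1d(x, np.array([1.0, 1.0]), stride=2)
print(len(x), "->", len(up))                          # 3 -> 6: 2x up-sampling
```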
Deep Convolutional Networks
• Dataset
- Training on 1,600 trials
- Testing on 400 trials × 50 repetitions = 20,000 trials
- Testing results averaged for statistical stability
LR → Conv → Conv → Conv → Features → ConvT → ConvT → ConvT → Conv → Conv → HR
13 × 5 kernel, 64 filters
13 × 9 kernel, 64 filters
7 × 1 kernel, 1 filter
training: min_θ ‖HR − f_θ(LR)‖²
30
Evaluation Metrics
• Evaluation metrics
- Mean squared error (MSE, at sensor level)
- Correlation (at sensor level)
- Error distance between dipole locations (at source level)
SLR, SHR, and SSR, each compared with the noiseless scalp EEG
SLR : Low-resolution signal, SHR : High-resolution signal, SSR : Super-resolved signal
31
Evaluation Metrics
• Evaluation metrics
- Mean squared error (MSE, at sensor level)
- Correlation (at sensor level)
- Error distance between dipole locations (at source level)
Mean Euclidean distance
between voxels whose power exceeds an arbitrary threshold and the original dipoles
32
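The two sensor-level metrics can be computed directly; a minimal sketch (the signals below are synthetic placeholders, not thesis data):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two signals."""
    return np.mean((a - b) ** 2)

def correlation(a, b):
    """Pearson correlation between two flattened signals."""
    a, b = a.ravel(), b.ravel()
    return np.corrcoef(a, b)[0, 1]

t = np.linspace(0, 1, 250)
s_hr = np.sin(2 * np.pi * 10 * t)                 # stand-in for S_HR
s_sr = s_hr + 0.1 * np.cos(2 * np.pi * 50 * t)    # stand-in for S_SR
print(mse(s_hr, s_sr), correlation(s_hr, s_sr))
```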
Results 1
How does noise type affect the EEG SR process?
33
Result 1 White Gaussian Noise
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, MSE increases as SNR decreases
- For all SNRs, the SR case has the lowest loss
34
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, correlation decreases as SNR decreases
- For all SNRs, the SR case has the highest correlation
Result 1 White Gaussian Noise
35
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, error distance increases as SNR decreases
- For most SNRs, the SR case has the smallest error distance
Result 1 White Gaussian Noise
36
• According to SNR (when 16 to 64)
The time series of one trial at E01 channel, when SNR 0.5
- The SSR captures the shape of the noiseless scalp EEG well
Result 1 White Gaussian Noise
37
Source localization results, when SNR 0.5
HR
SR
LR
Result 1 White Gaussian Noise
38
Source localization results, when SNR 0.5
Result 1 White Gaussian Noise
39
The SR case detects
the dipole positions well
HR
SR
LR
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, MSE increases as SNR decreases
- For all SNRs, the SR case has loss similar to the HR case
Result 1 Real Noise
40
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, correlation decreases as SNR decreases
- For most SNRs, the SR case has correlation similar to the HR case
Result 1 Real Noise
41
• According to SNR (when 16 to 64)
- For each of the LR, HR, and SR cases, error distance increases as SNR decreases
- Except at very low SNR, the SR case has error distance similar to the HR case
Result 1 Real Noise
42
• According to SNR (when 16 to 64)
The time series of one trial at E01 channel, when SNR 0.5
- It is hard to identify a general shape in the SSR
- But the SSR follows the tendency of the SHR
Result 1 Real Noise
43
Source localization results, when SNR 0.5
Result 1 Real Noise
44
HR
SR
LR
Conclusion 1
• The case of white Gaussian noise
- SR recovered SLR beyond the level of SHR
(at both sensor and source levels)
• The case of real noise
- SR recovered SLR to the level of SHR
(at the sensor level, but less convincing at the source level)
45
Results 2
How does SR deep learning work over various up-scaling sizes?
46
• According to upscaling ratio (when SNR 0.5)
- As the upscaling ratio increases, MSE decreases
- For all upscaling ratios, the SR case has the lowest loss
Result 2 White Gaussian Noise
47
• According to upscaling ratio (when SNR 0.5)
- As the upscaling ratio increases, correlation increases
- For all upscaling ratios, the SR case has the highest correlation
Result 2 White Gaussian Noise
48
• According to upscaling ratio (when SNR 0.5)
- When the upscaling ratio is 16 to 128, error distance is at its minimum
- For all upscaling ratios, the SR case has the smallest error distance
Result 2 White Gaussian Noise
49
• According to upscaling ratio (when SNR 0.5)
- SR reproduced the signal from SLR to the level of SHR
Result 2 Real Noise
50
Conclusion 2
• The case of white Gaussian noise
- At higher upscaling ratios, SR can recover a signal of better quality
(at the sensor level, but less convincing at the source level)
• The case of real noise
- There was no significant difference across upscaling ratios
51
Conclusion 1 + 2
• The case of white Gaussian noise
- SR recovered SLR beyond the level of SHR
(at both sensor and source levels)
- At higher upscaling ratios, SR can recover a signal of better quality
(at the sensor level, but less convincing at the source level)
• The case of real noise
- SR recovered SLR to the level of SHR
(at the sensor level, but less convincing at the source level)
- There was no significant difference across upscaling ratios
52
Conclusion 1 + 2
• The case of white Gaussian noise
- SR recovered SLR beyond the level of SHR
(at both sensor and source levels)
- At higher upscaling ratios, SR can recover a signal of better quality
(at the sensor level, but less convincing at the source level)
• The case of real noise
- SR recovered SLR to the level of SHR
(at the sensor level, but less convincing at the source level)
- There was no significant difference across upscaling ratios
Whitening!
53
Results 3
Are there any approaches to improve signal quality during SR?
54
C: noise covariance; signal x = source signal s + noise n
x_whitened = C^(−1/2) x = C^(−1/2)(s + n) = C^(−1/2) s + w (white noise)
Result 3 Whitening Real Noise
55
• According to SNR (when 16 to 64)
- For all SNRs, the whitened SR case is slightly noisier than the plain SR case
Result 3 Whitening Real Noise
56
C: noise covariance; signal x = source signal s + noise n
x_whitened = C^(−1/2) x = C^(−1/2)(s + n) = C^(−1/2) s + w (white noise)
• According to SNR (when 16 to 64)
- For most SNRs, the whitened SR case is less correlated than the plain SR case
Result 3 Whitening Real Noise
57
C: noise covariance; signal x = source signal s + noise n
x_whitened = C^(−1/2) x = C^(−1/2)(s + n) = C^(−1/2) s + w (white noise)
• According to SNR (when 16 to 64)
- At very low SNR, the error distance of the whitened SR case is reduced
Result 3 Whitening Real Noise
58
C: noise covariance; signal x = source signal s + noise n
x_whitened = C^(−1/2) x = C^(−1/2)(s + n) = C^(−1/2) s + w (white noise)
Source localization results, when SNR 0.5
Result 3 Whitening Real Noise
59
SR Whitened SR
Conclusion 3
• Whitening of real noise
- can be effective for SR
- especially for source analysis
60
Discussion
61
Discussion 1
• Why simulation study?
- In real EEG, it is difficult to separate the pure brain signal from noise
- Because of this noise, the exact dipole locations are unknown
- Nor can the influence of noise type be observed in isolation
Exact dipole location from simulation data
62
Discussion 2
white Gaussian noise real noise
• On same SNR
- At the same SNR, the white Gaussian noise case appears noisier than the real noise case
- The eye-movement component dominated the overall power of the real noise
- This makes an equivalent comparison between them difficult
63
Discussion 3
• Why does SR work well at 16 to 128?
- Although this holds only for the white Gaussian noise case
- It can be interpreted as a property of the data-driven approach
64
Discussion 3
- A higher-dimensional target provides more informative supervision
- But in the real noise case, the extra information may not be useful
LR → Conv → Conv → Conv → Features → ConvT → ConvT → ConvT → Conv → Conv → HR
13 × 5 kernel, 64 filters
13 × 9 kernel, 64 filters
7 × 1 kernel, 1 filter
training: min_θ ‖HR − f_θ(LR)‖²
16 → 32 → 64 → 128 channels
• Our experimental design
65
Discussion 4
• Why did we choose a linear function for deep learning?
- It is typical to use non-linear functions to extract features
hyperbolic tangent function (tanh) rectified linear unit (ReLU)
-1 ≤ y ≤ 1 0 ≤ y ≤ ∞
66
Discussion 4
• Why did we choose a linear function for deep learning?
- Let us regard the problem as finding an optimally fitted line
min_θ ‖HR − f_θ(LR)‖²
67
Discussion 4
• Why did we choose a linear function for deep learning?
- Let us regard the problem as finding an optimally fitted line
min_θ ‖HR − f_θ(LR)‖²
68
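With the identity activation, stacked layers collapse to a single linear map, so the network really is searching for an optimal linear fit rather than a non-linear one. A quick numerical check with arbitrary weight matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 16))   # first "layer"
W2 = rng.standard_normal((16, 8))   # second "layer"
x = rng.standard_normal(16)

# Two stacked layers with identity activation (y = x)...
deep = W2 @ (W1 @ x)
# ...equal a single linear map, their matrix product
single = (W2 @ W1) @ x
print(np.allclose(deep, single))  # True
```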
Summary
69
• Deep-learning-based SR may be effective for EEG
- EEG SR can reduce experimental cost significantly
- EEG SR can provide high-resolution data without much effort
Summary
70
• Deep-learning-based SR may be effective for EEG
- At both sensor and source levels
- During SR, ideal (white) noise can be canceled out, improving signal quality
- In a real noisy environment, EEG may still be acceptably super-resolved
- Richer sensor information may be useful for SR
- Whitening could be effective for SR
• Limitations
- As a data-driven approach, it requires HR data for training
- More experiments on real EEG data are needed
Publication
• EEG super-resolution
[1] Sangjun Han, Moonyoung Kwon, Sung Chan Jun, “Feasibility Study of EEG Super-Resolution Using Deep Convolutional
Networks,” IEEE International Conference on Systems, Man, and Cybernetics, Oct 2018 (Submitted)
[2] Sangjun Han, Moonyoung Kwon, Sunghan Lee, Sung Chan Jun, “EEG Spatial Super-Resolution Using Deep Convolutional
Linear Networks : a Simulation Study,” Korean Society of Medical & Biological Engineering, Nov 2017 (Best Paper)
• EEG emotion classification using deep learning
[3] Sunghan Lee, Sangjun Han, Sung Chan Jun, “EEG-based Classification of Multi-class Emotional States Using One-
dimensional Convolutional Neural Networks,” 7th Graz BCI Conference, July 2017
[4] Sunghan Lee, Sangjun Han, Sung Chan Jun, “Four-Class Emotion Classification Using One-dimensional Convolutional
Neural Networks - An EEG Study,” Society for Neuroscience, Nov 2017
• Improving sleep quality by acoustic stimulation
[5] Jinyoung Choi, Sangjun Han, Moonyoung Kwon, Hyeon Seo, Sehyeon Jang, Sung Chan Jun, “Study on Subject-Specific
Parameters in Sleep Spindle Detection Algorithm,” The IEEE Engineering in Medicine and Biology Conference, July 2017
[6] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “Effect of Acoustic Stimulation after Sleep Spindle Activity,”
Sleep Medicine, Oct 2017
[7] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “The Neurophysiological Effect of Acoustic Stimulation with
Real-time Sleep Spindle Detection,” The IEEE Engineering in Medicine and Biology Conference, July 2018
Refereed Conference Paper
71
References
[1] D. M. Tucker, “Spatial Sampling of Head Electrical Fields: The Geodesic Sensor Net,” Electroencephalography and
Clinical Neurophysiology, vol. 87, pp. 154–163, September 1993.
[2] A. Sohrabpour, Y. Lu, P. Kankirawatana, J. Blount, H. Kim, and B. He, “Effect of EEG Electrode Number on Epileptic
Source Localization in Pediatric Patients,” Clinical Neurophysiology, vol. 126, pp. 472-480, December 2015.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Representations by Back-propagating errors,” Nature,
vol. 323, pp. 533-536, October 1986.
[4] G. E. Hinton, “A Practical Guide to Training Restricted Boltzmann Machines,” Lecture Notes in Department of
Computer Science, University of Toronto, August 2010.
[5] G. E. Dahl, D. Yu, L. Deng, and A. Acero, “Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30-42, January 2012.
[6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proceedings of the Neural Information Processing Systems, December 2012.
[7] C. Dong, C. C. Loy, K. He, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, pp. 295–307, June 2015.
[8] J. Kim, J. K. Lee, and K. M. Lee, “Deeply-Recursive Convolutional Network for Image Super-Resolution,” Conference on Computer Vision and Pattern Recognition, pp. 1637–1645, June 2016.
[9] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-Time Single Image
and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network,” Conference on Computer
Vision and Pattern Recognition, pp. 1874–1883, June 2016.
[10] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” Conference on Computer Vision and Pattern Recognition, pp. 4681–4690, July 2017.
[11] V. Kuleshov, S. Z. Enam, and S. Ermon, “Audio Super-Resolution Using Neural Nets,” Workshop at the International Conference on Learning Representations, April 2017.
[12] I. A. Corley and Y. Huang, “Deep EEG Super-Resolution: Upsampling EEG Spatial Resolution with Generative Adversarial Networks,” IEEE EMBS International Conference on Biomedical & Health Informatics, March 2018.
[13] K. Sekihara, and S. S. Nagarajan, Adaptive Spatial Filters for Electromagnetic Brain Imaging, 1st ed., Springer-Verlag
Berlin Heidelberg, 2008.
[14] D. P. Kingma and J. Ba, “ADAM: A Method for Stochastic Optimization,” International Conference on Learning Representations, arXiv:1412.6980, May 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” International Conference on Computer Vision, pp. 1026–1034, December 2015.
72
Thank you
73

Securing your Kubernetes cluster_ a step-by-step guide to success !Securing your Kubernetes cluster_ a step-by-step guide to success !
Securing your Kubernetes cluster_ a step-by-step guide to success !
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptxSecstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........Bits & Pixels using AI for Good.........
Bits & Pixels using AI for Good.........
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 

Feasibility of EEG Super-Resolution Using Deep Convolutional Networks

  • 1. A Simulation Study of EEG Spatial Super-Resolution Using Deep Convolutional Networks 2018. 05. 30 Sangjun Han Gwangju Institute of Science and Technology School of Electrical Engineering and Computer Science BioComputing Lab, Prof. Sung Chan Jun Presentation for Master’s Thesis
  • 2. • Introduction - Electroencephalography - Deep Learning - Related Work - Motivation • Method - Data Generation - Source Localization - Data Preparation - Deep Convolutional Networks - Evaluation Metrics • Results - Result 1 – Conclusion 1 - Result 2 – Conclusion 2 - Result 3 – Conclusion 3 • Discussion • Summary • Publication • References Index 2
  • 4. Electroencephalography • Electroencephalography (EEG) - Measures the electrical potential of the brain on the scalp - Temporal and spatial dynamics - Non-invasively measured - A mixture of signals originating from brain sources EEG systems Sensor and source level source sensor 4
  • 5. Electroencephalography • Improving spatial resolution of EEG - High-density EEG hardware can be used, but it is costly 32 channels 64 channels 128 channels 256 channels Experimental cost↑ • Resolution of EEG - High temporal resolution - But relatively low spatial resolution 5
  • 6. Electroencephalography • Low spatial resolution EEG... - May cause aliasing in spatial frequency [1] Topographical difference between 16-channel and 64-channel EEG 6
  • 7. Electroencephalography • Low spatial resolution EEG... - Increasing the electrode number helps decrease localization error [2] Mean source localization error for 5 subjects 7
  • 8. Deep Learning • The success of deep learning ... - Backpropagation appeared (1986) [3] - Weight initialization by restricted Boltzmann machine (2010) [4] - High accuracy in speech recognition (2012) [5] - High accuracy in image classification (2012) [6] - Image localization, detection, segmentation, ... super-resolution! Super-resolution (SR) Recovering a high-resolution image from a single low-resolution image • Image super-resolution 8
  • 9. • Image super-resolution SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8] ESPCN, Shi et al. 2016 [9] SRGAN, Ledig et al. 2016 [10] Related Work 9
  • 10. • Image super-resolution SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8] ESPCN, Shi et al. 2016 [9] SRGAN, Ledig et al. 2016 [10] How to optimize effectively and efficiently by reconstructing networks’ structure Related Work 10
  • 10. • Image super-resolution SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8] ESPCN, Shi et al. 2016 [9] SRGAN, Ledig et al. 2016 [10] How to optimize effectively and efficiently by redesigning the network structure Related Work 10
  • 11. • Image super-resolution SRCNN, Dong et al. 2015 [7] DRCN, Kim et al. 2015 [8] ESPCN, Shi et al. 2016 [9] SRGAN, Ledig et al. 2016 [10] To satisfy human visual perception with a new type of loss function Related Work 11
  • 13. • Image super-resolution High-resolution (HR) Blurring, Sub-sampling Original image Low-resolution (LR) Related Work 13
  • 14. • Image super-resolution High-resolution (HR) Train neural networks: min_θ ‖HR − f_θ(LR)‖², where f_θ is the network Original image Low-resolution (LR) Related Work 14
  • 15. • Image super-resolution High-resolution (HR) Original image Low-resolution (LR) Trained model Super-resolution (SR) Recovered image Related Work 15
  • 16. • Audio super-resolution - V. Kuleshov, 2017 [11] - Regarded as generative model - Temporally up-scaled - Bandwidth extension, thus predicting higher frequencies Related Work 16
  • 17. • EEG super-resolution - I. A. Corley, 2018 [12] - Mental imagery open dataset, 3 classes - Spatially up-scaled, 16 to 32 channels (2x), 8 to 32 channels (4x) - Evaluated SR performance by classification results Related Work 17
  • 18. Motivation • Enhancing spatial resolution of EEG using deep learning - Not merely interpolating a few missing channels - Rather, scaling up the number of channels severalfold - We can acquire high-quality data without high experimental cost - Observing properties of super-resolved EEG at the sensor and source level Super-resolution (SR) • Limitation of previous work - What are the properties of the super-resolved EEG signal? 18
  • 19. Motivation • Questions 1. How does noise type affect the EEG SR process? 2. How does SR deep learning work over various upscaling sizes? (2x, 4x, 8x) 3. Are there any approaches to improve signal quality during SR process? Sensor and source level source sensor white Gaussian noise real environmental noise 19
  • 21. Data Generation • Head model and channel information - 3-shell spherical boundary element method (BEM) model - HydroCel GSN system (Electrical Geodesics, Inc.) - Relative shell radii 1, 0.92, 0.87; conductivities σ: brain 1, skull 0.0125, scalp 1 spherical head model GSN 128 layout 21
  • 22. Data Generation noiseless scalp EEG • Noiseless scalp EEG - Two dipoles were projected on scalp EEG sensors - Sampled at 250 Hz, and one trial lasted for 1 second two dipoles (blue dots) 22
  • 23. Data Generation + Simulation EEG noiseless scalp EEG white Gaussian noise real noise or • Adding noise to scalp EEG - Adding white Gaussian noise and real noise - Real noise was measured from one subject's resting state - Adjusting SNR to 10, 5, 1, 0.5, 0.1, 0.05, and 0.01 two dipoles (blue dots) 23
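The SNR adjustment above can be sketched as follows; this is a hypothetical minimal implementation, assuming SNR is a linear power ratio (the values 10 down to 0.01 suggest this) and that each trial is an array of shape (channels, samples):

```python
import numpy as np

def add_noise_at_snr(clean, noise, snr):
    """Scale `noise` so that signal power / noise power equals `snr`,
    then add it to the clean signal.

    clean, noise : arrays of shape (channels, samples)
    snr          : linear power ratio (e.g. 10, 5, 1, 0.5, ..., 0.01)
    """
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # amplitude scale: noise power goes as scale**2
    scale = np.sqrt(p_signal / (snr * p_noise))
    return clean + scale * noise

# Toy trial: 16 channels, 250 samples (1 s at 250 Hz, as in the slides)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 10 * np.arange(250) / 250)[None, :].repeat(16, axis=0)
noise = rng.standard_normal((16, 250))   # white Gaussian noise case
noisy = add_noise_at_snr(clean, noise, snr=0.5)
```

The same helper works for the real-noise case by substituting a resting-state recording for `noise`.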
  • 24. Source Localization + Simulation EEG noiseless scalp EEG white Gaussian noise real noise or two dipoles (blue dots) • Source Localization - Array-gain minimum variance beamformer [13] - Beamforming scanned at a 7 mm scanning interval - On 10,000 voxels 24
  • 25. Data Preparation ex) For super-resolution from 16 to 128 channels HR (128 channels) 25
  • 26. Data Preparation ex) For super-resolution from 16 to 128 channels HR (128 channels) → LR (16 channels): select 16 channels 26
  • 27. Data Preparation ex) For super-resolution from 16 to 128 channels HR (128 channels) → LR (16 channels): select 16 channels → LR (128 channels): each missing channel interpolated with the average of its neighbors 27
  • 28. Data Preparation ex) For super-resolution from 16 to 128 channels HR (128 channels) → LR (16 channels): select 16 channels → LR (128 channels): interpolated with the average of its neighbors Train neural networks: min_θ ‖HR − f_θ(LR)‖², where f_θ is the network - This is an ill-posed problem - Interpolation provides a good starting initialization [7] Upscaling sizes: 16 to 32 (2x), 16 to 64 (4x), 16 to 128 (8x) 28
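The LR-input construction (channel selection, then neighbor-average interpolation back to the full montage) can be sketched as below. The electrode coordinates, the choice of k nearest neighbors, and the channel indices are hypothetical; the slides do not specify the exact neighbor definition on the GSN 128 layout:

```python
import numpy as np

def make_lr_input(hr, lr_idx, positions, k=3):
    """Select a channel subset from HR, then rebuild a full-montage LR
    input by filling each missing channel with the average of its k
    nearest selected neighbors.

    hr        : (n_channels, samples) high-resolution trial
    lr_idx    : indices of the retained (measured) channels
    positions : (n_channels, 3) electrode coordinates
    """
    n_ch, n_samp = hr.shape
    lr = hr[lr_idx]                          # the "LR (16 channels)" signal
    out = np.zeros((n_ch, n_samp))
    out[lr_idx] = lr                         # keep measured channels as-is
    sel_pos = positions[lr_idx]
    selected = set(lr_idx.tolist())
    for ch in range(n_ch):
        if ch in selected:
            continue
        d = np.linalg.norm(sel_pos - positions[ch], axis=1)
        nearest = np.argsort(d)[:k]          # k nearest measured channels
        out[ch] = lr[nearest].mean(axis=0)
    return out
```

The network then learns the residual mapping from this interpolated LR montage to the true HR montage.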
  • 29. Deep Convolutional Networks LR → Conv, Conv, Conv → Features → ConvT, ConvT, ConvT → Conv, Conv → HR (13 × 5 kernel, 64 filters; 13 × 9 kernel, 64 filters; 7 × 1 kernel, 1 filter) Training: min_θ ‖HR − f_θ(LR)‖² • Settings - Convolution for down-sampling - Transposed convolution for up-sampling - Adam optimizer (first-order gradient optimization) [14] - He initializer [15] - Linear activation function (y = x) 29
  • 30. Deep Convolutional Networks • Dataset - Training on 1,600 trials - Testing on 400 trials × 50 repetitions = 20,000 trials - Test results averaged for statistical stability (network as on slide 29) 30
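The up-sampling mechanism of the ConvT layers can be illustrated with a minimal 1-D transposed convolution (zero-stuffing followed by convolution). This toy sketch is not the 13 × 5 / 13 × 9 two-dimensional layers of the thesis, only the basic operation they are built on:

```python
import numpy as np

def conv_transpose_1d(x, kernel, stride=2):
    """Minimal 1-D transposed convolution: insert (stride - 1) zeros
    between input samples, then convolve with the kernel. This is how
    a ConvT layer doubles resolution when stride=2."""
    n = len(x)
    up = np.zeros(n * stride)
    up[::stride] = x                       # zero-stuffing
    # full convolution, cropped to n * stride output samples
    return np.convolve(up, kernel)[: n * stride]

x = np.array([1.0, 2.0, 3.0])
k = np.array([0.5, 0.5])                   # simple averaging kernel
y = conv_transpose_1d(x, k)                # length 6: up-sampled by 2x
```

In the network, the kernel weights are learned, so the up-sampling filter is fitted to the data rather than fixed.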
  • 31. Evaluation Metrics • Evaluation metrics - Mean squared error (MSE, at sensor level) - Correlation (at sensor level) - Error distance between dipole locations (at source level) Each of SLR, SHR, and SSR is compared against the noiseless scalp EEG (SLR: low-resolution signal, SHR: high-resolution signal, SSR: super-resolved signal) 31
  • 32. Evaluation Metrics • Evaluation metrics - Mean squared error (MSE, at sensor level) - Correlation (at sensor level) - Error distance between dipole locations (at source level) Mean Euclidean distance between voxels that activate over arbitrary power thresholds and original dipoles 32
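The three metrics can be sketched as follows; the error-distance helper reflects one plausible reading of the slide's definition (mean distance from each supra-threshold voxel to its nearest true dipole), since the exact pairing rule is not spelled out:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two signals (sensor level)."""
    return np.mean((a - b) ** 2)

def correlation(a, b):
    """Pearson correlation between two flattened multichannel signals."""
    a, b = a.ravel(), b.ravel()
    return np.corrcoef(a, b)[0, 1]

def error_distance(active_voxels, dipoles):
    """Mean Euclidean distance from each supra-threshold voxel to its
    nearest true dipole (source level).

    active_voxels : (n_active, 3) voxel coordinates above the power threshold
    dipoles       : (n_dipoles, 3) true dipole locations
    """
    d = np.linalg.norm(active_voxels[:, None, :] - dipoles[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

Each metric is applied to SLR, SHR, and SSR against the noiseless scalp EEG (or the true dipole locations at source level).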
  • 33. Results 1 How does noise type affect the EEG SR process? 33
  • 34. Result 1 White Gaussian Noise • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, MSE increases as SNR decreases - For all SNRs, the SR case has the minimum loss 34
  • 35. • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, correlation decreases as SNR decreases - For all SNRs, the SR case has the maximum correlation Result 1 White Gaussian Noise 35
  • 36. • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, error distance increases as SNR decreases - For most SNRs, the SR case has the minimum error distance Result 1 White Gaussian Noise 36
  • 37. • According to SNR (when 16 to 64) The time series of one trial at channel E01, when SNR 0.5 - SSR closely follows the shape of the noiseless scalp EEG Result 1 White Gaussian Noise 37
  • 38. Source localization results, when SNR 0.5 (HR, SR, LR) Result 1 White Gaussian Noise 38
  • 39. Source localization results, when SNR 0.5 (HR, SR, LR) - The SR case detects the dipole positions well Result 1 White Gaussian Noise 39
  • 40. • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, MSE increases as SNR decreases - For all SNRs, the SR case has loss similar to the HR case Result 1 Real Noise 40
  • 41. • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, correlation decreases as SNR decreases - For most SNRs, the SR case has correlation similar to the HR case Result 1 Real Noise 41
  • 42. • According to SNR (when 16 to 64) - For each of the LR, HR, and SR cases, error distance increases as SNR decreases - Except at very low SNR, the SR case has error distance similar to the HR case Result 1 Real Noise 42
  • 43. • According to SNR (when 16 to 64) The time series of one trial at channel E01, when SNR 0.5 - It is hard to recognize a general shape in SSR - But SSR follows the tendency of SHR Result 1 Real Noise 43
  • 44. Source localization results, when SNR 0.5 Result 1 Real Noise 44 HR SR LR
  • 45. Conclusion 1 • The case of white Gaussian noise - SR recovered SLR beyond the level of SHR (at both sensor and source level) • The case of real noise - SR recovered SLR to the level of SHR (at sensor level, but not conclusively at source level) 45
  • 46. Results 2 How does SR deep learning work over various up-scaling sizes? 46
  • 47. • According to upscaling ratio (when SNR 0.5) - As the upscaling ratio increases, MSE decreases - For all upscaling ratios, the SR case has the minimum loss Result 2 White Gaussian Noise 47
  • 48. • According to upscaling ratio (when SNR 0.5) - As the upscaling ratio increases, correlation increases - For all upscaling ratios, SR has the maximum correlation Result 2 White Gaussian Noise 48
  • 49. • According to upscaling ratio (when SNR 0.5) - When the upscaling ratio is 16 to 128, the error distance is minimum - For all upscaling ratios, the SR case has the minimum error distance Result 2 White Gaussian Noise 49
  • 50. • According to upscaling ratio (when SNR 0.5) - SR recovered the signal from SLR to the level of SHR Result 2 Real Noise 50
  • 51. Conclusion 2 • The case of white Gaussian noise - At higher upscaling ratios, SR can recover a signal of better quality (at sensor level, but not conclusively at source level) • The case of real noise - There was no significant difference across upscaling ratios 51
  • 52. Conclusion 1 + 2 • The case of white Gaussian noise - SR recovered SLR beyond the level of SHR (at both sensor and source level) - At higher upscaling ratios, SR can recover a signal of better quality (at sensor level, but not conclusively at source level) • The case of real noise - SR recovered SLR to the level of SHR (at sensor level, but not conclusively at source level) - There was no significant difference across upscaling ratios 52
  • 53. Conclusion 1 + 2 • The case of white Gaussian noise - SR recovered SLR beyond the level of SHR (at both sensor and source level) - At higher upscaling ratios, SR can recover a signal of better quality (at sensor level, but not conclusively at source level) • The case of real noise - SR recovered SLR to the level of SHR (at sensor level, but not conclusively at source level) - There was no significant difference across upscaling ratios Whitening! 53
  • 54. Results 3 Are there any approaches to improve signal quality during SR? 54
  • 55. With noise covariance C and signal x = source signal s + noise n: x_whitened = C^(−1/2) x = C^(−1/2)(s + n) = C^(−1/2) s + w, where w is white noise Result 3 Whitening Real Noise 55
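A minimal numerical sketch of this whitening step, assuming C is estimated from a noise-only recording and C^(−1/2) is computed via eigendecomposition (the slides do not state how the inverse square root was obtained; the small `eps` ridge term is an added safeguard against rank deficiency):

```python
import numpy as np

def whitening_matrix(noise_data, eps=1e-12):
    """Compute C^(-1/2) from noise-only data.

    noise_data : (channels, samples) recording used to estimate the
                 noise covariance C.
    """
    C = np.cov(noise_data)                  # (channels, channels)
    w, V = np.linalg.eigh(C)                # C = V diag(w) V^T
    return V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T

# Toy example: spatially correlated "real" noise on 4 channels
rng = np.random.default_rng(0)
mixing = rng.standard_normal((4, 4))
noise = mixing @ rng.standard_normal((4, 5000))
W = whitening_matrix(noise)
whitened = W @ noise                        # covariance is now ~ identity
```

Applying the same W to the noisy EEG turns the spatially correlated real noise into (approximately) white noise, the case where SR worked best.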
  • 56. • According to SNR (when 16 to 64) - For all SNRs, the whitened SR case is a little noisier than the plain SR case Result 3 Whitening Real Noise 56
  • 57. • According to SNR (when 16 to 64) - For most SNRs, the whitened SR case is less correlated than the plain SR case Result 3 Whitening Real Noise 57
  • 58. • According to SNR (when 16 to 64) - At very low SNR, the error distance from whitened SR is reduced Result 3 Whitening Real Noise 58
  • 59. Source localization results, when SNR 0.5 Result 3 Whitening Real Noise 59 SR Whitened SR
  • 60. Conclusion 3 • Whitening of real noise - can be effective for SR - especially for source analysis 60
  • 62. Discussion 1 • Why simulation study? - In real EEG, it is difficult to extract only brain signal from its noise - Because of its noise, we don’t know exact dipole locations - We can’t observe the influence of noise type Exact dipole location from simulation data 62
  • 63. Discussion 2 white Gaussian noise real noise • At the same SNR - The white Gaussian noise case seems noisier than the real noise case - The ocular (eye) component dominated the real noise's overall power - It is difficult to make an equivalent comparison between them 63
  • 64. Discussion 3 • Why does SR work well at 16 to 128? - Although this holds only for the white Gaussian noise case - We can interpret it as a property of the data-driven approach 64
  • 65. Discussion 3 - A higher-dimensional target provides more fruitful information for training - But in the real noise case, it may not be useful information (network as on slide 29; upscaling targets 16 → 32, 64, 128) • Our experimental design 65
  • 66. Discussion 4 • Why did we choose a linear function for deep learning? - It is typical to use non-linear functions to extract features hyperbolic tangent function (tanh) rectified linear unit (ReLU) -1 ≤ y ≤ 1 0 ≤ y ≤ ∞ 66
  • 67. Discussion 4 • Why did we choose a linear function for deep learning? - Let's regard our problem as finding the optimal fitted line: min_θ ‖HR − f_θ(LR)‖² 67
  • 68. Discussion 4 • Why did we choose a linear function for deep learning? - Let's regard our problem as finding the optimal fitted line: min_θ ‖HR − f_θ(LR)‖² 68
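The point can be made concrete with a toy example: a stack of linear layers collapses to a single linear map, so with the linear activation y = x the network can at best learn the least-squares fit from LR to HR. The matrices and sizes below are illustrative, not the thesis' learned weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Composing linear layers yields one linear map: W3(W2(W1 x)) = (W3 W2 W1) x
W1, W2, W3 = (rng.standard_normal((4, 4)) for _ in range(3))
x = rng.standard_normal(4)
stacked = W3 @ (W2 @ (W1 @ x))            # "deep" linear network
collapsed = (W3 @ W2 @ W1) @ x            # equivalent single matrix

# 2) So the best such network is the least-squares linear fit LR -> HR
LR = rng.standard_normal((100, 16))       # 100 training trials (toy sizes)
true_map = rng.standard_normal((64, 16))  # hypothetical LR -> HR mapping
HR = LR @ true_map.T
W_opt, *_ = np.linalg.lstsq(LR, HR, rcond=None)  # minimizes ||LR W - HR||^2
```

This is why a linear activation suffices here: the task is posed as finding the optimal fitted line, and the convolutional structure only constrains which linear maps are expressible.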
  • 69. Summary 69 • Deep learning based SR may be effective on EEG - EEG SR can reduce experimental cost significantly - EEG SR can provide high resolution data without much effort
  • 70. Summary 70 • Deep learning based SR may be effective on EEG - At both sensor and source level - During SR, ideal noise can be canceled out => improved signal quality - In a real noisy environment, EEG may be acceptably super-resolved - Knowing more sensor information may be useful for SR - Whitening could be effective for SR • Limitations - However, it has the limitation of a data-driven approach => HR data are needed for training - More experiments on real EEG data are needed
  • 71. Publication • EEG super-resolution [1] Sangjun Han, Moonyoung Kwon, Sung Chan Jun, “Feasibility Study of EEG Super-Resolution Using Deep Convolutional Networks,” IEEE International Conference on Systems, Man, and Cybernetics, Oct 2018 (Submitted) [2] Sangjun Han, Moonyoung Kwon, Sunghan Lee, Sung Chan Jun, “EEG Spatial Super-Resolution Using Deep Convolutional Linear Networks : a Simulation Study,” Korean Society of Medical & Biological Engineering, Nov 2017 (Best Paper) • EEG emotion classification using deep learning [3] Sunghan Lee, Sangjun Han, Sung Chan Jun, “EEG-based Classification of Multi-class Emotional States Using One- dimensional Convolutional Neural Networks,” 7th Graz BCI Conference, July 2017 [4] Sunghan Lee, Sangjun Han, Sung Chan Jun, “Four-Class Emotion Classification Using One-dimensional Convolutional Neural Networks - An EEG Study,” Society for Neuroscience, Nov 2017 • Improving sleep quality by acoustic stimulation [5] Jinyoung Choi, Sangjun Han, Moonyoung Kwon, Hyeon Seo, Sehyeon Jang, Sung Chan Jun, “Study on Subject-Specific Parameters in Sleep Spindle Detection Algorithm,” The IEEE Engineering in Medicine and Biology Conference, July 2017 [6] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “Effect of Acoustic Stimulation after Sleep Spindle Activity,” Sleep Medicine, Oct 2017 [7] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “The Neurophysiological Effect of Acoustic Stimulation with Real-time Sleep Spindle Detection,” The IEEE Engineering in Medicine and Biology Conference, July 2018 Refereed Conference Paper 71
  • 72. References [1] D. M. Tucker, “Spatial Sampling of Head Electrical Fields: The Geodesic Sensor Net,” Electroencephalography and Clinical Neurophysiology, vol. 87, pp. 154–163, September 1993. [2] A. Sohrabpour, Y. Lu, P. Kankirawatana, J. Blount, H. Kim, and B. He, “Effect of EEG Electrode Number on Epileptic Source Localization in Pediatric Patients,” Clinical Neurophysiology, vol. 126, pp. 472–480, December 2015. [3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Representations by Back-propagating Errors,” Nature, vol. 323, pp. 533–536, October 1986. [4] G. E. Hinton, “A Practical Guide to Training Restricted Boltzmann Machines,” Lecture Notes, Department of Computer Science, University of Toronto, August 2010. [5] G. E. Dahl, D. Yu, L. Deng, and A. Acero, “Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30–42, January 2012. [6] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proceedings of the Neural Information Processing Systems, December 2012. [7] C. Dong, C. C. Loy, K. He, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, pp. 295–307, June 2015. [8] J. Kim, J. K. Lee, and K. M. Lee, “Deeply-Recursive Convolutional Network for Image Super-Resolution,” Conference on Computer Vision and Pattern Recognition, pp. 1637–1645, June 2016. [9] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network,” Conference on Computer Vision and Pattern Recognition, pp. 1874–1883, June 2016. [10] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” Conference on Computer Vision and Pattern Recognition, pp. 4681–4690, July 2017. [11] V. Kuleshov, S. Z. Enam, and S. Ermon, “Audio Super-Resolution Using Neural Nets,” Workshop of the International Conference on Learning Representations, April 2017. [12] I. A. Corley and Y. Huang, “Deep EEG Super-Resolution: Upsampling EEG Spatial Resolution with Generative Adversarial Networks,” IEEE EMBS International Conference on Biomedical & Health Informatics, March 2018. [13] K. Sekihara and S. S. Nagarajan, Adaptive Spatial Filters for Electromagnetic Brain Imaging, 1st ed., Springer-Verlag Berlin Heidelberg, 2008. [14] D. P. Kingma and J. Ba, “ADAM: A Method for Stochastic Optimization,” International Conference on Learning Representations, arXiv:1412.6980, May 2015. [15] K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification,” International Conference on Computer Vision, pp. 1026–1034, December 2015. 72