This document summarizes a master's thesis presentation on using deep convolutional networks for EEG spatial super-resolution. The study used simulated EEG data to test how different noise types and upscaling ratios affect the super-resolution process. Key findings: with white Gaussian noise, super-resolution recovered low-resolution signals beyond the quality of the high-resolution signals, while with real noise it recovered them only to the level of the high-resolution signals; higher upscaling ratios yielded better-quality signals under white noise; and whitening the real noise helped super-resolution, especially for source analysis at low SNR. Simulations were used to isolate the effects of noise type, since real EEG noise sources cannot be separated from recordings.
Feasibility of EEG Super-Resolution Using Deep Convolutional Networks
1. A Simulation Study of EEG Spatial Super-Resolution
Using Deep Convolutional Networks
2018. 05. 30
Sangjun Han
Gwangju Institute of Science and Technology
School of Electrical Engineering and Computer Science
BioComputing Lab, Prof. Sung Chan Jun
Presentation for Master’s Thesis
2. • Introduction
- Electroencephalography
- Deep Learning
- Related Work
- Motivation
• Method
- Data Generation
- Source Localization
- Data Preparation
- Deep Convolutional Networks
- Evaluation Metrics
• Results
- Result 1 – Conclusion 1
- Result 2 – Conclusion 2
- Result 3 – Conclusion 3
• Discussion
• Summary
• Publication
• References
Index
4. Electroencephalography
• Electroencephalography (EEG)
- Measures electrical potential of brain on the scalp
- Temporal and spatial dynamics
- Non-invasively measured
- Is a mixture of signals originating from brain sources
(figures: EEG systems; sensor and source levels)
5. Electroencephalography
• Improving spatial resolution of EEG
- High-density EEG hardware can be used, but it is costly
(figure: 32-, 64-, 128-, and 256-channel EEG systems; experimental cost increases with channel count)
• Resolution of EEG
- High temporal resolution
- But relatively low spatial resolution
6. Electroencephalography
• Low spatial resolution EEG...
- May cause aliasing in spatial frequency [1]
Topographical difference between 16-channel and 64-channel EEG (figure)
7. Electroencephalography
• Low spatial resolution EEG...
- Increasing the electrode number helps decrease localization error [2]
Mean source localization error for 5 subjects
8. Deep Learning
• The success of deep learning ...
- Backpropagation appeared (1986) [3]
- Weight initialization by restricted Boltzmann machine (2010) [4]
- High accuracy in speech recognition (2012) [5]
- High accuracy in image classification (2012) [6]
- Image localization, detection, segmentation, ... super-resolution!
• Image super-resolution
Super-resolution (SR): recovering a high-resolution image from a single low-resolution image
9. • Image super-resolution
SRCNN, Dong et al. 2015 [7]
DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
Related Work
10. • Image super-resolution
SRCNN, Dong et al. 2015 [7]
DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
How to optimize effectively and efficiently by restructuring the network architecture
Related Work
11. • Image super-resolution
SRCNN, Dong et al. 2015 [7]
DRCN, Kim et al. 2015 [8]
ESPCN, Shi et al. 2016 [9]
SRGAN, Ledig et al. 2016 [10]
To satisfy human visual perception with a new type of loss function
Related Work
16. • Audio super-resolution
- V. Kuleshov, 2017 [11]
- Regarded as a generative model
- Temporally up-scaled
- Bandwidth extension, thus predicting higher frequencies
Related Work
17. • EEG super-resolution
- I. A. Corley, 2018 [12]
- Mental imagery open dataset, 3 classes
- Spatially up-scaled, 16 to 32 channels (2x), 8 to 32 channels (4x)
- Evaluated SR performance by classification results
Related Work
18. Motivation
• Enhancing spatial resolution of EEG using deep learning
- Not merely interpolating a few missing channels
- Rather, scaling up the number of channels to several folds
- We can acquire high quality data without high experimental cost
- Observing properties of super-resolved EEG at sensor and source level
Super-resolution (SR)
• Limitation of previous work
- What about the properties of the super-resolved EEG signal?
19. Motivation
• Questions
1. How does noise type affect the EEG SR process?
2. How does SR deep learning work over various upscaling sizes? (2x, 4x, 8x)
3. Are there any approaches to improve signal quality during the SR process?
(figure: sensor and source levels, with white Gaussian noise and real environmental noise)
21. Data Generation
• Head model and channel information
- 3-shell spherical boundary element method (BEM)
- HydroCel GSN system (Electrical Geodesics, Inc.)
(figures: spherical head model with relative shell radii 1, 0.92, and 0.87 and conductivities brain σ = 1, skull σ = 0.0125, scalp σ = 1; GSN 128 layout)
22. Data Generation
• Noiseless scalp EEG
- Two dipoles (blue dots) were projected onto the scalp EEG sensors
- Sampled at 250 Hz; one trial lasted 1 second
23. Data Generation
• Adding noise to the scalp EEG: simulated EEG = noiseless scalp EEG + white Gaussian noise or real noise
- Real noise was measured from one subject's resting state
- SNR was adjusted to 10, 5, 1, 0.5, 0.1, 0.05, and 0.01
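The deck does not spell out how the noise was scaled to reach each target SNR; a common convention is to match the signal-to-noise power ratio. A minimal sketch under that assumption (the function name and array shapes are illustrative, not from the thesis):

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr):
    """Scale `noise` so that mean signal power / mean noise power == snr,
    then add it to `signal`. Both arrays: (channels, samples)."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (snr * p_noise))  # amplitude scale for the noise
    return signal + scale * noise

# One simulated trial: 128 channels, 1 s at 250 Hz
rng = np.random.default_rng(0)
clean = rng.standard_normal((128, 250))   # stand-in for the noiseless scalp EEG
noise = rng.standard_normal((128, 250))   # white Gaussian noise
noisy = add_noise_at_snr(clean, noise, snr=0.5)
```

For real noise, `noise` would instead hold the resting-state recording, scaled the same way.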
24. Source Localization
• Array-gain minimum-variance beamformer [13]
- Beamforming scanned at a 7 mm scanning interval over 10,000 voxels
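As a sketch of one scanning step, the minimum-variance weight for a voxel is w = C⁻¹l / (lᵀC⁻¹l); in the array-gain variant the lead-field column is first normalized to unit norm [13]. The covariance and lead field below are random stand-ins, not the thesis's forward model:

```python
import numpy as np

def array_gain_mv_weights(C, leadfield):
    """Array-gain minimum-variance beamformer weights for one voxel.
    C: (n_ch, n_ch) data covariance; leadfield: (n_ch,) lead-field column."""
    l = leadfield / np.linalg.norm(leadfield)  # array-gain normalization
    Ci_l = np.linalg.solve(C, l)               # C^{-1} l without an explicit inverse
    return Ci_l / (l @ Ci_l)                   # w = C^{-1} l / (l^T C^{-1} l)

rng = np.random.default_rng(1)
A = rng.standard_normal((16, 16))
C = A @ A.T + 16 * np.eye(16)                  # stand-in positive-definite covariance
l = rng.standard_normal(16)                    # stand-in lead-field column
w = array_gain_mv_weights(C, l)
```

By construction w has unit gain along the normalized lead-field direction, so the output power wᵀCw can be compared across the scanned voxels.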
26. Data Preparation
• Example: super-resolution from 16 to 128 channels
- From the HR signal (128 channels), select 16 channels to form the LR signal (16 channels)
27. Data Preparation
• Example: super-resolution from 16 to 128 channels
- The 16 selected channels are expanded back to a 128-channel LR input, with each missing channel interpolated as the average of its neighbors
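A minimal sketch of the neighbor-average interpolation; the real neighbor table comes from the GSN-128 montage geometry, so the indices here are toy values:

```python
import numpy as np

def neighbor_interpolate(hr, keep_idx, neighbors):
    """Build the full-channel LR input: copy the retained channels and fill
    every other channel with the mean of its retained neighbors.
    hr: (n_ch, n_samples); neighbors: {missing_ch: [retained_ch, ...]}."""
    lr = np.empty_like(hr)
    lr[keep_idx] = hr[keep_idx]
    for ch, nbrs in neighbors.items():
        lr[ch] = hr[nbrs].mean(axis=0)
    return lr

# Toy 4-channel example: keep channels 0 and 3, interpolate 1 and 2
hr = np.array([[0.0, 0.0], [9.0, 9.0], [9.0, 9.0], [2.0, 2.0]])
lr = neighbor_interpolate(hr, keep_idx=[0, 3], neighbors={1: [0, 3], 2: [0, 3]})
```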
28. Data Preparation
• Example: super-resolution from 16 to 128 channels
- Train neural networks to minimize min_θ ‖HR − f_θ(LR)‖²
- This is an ill-posed problem
- The neighbor interpolation provides a good starting initialization [7]
- Upscaling ratios: 16 to 32 (2x), 16 to 64 (4x), 16 to 128 (8x)
29. Deep Convolutional Networks
• Architecture: LR → 3 Conv layers (13×5 kernels, 64 filters) → features → 3 ConvT layers (13×9 kernels, 64 filters) → 2 Conv layers (7×1 kernel, 1 filter) → HR
• Training objective: min_θ ‖HR − f_θ(LR)‖²
• Settings
- Convolution for down-sampling
- Transposed convolution for up-sampling
- Adam optimizer (first-order gradient optimization) [14]
- He initializer [15]
- Linear activation function (y = x)
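The transposed convolutions are what grow the channel axis. With the standard output-size formula, doubling at each step can be sketched as below (the kernel and stride are illustrative choices, not the network's actual 13×9 kernels, which also require padding bookkeeping):

```python
def conv_t_out(n_in, kernel, stride, pad=0):
    """Output length of a 1-D transposed convolution (PyTorch convention)."""
    return (n_in - 1) * stride - 2 * pad + kernel

# Hypothetical settings that double the channel axis at each ConvT step
sizes = [16]
for _ in range(3):
    sizes.append(conv_t_out(sizes[-1], kernel=2, stride=2))
print(sizes)  # [16, 32, 64, 128]
```

Three such steps take a 16-channel input to 128 channels, matching the 8x upscaling case.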
30. Deep Convolutional Networks
• Dataset
- Training: 1,600 trials
- Testing: 400 trials × 50 repetitions = 20,000 trials
- Testing results were averaged for statistical stability
31. Evaluation Metrics
• Each of S_LR, S_HR, and S_SR is compared against the noiseless scalp EEG
- Mean squared error (MSE, at sensor level)
- Correlation (at sensor level)
- Error distance between dipole locations (at source level)
(S_LR: low-resolution signal, S_HR: high-resolution signal, S_SR: super-resolved signal)
32. Evaluation Metrics
• Error distance: the mean Euclidean distance between the voxels that activate above an arbitrary power threshold and the original dipoles
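The three metrics can be sketched as follows; the pairing of supra-threshold voxels with the true dipoles is not spelled out in the deck, so the nearest-dipole assignment here is an assumption:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two (n_ch, n_samples) signals."""
    return np.mean((a - b) ** 2)

def mean_channel_correlation(a, b):
    """Pearson correlation per channel, averaged over channels."""
    a0 = a - a.mean(axis=1, keepdims=True)
    b0 = b - b.mean(axis=1, keepdims=True)
    num = (a0 * b0).sum(axis=1)
    den = np.sqrt((a0 ** 2).sum(axis=1) * (b0 ** 2).sum(axis=1))
    return np.mean(num / den)

def error_distance(active_voxels, dipoles):
    """Mean Euclidean distance from each supra-threshold voxel (n_vox, 3)
    to its nearest true dipole (n_dip, 3)."""
    d = np.linalg.norm(active_voxels[:, None, :] - dipoles[None, :, :], axis=2)
    return d.min(axis=1).mean()
```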
34. Result 1: White Gaussian Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, MSE increases as SNR decreases
- For all SNRs, the SR case has the minimum loss
35. Result 1: White Gaussian Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, correlation decreases as SNR decreases
- For all SNRs, the SR case has the maximum correlation
36. Result 1: White Gaussian Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, error distance increases as SNR decreases
- For most SNRs, the SR case has the minimum error distance
37. Result 1: White Gaussian Noise
• Time series of one trial at channel E01, SNR 0.5
- S_SR captures the shape of the noiseless scalp EEG well
38. Result 1: White Gaussian Noise
• Source localization results at SNR 0.5 (HR, SR, and LR cases)
39. Result 1: White Gaussian Noise
• Source localization results at SNR 0.5
- The SR case detects the dipole positions well
40. Result 1: Real Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, MSE increases as SNR decreases
- For all SNRs, the SR case has a loss similar to the HR case
41. Result 1: Real Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, correlation decreases as SNR decreases
- For most SNRs, the SR case has a correlation similar to the HR case
42. Result 1: Real Noise
• According to SNR (16-to-64 upscaling)
- For each of the LR, HR, and SR cases, error distance increases as SNR decreases
- Except at very low SNR, the SR case has an error distance similar to the HR case
43. Result 1: Real Noise
• Time series of one trial at channel E01, SNR 0.5
- It is hard to discern a general shape in S_SR, but S_SR follows the tendency of S_HR
45. Conclusion 1
• The case of white Gaussian noise
- SR recovered S_LR beyond the level of S_HR (at both sensor and source level)
• The case of real noise
- SR recovered S_LR to the level of S_HR (at the sensor level, but less conclusively at the source level)
46. Results 2
• How does SR deep learning work over various upscaling sizes?
47. Result 2: White Gaussian Noise
• According to upscaling ratio (SNR 0.5)
- As the upscaling ratio increases, MSE decreases
- For all upscaling ratios, the SR case has the minimum loss
48. Result 2: White Gaussian Noise
• According to upscaling ratio (SNR 0.5)
- As the upscaling ratio increases, correlation increases
- For all upscaling ratios, the SR case has the maximum correlation
49. Result 2: White Gaussian Noise
• According to upscaling ratio (SNR 0.5)
- The error distance is minimal at the 16-to-128 upscaling ratio
- For all upscaling ratios, the SR case has the minimum error distance
50. Result 2: Real Noise
• According to upscaling ratio (SNR 0.5)
- SR reproduced the signal to the level of S_HR
51. Conclusion 2
• The case of white Gaussian noise
- At higher upscaling ratios, SR can recover a signal of better quality (at the sensor level, but less conclusively at the source level)
• The case of real noise
- There was no significant difference across the various upscaling ratios
52. Conclusion 1 + 2
• The case of white Gaussian noise
- SR recovered S_LR beyond the level of S_HR (at both sensor and source level)
- At higher upscaling ratios, SR can recover a signal of better quality (at the sensor level, but less conclusively at the source level)
• The case of real noise
- SR recovered S_LR to the level of S_HR (at the sensor level, but less conclusively at the source level)
- There was no significant difference across the various upscaling ratios
53. Conclusion 1 + 2
• The real-noise limitation motivates the next step: whitening!
55. Result 3: Whitening Real Noise
• Let C be the noise covariance, and let the signal be x = source signal s + noise n; then
x_whitened = C^(-1/2) x = C^(-1/2)(s + n) = C^(-1/2) s + w, where w is white noise
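A minimal sketch of this whitening step, computing C^(-1/2) from the eigendecomposition of the symmetric noise covariance (the data below are random stand-ins, not the recorded noise):

```python
import numpy as np

def whiten(x, C):
    """Apply the whitening transform C^(-1/2) x for a symmetric
    positive-definite noise covariance C; x: (channels, samples)."""
    vals, vecs = np.linalg.eigh(C)
    C_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return C_inv_sqrt @ x

# Sanity check: whitening correlated noise by its own covariance
rng = np.random.default_rng(0)
mix = rng.standard_normal((8, 8))
n = mix @ rng.standard_normal((8, 5000))  # stand-in for spatially correlated real noise
C = np.cov(n)
n_white = whiten(n, C)                    # covariance becomes the identity
```

After the transform, the residual noise term w is spatially white, which is the property the white-Gaussian-noise results relied on.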
56. Result 3: Whitening Real Noise
• According to SNR (16-to-64 upscaling)
- For all SNRs, the whitened SR case is slightly noisier than the plain SR case
57. Result 3: Whitening Real Noise
• According to SNR (16-to-64 upscaling)
- For most SNRs, the whitened SR case is less correlated than the plain SR case
58. Result 3: Whitening Real Noise
• According to SNR (16-to-64 upscaling)
- At very low SNR, the error distance of the whitened SR case is reduced
62. Discussion 1
• Why a simulation study?
- In real EEG, it is difficult to extract only the brain signal from its noise
- Because of this noise, the exact dipole locations are unknown
- So the influence of noise type cannot be observed in real data
- Simulation data provide the exact dipole locations
63. Discussion 2
• At the same SNR (white Gaussian noise vs. real noise)
- The white Gaussian noise case appears noisier than the real noise case
- The eye component dominated the real noise's overall power
- This makes an equivalent comparison between the two difficult
64. Discussion 3
• Why does SR work best at 16 to 128?
- Although this holds only for the white Gaussian noise case
- It can be interpreted as a property of the data-driven approach
65. Discussion 3
• Our experimental design: the same network upscales 16 channels to 32, 64, or 128
- A higher-dimensional answer provides more fruitful information
- But in the real-noise case, that extra information may not be useful
66. Discussion 4
• Why did we choose a linear activation function for deep learning?
- It is typical to use non-linear functions to extract features, e.g., the hyperbolic tangent (tanh, -1 ≤ y ≤ 1) or the rectified linear unit (ReLU, 0 ≤ y < ∞)
67. Discussion 4
• Why did we choose a linear activation function for deep learning?
- Our problem of minimizing min_θ ‖HR − f_θ(LR)‖² can be regarded as finding the optimal fitted line
69. Summary
• Deep learning based SR may be effective on EEG
- EEG SR can reduce experimental cost significantly
- EEG SR can provide high-resolution data without much effort
70. Summary
• Deep learning based SR may be effective on EEG, at both sensor and source level
- During SR, ideal (white) noise can be canceled out, improving signal quality
- In a real noisy environment, EEG may still be acceptably super-resolved
- Knowing more sensor information may be useful for SR
- Whitening could be effective for SR
• Limitations
- As a data-driven approach, it requires HR data for training
- More experiments on real EEG data are needed
71. Publication
• EEG super-resolution
[1] Sangjun Han, Moonyoung Kwon, Sung Chan Jun, “Feasibility Study of EEG Super-Resolution Using Deep Convolutional
Networks,” IEEE International Conference on Systems, Man, and Cybernetics, Oct 2018 (Submitted)
[2] Sangjun Han, Moonyoung Kwon, Sunghan Lee, Sung Chan Jun, “EEG Spatial Super-Resolution Using Deep Convolutional
Linear Networks: A Simulation Study,” Korean Society of Medical & Biological Engineering, Nov 2017 (Best Paper)
• EEG emotion classification using deep learning
[3] Sunghan Lee, Sangjun Han, Sung Chan Jun, “EEG-based Classification of Multi-class Emotional States Using One-
dimensional Convolutional Neural Networks,” 7th Graz BCI Conference, July 2017
[4] Sunghan Lee, Sangjun Han, Sung Chan Jun, “Four-Class Emotion Classification Using One-dimensional Convolutional
Neural Networks - An EEG Study,” Society for Neuroscience, Nov 2017
• Improving sleep quality by acoustic stimulation
[5] Jinyoung Choi, Sangjun Han, Moonyoung Kwon, Hyeon Seo, Sehyeon Jang, Sung Chan Jun, “Study on Subject-Specific
Parameters in Sleep Spindle Detection Algorithm,” The IEEE Engineering in Medicine and Biology Conference, July 2017
[6] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “Effect of Acoustic Stimulation after Sleep Spindle Activity,”
Sleep Medicine, Oct 2017
[7] Jinyoung Choi, Sangjun Han, Kyungho Won, Sung Chan Jun, “The Neurophysiological Effect of Acoustic Stimulation with
Real-time Sleep Spindle Detection,” The IEEE Engineering in Medicine and Biology Conference, July 2018
Refereed Conference Papers
72. References
[1] D. M. Tucker, “Spatial Sampling of Head Electrical Fields: The Geodesic Sensor Net,” Electroencephalography and
Clinical Neurophysiology, vol. 87, pp. 154–163, September 1993.
[2] A. Sohrabpour, Y. Lu, P. Kankirawatana, J. Blount, H. Kim, and B. He, “Effect of EEG Electrode Number on Epileptic
Source Localization in Pediatric Patients,” Clinical Neurophysiology, vol. 126, pp. 472-480, December 2015.
[3] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning Representations by Back-propagating Errors,” Nature,
vol. 323, pp. 533-536, October 1986.
[4] G. E. Hinton, “A Practical Guide to Training Restricted Boltzmann Machines,” Lecture Notes in Department of
Computer Science, University of Toronto, August 2010.
[5] D. George, Y. Dong, D. Li and A. Alex, “Context-Dependent Pre-Trained Deep Neural Networks for Large-
Vocabulary Speech Recognition,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 1, pp. 30-
42, January 2012.
[6] K. Alex, S. Ilya and H. Geoffrey, “ImageNet Classification with Deep Convolutional Neural Networks,” in Proceedings
of the Neural Information Processing Systems, December 2012.
[7] C. Dong, C. C. Loy, and X. Tang, “Image Super-Resolution Using Deep Convolutional Networks,” IEEE Transactions
on Pattern Analysis and Machine Intelligence, vol. 38, pp. 295–307, June 2015.
[8] J. Kim, J. K. Lee, and K. M. Lee, “Deeply-Recursive Convolutional Network for Image Super-Resolution,” Conference
on Computer Vision and Pattern Recognition, pp. 1637–1645, June 2016.
[9] W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-Time Single Image
and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network,” Conference on Computer
Vision and Pattern Recognition, pp. 1874–1883, June 2016.
[10] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W.
Shi, “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network,” Conference on Computer
Vision and Pattern Recognition, pp. 4681–4690, July 2017.
[11] V. Kuleshov, S. Z. Enam, and S. Ermon, “Audio Super-Resolution Using Neural Nets,” Workshop of International
Conference on Learning Representations, April 2017.
[12] I. A. Corley, and Y. Huang, “Deep EEG Super-Resolution: Upsampling EEG Spatial Resolution with Generative
Adversarial Networks,” IEEE EMBS International Conference on Biomedical & Health Informatics, March 2018
[13] K. Sekihara, and S. S. Nagarajan, Adaptive Spatial Filters for Electromagnetic Brain Imaging, 1st ed., Springer-Verlag
Berlin Heidelberg, 2008.
[14] D. P. Kingma, and J. Ba, “Adam: A Method for Stochastic Optimization,” International Conference on Learning
Representations, arXiv:1412.6980, May 2015.
[15] K. He, X. Zhang, S. Ren, and J. Sun, “Delving Deep into Rectifiers: Surpassing Human-Level Performance on
ImageNet Classification,” International Conference on Computer Vision, pp. 1026–1034, December 2015.