This document is a dissertation submitted by Steven Berard in partial fulfillment of the requirements for a Master of Science degree in Electrical Engineering from New Mexico State University. The dissertation evaluates the ability of artificial neural networks to localize dipole sources within a high-resolution realistic head model using different sensor configurations and levels of noise. Results show that neural networks can localize dipoles with accuracy even in the presence of noise, and that performance generally improves with more sensors and complex network architectures.
This master's thesis examines the design and simulation of next-generation label-free biosensors based on planar waveguides. Existing wavelength interrogated optical sensor chips are modeled and compared to experimental data. Passive sensor designs utilizing Bragg gratings at 850 nm and 1550 nm wavelengths are developed, simulated, and evaluated. The designs include single Bragg grating, double Bragg grating, and resonator structures. Recommendations are made for three sensor designs for production. Potential active sensor designs incorporating waveguide amplifiers are also discussed theoretically.
This thesis examines methods for reducing the memory and complexity requirements of deep learning models to enable processing and learning on chip. It reviews techniques for compressing model size and operations count, such as pruning connections, quantization, and lightweight architectures. It also introduces a new shift attention layer method for replacing convolutions with multiplications. The thesis also studies incremental learning approaches that can continuously update models as new data becomes available. Hardware implementations of these compressed models and learning methods are explored to enable deep learning inference and training directly on embedded systems.
This bachelor thesis examines applying machine learning algorithms to generate checking circuits. It describes algorithms like artificial neural networks, decision trees, random forests, nearest neighbors, and others. Experiments were conducted classifying sine/cosine values and components of a robot controller, including a Position Evaluation Unit, Barrier Detection Unit, and Direction Unit. The experiments compared classifiers' performance using different data representations and parameter settings. Random forest and multilayer perceptron classifiers generally achieved the highest accuracy.
This dissertation proposes using neural networks and field programmable gate arrays to control reconfigurable antennas. A new approach is presented to model reconfigurable antennas using neural networks trained in Matlab. The neural network model is then implemented on an FPGA board using Xilinx System Generator blocks. With the neural network embedded on the FPGA board, it acts as a real-time controller for the reconfigurable antenna to optimize its configuration based on the antenna behavior it has learned. Several examples of reconfigurable antenna modeling and FPGA-based neural network control are provided to demonstrate the approach.
This document is the master's thesis of Miquel Perelló Nieto submitted to Aalto University. The thesis examines merging chrominance and luminance in early, medium, and late fusion using Convolutional Neural Networks (CNNs) for image classification. The thesis demonstrates that fusing luminance and chrominance channels can improve CNNs' ability to learn visual features and outperforms models that do not fuse the channels. The thesis contains background chapters on image classification, neuroscience, artificial neural networks, CNNs, and the history of connectionism. It then describes the author's experiments comparing CNN architectures that fuse luminance and chrominance channels at different stages to a basic CNN model.
This document presents a master's thesis that designed coded excitation and filter techniques to improve 3D ultrasound computer tomography (USCT) imaging for early breast cancer detection. The thesis aimed to suppress side lobes and increase separability of reflections in USCT data by developing customized mismatch filters. Signal and image evaluations showed the best designed filter and coded excitation combination improved image contrast by 143% compared to the standard USCT approach, under the same system constraints.
Trade-off between recognition and reconstruction: Application of Robotics Visi... - stainvai
Autonomous and efficient action of robots requires a robust robot vision system that can cope with variable light and view conditions. These include partial occlusion, blur, and mainly a large scale difference of object size due to variable distance to the objects. This change in scale leads to reduced resolution for objects seen from a distance. One of the most important tasks for the robot's visual system is object recognition. This task is also affected by orientation and background changes. These real-world conditions require the development of specific object recognition methods.
This work is devoted to robotic object recognition. We develop recognition methods based on training that incorporates prior knowledge about the problem. The prior knowledge is incorporated via learning constraints during training (parameter estimation). A significant part of the work is devoted to the study of reconstruction constraints. In general, there is a trade-off between the prior-knowledge constraints and the constraints emerging from the classification or regression task at hand. To avoid additionally estimating the optimal trade-off between these two constraints, we treat this trade-off as a hyperparameter (under a Bayesian framework) and integrate over a certain (discrete) distribution. We also study various constraints resulting from information-theoretic considerations.
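The integration over the trade-off hyperparameter can be illustrated with a minimal sketch. The function name, the convex (1-λ)/λ weighting, and the uniform discrete prior are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def combined_loss(task_loss, recon_loss,
                  lambdas=(0.0, 0.25, 0.5, 0.75, 1.0), weights=None):
    """Combine a task (classification/regression) loss with a reconstruction
    constraint. Instead of tuning a single optimal trade-off weight, the
    trade-off is treated as a hyperparameter and averaged over a discrete
    distribution (uniform by default)."""
    lambdas = np.asarray(lambdas, dtype=float)
    if weights is None:
        weights = np.full(len(lambdas), 1.0 / len(lambdas))  # uniform prior
    # Per-lambda objective: (1 - lambda) * task term + lambda * reconstruction term
    per_lambda = (1.0 - lambdas) * task_loss + lambdas * recon_loss
    return float(np.sum(weights * per_lambda))
```

Under the uniform prior this reduces to the task and reconstruction losses weighted by the mean of the λ grid, which avoids committing to any single trade-off value.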
Experimental results on two face datasets are presented. Significant improvement in face recognition is achieved for various image degradations, such as various forms of image blur, partial occlusion, and noise. Additional improvement in recognition performance is achieved when the degraded images are preprocessed with state-of-the-art image restoration techniques.
This document is a thesis presented by Miguel de Vega Rodrigo to obtain a doctorate in engineering sciences from the Université libre de Bruxelles in 2008. The thesis models future all-optical networks without buffering capabilities, specifically optical burst switching (OBS) and optical packet switching (OPS) networks. It covers the functional and hardware implementation of such networks, characterization of internet traffic that will enter these networks, and mathematical modeling approaches for the networks and traffic.
An investigation into the building blocks for Neural Networks and modern day machine learning. This investigation touches on the evolution of the most basic of neural networks to more modern day concepts, particularly in methodologies that allow better training of these networks to produce more accurate real-life models.
Im-ception - An exploration into facial PAD through the use of fine tuning de... - Cooper Wakefield
This document is a thesis submitted by Cooper Wakefield to the University of Queensland for the degree of Bachelor of Engineering. The thesis proposes developing a presentation attack detection (PAD) system through fine tuning a deep convolutional neural network. It aims to leverage pre-trained networks and fine tune the upper layers to differentiate between real and fake facial images with a high degree of accuracy. The thesis outlines the problem of presentation attacks on facial recognition systems, reviews prior approaches to PAD, and describes the proposed solution of using transfer learning on a CNN to classify images as real or fake.
This document proposes using MPLS network recovery models to enable smart grid communications. It discusses how the existing electric grid needs to transform into a smart grid to handle increasing energy demands. A robust, reliable, self-healing WAN is needed for smart grid monitoring and control. MPLS recovery models are proposed and simulated to provide different classes of service with minimal disruption times. The models aim to overcome challenges from fiber cuts, component failures or natural disasters.
This document describes research into designing an optimal acoustic enclosure for a portable generator. It investigates how the enclosure's material properties, panel dimensions, and source-to-panel distance affect the enclosure's insertion loss based on theoretical models. Prototypes were constructed and tested to compare actual noise reduction to predicted values from the insertion loss model. An optimization code was developed to determine the optimal material properties for maximum insertion loss given constraints like cost and weight.
This dissertation by Harsh Pandey investigates the manipulation and separation of objects at the microscale using simulations and microfluidic experiments. The document includes 7 chapters that cover: 1) developing a coarse-grained model to simulate the conformation-dependent electrophoretic mobility of polymers, 2) using the model to simulate the trapping and manipulation of deformable objects in microfluidic devices, 3) simulating the trapping of rigid objects using electric and flow fields, 4) microfluidic experiments observing the behavior of particles under flow and electric fields, 5) simulating the self-assembly of polymers at liquid interfaces, and 6) conclusions and potential future directions. The overall goal is to better understand and control the manipulation and separation of objects at the microscale.
This document provides an overview of VLSI-compatible implementations for artificial neural networks. It begins with an introduction and motivation for the work. The objectives are to develop generalized artificial neural network models and architectures that can be implemented using standard VLSI technologies. Various hardware implementation techniques for neural networks are reviewed, including pulse-coded, digital, and analog approaches. Different analog implementations like resistive synaptic weights, switched capacitor neurons, current-mode and sub-threshold designs are discussed. The document concludes with a comparison of some existing neural network hardware systems and a summary of the chapter.
This document is a dissertation submitted by Aniket Pingley to The George Washington University in partial fulfillment of the requirements for the degree of Doctor of Science in August 2011. The dissertation addresses privacy issues related to location-based services and introduces four client-centric privacy protection systems - CAP, BACK-TRACK, DUMMY-Q, and Digital Marauder's Map. For each system, the dissertation presents theoretical analysis and experimental evaluation to demonstrate the effectiveness of the proposed techniques on privacy protection and efficiency.
This thesis proposes and evaluates a compressive sensing (CS)-based indoor positioning and tracking system using received signal strength (RSS) from wireless local area network access points. The system is designed and implemented on mobile devices with limited resources.
In the offline phase, RSS fingerprints are collected and clustered using affinity propagation. In the online phase, coarse localization is done by matching RSS measurements to precomputed clusters, and fine localization refines the position using CS recovery on the sparse location signal.
An indoor tracking system is also presented, which integrates the CS-based positioning with a Kalman filter for sequential location estimates. Experimental results on two testbeds show the system achieves better accuracy than other fingerprinting methods and is suitable for implementation on resource-constrained mobile devices.
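The two-stage coarse/fine localization can be sketched generically as follows. This is not the thesis's implementation: the function names are illustrative, the exemplars are assumed to come from an affinity-propagation clustering of the offline fingerprints, and the CS recovery is simplified to the 1-sparse case, where a single matching-pursuit step suffices:

```python
import numpy as np

def coarse_localize(rss, exemplars):
    """Coarse stage: pick the cluster whose exemplar fingerprint is closest
    (Euclidean distance) to the online RSS reading.
    exemplars: (n_clusters, n_aps) array of cluster exemplar fingerprints."""
    distances = np.linalg.norm(exemplars - rss, axis=1)
    return int(np.argmin(distances))

def fine_localize(rss, cluster_fps):
    """Fine stage, simplified: the location indicator vector is assumed
    1-sparse, so one matching-pursuit step (maximum correlation between the
    measurement and the normalized fingerprint columns) recovers its support.
    cluster_fps: (n_aps, n_locations) fingerprint matrix of the chosen cluster."""
    cols = cluster_fps / np.linalg.norm(cluster_fps, axis=0, keepdims=True)
    y = rss / np.linalg.norm(rss)
    return int(np.argmax(np.abs(cols.T @ y)))
```

A full CS recovery would replace the single correlation step with an l1-minimization or iterative greedy solver over the sparse location vector; the coarse stage keeps that problem small enough for a mobile device.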
This document is a thesis submitted by Uppu Karthik for the degree of Master of Science in Electrical Engineering at IIT Madras in February 2014. The thesis describes the design, simulation, fabrication and characterization of a compact 2x2 integrated optical Mach-Zehnder interferometer in silicon-on-insulator platform for dense wavelength division multiplexing applications. Key aspects covered include the design of single-mode waveguides and multi-mode interference based power splitters, fabrication using photolithography and reactive ion etching, and experimental characterization showing polarization independent operation over 1520-1600nm with 15dB extinction and 100GHz bandwidth.
This dissertation describes the development of a monostatic all-fiber laser detection and ranging (LADAR) system. A key innovation is a fused-fiber coupler multiplexer that transmits laser light through its core and receives returned light through its cladding. This allows for a compact, aligned transmit/receive path. The dissertation demonstrates a static 1D rangefinder and a scanning 3D LADAR using a vibrating fiber scanner and position sensor. The goal is a small, low-power, low-cost LADAR suitable for unmanned vehicles.
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com... - Alexander Zhdanov
This master's thesis investigates efficiency optimization techniques for real-time GPU raytracing used in modeling car-to-car communication systems. Specifically, it aims to improve the simulation of the propagation channel through ray reordering and caching. The research analyzes existing caching schemes exploiting frame coherence, GPU data structures, and ray reordering techniques. It proposes algorithms for ray sorting on the CPU and caching tracing data. The thesis then implements and evaluates the proposed methods, analyzing system performance for static and dynamic scenes. Testing shows ray reordering significantly increases efficiency, though caching provides varying benefits depending on the scheme used.
This document summarizes Ali Farzanehfar's research on mitigating anomalous spike signals observed in the CMS barrel electromagnetic calorimeter. Spikes mimic real electron and photon signals and can reduce the efficiency of the CMS trigger system if not addressed. The document investigates the distinguishing properties of spikes, current mitigation techniques, and uses a Monte Carlo simulation to evaluate potential improvements like optimizing the shaping time, digitization phase, and number of digitized samples. Tuning these parameters was found to better separate spike and electromagnetic shower pulses and improve the efficiency of spike rejection while maintaining high acceptance of real signals.
Repeated Games For Inter-operator Spectrum Sharing - Bikramjit Singh, MSc Thesis, Aalto University, 2014
Other works
Coordination protocol for inter-operator spectrum sharing based on spectrum usage favors - http://arxiv.org/pdf/1505.02898v1.pdf
Co-primary inter-operator spectrum sharing using repeated games - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=7024767&queryText%3Dco-primary+sharing
Repeated spectrum sharing games in multi-operator heterogeneous networks -
Coordination protocol for inter-operator spectrum sharing in co-primary 5G small cell networks - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7158263&filter%3DAND%28p_IS_Number%3A7158253%29
Intermediate description of the spectrum needs and usage principles - https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D5.1_v1.pdf
The use of synchrophasors for monitoring and improving the stability of power transmission networks is gaining significance all over the world. The aim is to monitor the system state, to heighten awareness of system stability, and to make optimal use of existing lines. In this way, overall system stability can be improved and even transmission performance increased. The data from many PMUs and PDCs needs to be collected and directed to the proper channels for efficient use. We therefore need an efficient, flexible, hybrid data concentrator that can serve this purpose. Besides accepting data from PMUs, a PDC should also be able to accept data from other PDCs. We have designed such a PDC (iPDC) that accepts data from PMUs and PDCs that are IEEE C37.118 standard compliant.
The WAMS architecture places the iPDC and PMUs at different levels. This architecture enables an iPDC to receive data either from a PMU or from another iPDC. Both the PMU and the iPDC from which data is received should be IEEE C37.118 synchrophasor standard compliant. It is a hybrid architecture.
iPDC Design
The client-server architecture is common in networks where two peers communicate with each other. Of the two peers (PMU and iPDC) communicating in WAMS, one acts as a client and the other as a server. Since the PMU serves requests coming from the iPDC by replying with data or configuration frames, it acts as a server: it listens for command frames from the iPDC. PMU-iPDC communication can run over either TCP or UDP. On receiving a command frame, the PMU replies to the iPDC with data or configuration frames according to the type of request.
iPDC functionality is bifurcated into server and client roles. iPDC as a Client - When the iPDC receives data or configuration frames, it acts as a client. In this role it creates a new thread for each PMU or PDC from which it will receive data/configuration frames. This thread establishes the connection between the two communicating entities and handles both TCP and UDP connections. The first frame the server (PMU/PDC) receives is the command requesting the configuration frame. When the server replies with the configuration frame, the iPDC (client) issues another request to start sending data frames. On receiving such a command frame, the server starts sending data frames. If the client (iPDC) notices a change in the status bits of a data frame, it takes action; for example, if bit 10 (the configuration-change bit) is set, it internally sends the server a command requesting the latest configuration frame.
iPDC as a Server - When the iPDC receives command frames from another PDC, it acts as a server. Two ports are reserved, one for UDP and one for TCP, on which the PDC receives command frame requests. The PDC thus plays the role of a PMU waiting for command frames.
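The command-frame exchange the iPDC client performs (request CFG-2, then turn data transmission on) can be sketched as follows. The fixed 18-byte command frame layout, the CRC-CCITT checksum, and the command codes follow IEEE C37.118; the function names are illustrative, not the iPDC source code:

```python
import struct
import time

def crc_ccitt(data: bytes) -> int:
    """CRC-CCITT (x^16 + x^12 + x^5 + 1), initial value 0xFFFF,
    as used for the checksum field of IEEE C37.118 frames."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

# Command codes from IEEE C37.118:
CMD_DATA_OFF = 1   # turn off data transmission
CMD_DATA_ON = 2    # turn on data transmission
CMD_SEND_CFG2 = 5  # request CFG-2 configuration frame

def build_command_frame(idcode: int, command: int, soc=None, fracsec=0) -> bytes:
    """Build an 18-byte IEEE C37.118 command frame:
    SYNC(2) FRAMESIZE(2) IDCODE(2) SOC(4) FRACSEC(4) CMD(2) CHK(2)."""
    if soc is None:
        soc = int(time.time())  # second-of-century timestamp
    sync = 0xAA41       # 0xAA lead-in; 0x41 = command frame, version 1
    framesize = 18      # fixed size when no extended data is attached
    body = struct.pack('>HHHIIH', sync, framesize, idcode, soc, fracsec, command)
    return body + struct.pack('>H', crc_ccitt(body))
```

In the handshake described above, the client would first send `build_command_frame(idcode, CMD_SEND_CFG2)`, parse the configuration frame reply, and then send `CMD_DATA_ON` to start the data stream.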
Depth sensor independent body part localization in depth images using a multi... - Rasmus Johansson
This thesis explores using multiple depth cameras to improve body part localization of occluded joints. A random forest algorithm is trained on low-resolution depth images to classify pixels and estimate joint positions. Testing shows this multi-camera approach provides more stable estimations than a single camera, though limitations of the low-resolution training data impact generalization to high-resolution Kinect images. Overall, the results are satisfactory on low-resolution data, but difficulties arise when applying the model to real Kinect footage.
This document provides an extensive literature review and overview of automatic text summarization from multiple documents. It discusses definitions of text summarization and the summarization process, which includes steps like domain definition, subject analysis, data analysis, feature generation, information aggregation, summary representation, generation and evaluation. It also describes using n-gram graphs as a text representation for summarization and the operators and algorithms involved. Finally it discusses evaluation of summarization systems and using background knowledge to improve the summarization process.
This thesis examines bifacial photovoltaic modules through simulation and experiment. A software tool is developed to simulate module performance by modeling the optical and electrical characteristics. Simulation results show bifacial gains of up to 35% for stand-alone modules and over 60% for modules with tracking systems. Outdoor experiments validate the simulation findings regarding factors like reflective surface size and blocking effects between modules. Detailed long-term measurements are also taken of a bifacial installation. The research aims to better understand bifacial technology and its potential to further reduce the cost of solar energy.
20150202 bunk-alliance RA-GmbH anonymous_en - Bunk, Artur
The document provides information about the law firm bunk-alliance Rechtsanwaltsgesellschaft mbH. It lists the firm's offices in Berlin, Worms, Frankfurt, and Warsaw. It identifies the executive directors and attorneys. The firm's practice areas include M&A transactions, corporate law, finance and banking law, international tax law, intellectual property law, and unfair competition.
This thesis proposes and evaluates a compressive sensing (CS)-based indoor positioning and tracking system using received signal strength (RSS) from wireless local area network access points. The system is designed and implemented on mobile devices with limited resources.
In the offline phase, RSS fingerprints are collected and clustered using affinity propagation. In the online phase, coarse localization is done by matching RSS measurements to precomputed clusters, and fine localization refines the position using CS recovery on the sparse location signal.
An indoor tracking system is also presented, which integrates the CS-based positioning with a Kalman filter for sequential location estimates. Experimental results on two testbeds show the system achieves better accuracy than other fingerprinting methods, suitable for implementation
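The coarse localization step described above can be sketched as a nearest-exemplar match. The names and toy fingerprint values below are illustrative, not from the thesis; the exemplars stand in for the cluster centers produced offline by affinity propagation.

```python
import numpy as np

def coarse_localize(rss_online, exemplars):
    """Match an online RSS reading to the nearest cluster exemplar.

    rss_online : (n_aps,) RSS vector measured by the mobile device
    exemplars  : (n_clusters, n_aps) exemplar fingerprints, e.g. the
                 cluster centers produced offline by affinity propagation
    Returns the index of the best-matching cluster.
    """
    d = np.linalg.norm(exemplars - rss_online, axis=1)
    return int(np.argmin(d))

# Toy example: three cluster exemplars over four access points (dBm).
exemplars = np.array([[-40.0, -70.0, -80.0, -90.0],
                      [-75.0, -45.0, -85.0, -70.0],
                      [-90.0, -80.0, -50.0, -60.0]])
reading = np.array([-72.0, -47.0, -88.0, -69.0])   # noisy copy of cluster 1
print(coarse_localize(reading, exemplars))          # -> 1
```

Fine localization would then recover the position within the matched cluster via sparse (CS) recovery, which is omitted here.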
This document is a thesis submitted by Uppu Karthik for the degree of Master of Science in Electrical Engineering at IIT Madras in February 2014. The thesis describes the design, simulation, fabrication and characterization of a compact 2x2 integrated optical Mach-Zehnder interferometer in silicon-on-insulator platform for dense wavelength division multiplexing applications. Key aspects covered include the design of single-mode waveguides and multi-mode interference based power splitters, fabrication using photolithography and reactive ion etching, and experimental characterization showing polarization independent operation over 1520-1600nm with 15dB extinction and 100GHz bandwidth.
This dissertation describes the development of a monostatic all-fiber laser detection and ranging (LADAR) system. A key innovation is a fused-fiber coupler multiplexer that transmits laser light through its core and receives returned light through its cladding. This allows for a compact, aligned transmit/receive path. The dissertation demonstrates a static 1D rangefinder and a scanning 3D LADAR using a vibrating fiber scanner and position sensor. The goal is a small, low-power, low-cost LADAR suitable for unmanned vehicles.
Efficiency Optimization of Realtime GPU Raytracing in Modeling of Car2Car Com... - Alexander Zhdanov
This master's thesis investigates efficiency optimization techniques for real-time GPU raytracing used in modeling car-to-car communication systems. Specifically, it aims to improve the simulation of the propagation channel through ray reordering and caching. The research analyzes existing caching schemes exploiting frame coherence, GPU data structures, and ray reordering techniques. It proposes algorithms for ray sorting on the CPU and caching tracing data. The thesis then implements and evaluates the proposed methods, analyzing system performance for static and dynamic scenes. Testing shows ray reordering significantly increases efficiency, though caching provides varying benefits depending on the scheme used.
This document summarizes Ali Farzanehfar's research on mitigating anomalous spike signals observed in the CMS barrel electromagnetic calorimeter. Spikes mimic real electron and photon signals and can reduce the efficiency of the CMS trigger system if not addressed. The document investigates the distinguishing properties of spikes, current mitigation techniques, and uses a Monte Carlo simulation to evaluate potential improvements like optimizing the shaping time, digitization phase, and number of digitized samples. Tuning these parameters was found to better separate spike and electromagnetic shower pulses and improve the efficiency of spike rejection while maintaining high acceptance of real signals.
Repeated Games For Inter-operator Spectrum Sharing - Bikramjit Singh, MSc Thesis, Aalto University, 2014
Other works
Coordination protocol for inter-operator spectrum sharing based on spectrum usage favors - http://arxiv.org/pdf/1505.02898v1.pdf
Co-primary inter-operator spectrum sharing using repeated games - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=7024767&queryText%3Dco-primary+sharing
Repeated spectrum sharing games in multi-operator heterogeneous networks -
Coordination protocol for inter-operator spectrum sharing in co-primary 5G small cell networks - http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=7158263&filter%3DAND%28p_IS_Number%3A7158253%29
Intermediate description of the spectrum needs and usage principles - https://www.metis2020.com/wp-content/uploads/deliverables/METIS_D5.1_v1.pdf
The use of synchrophasors for monitoring and improving the stability of power transmission networks is gaining significance all over the world. The aim is to monitor the system state, to raise awareness of system stability, and to make optimal use of existing lines. In this way, overall system stability can be improved and transmission performance can even be increased. The data from the many PMUs and PDCs needs to be collected and directed to the proper channels for efficient use. We therefore need an efficient, flexible, hybrid data concentrator that can serve this purpose. Besides accepting data from PMUs, a PDC should also be able to accept data from other PDCs. We have designed such a PDC (iPDC) that accepts data from PMUs and PDCs that are IEEE C37.118 standard compliant.
The WAMS architecture places iPDC and PMU units at different levels. This architecture enables an iPDC to receive data either from a PMU or from another iPDC. Both the PMU and the iPDC from which data is received should be IEEE C37.118 synchrophasor standard compliant. It is a hybrid architecture.
iPDC Design
The client-server architecture is common in networks where two peers communicate with each other. Of the two peers (PMU and iPDC) communicating in WAMS, one acts as a client and the other as a server. Since the PMU serves requests coming from the iPDC by sending data or configuration frames, it acts as a server. It listens for command frames from the iPDC. PMU-iPDC communication can run over either TCP or UDP. On receiving a command frame, the PMU replies to the iPDC with data or configuration frames according to the type of request.
iPDC functionality is split into server and client roles. iPDC as a client - When the iPDC receives data or configuration frames, it acts as a client. When acting as a client, it creates a new thread for each PMU or PDC from which it will receive data/configuration frames. This thread establishes the connection between the two communicating entities and handles both TCP and UDP connections. The first frame the server (PMU/PDC) receives is the command to send its configuration frame. When the server replies with the configuration frame, the iPDC (client) generates another request to start sending data frames. On receiving such a command frame, the server starts sending data frames. If the client (iPDC) notices a change in the status bits of a data frame, it takes action. For example, if it notices that bit 10 has been set, it internally sends a command to the server to send the latest configuration frame.
iPDC as a server - When the iPDC receives command frames from another PDC, it acts as a server. There are two reserved ports, one for UDP and the other for TCP, on which the PDC receives command frame requests. The PDC thus plays the role of a PMU waiting for command frames.
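The client handshake described above (request the configuration frame, then turn on data transmission) can be sketched as follows. This is a minimal Python illustration, not the actual iPDC code; the frame layout (SYNC, FRAMESIZE, IDCODE, SOC, FRACSEC, CMD, CHK), the command codes, and the CRC-CCITT checksum follow my reading of IEEE C37.118 and should be verified against the standard.

```python
import struct, time

def crc_ccitt(data: bytes) -> int:
    # CRC-CCITT (X^16 + X^12 + X^5 + 1), initial value 0xFFFF,
    # used for the CHK field of C37.118 frames.
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def command_frame(idcode: int, cmd: int, soc: int = None) -> bytes:
    """Build an 18-byte C37.118 command frame.

    cmd = 5 asks the PMU/PDC for its CFG-2 configuration frame,
    cmd = 2 turns on data transmission (codes per the standard).
    """
    if soc is None:
        soc = int(time.time())
    sync = 0xAA41            # 0xAA + command-frame type, version 1
    framesize = 18
    fracsec = 0
    body = struct.pack('>HHHIIH', sync, framesize, idcode, soc, fracsec, cmd)
    return body + struct.pack('>H', crc_ccitt(body))

frame = command_frame(idcode=60, cmd=5, soc=0)   # request CFG-2 from PMU id 60
print(len(frame))                                 # -> 18
```

The iPDC client would send such a frame over its TCP or UDP socket, parse the returned configuration frame, and then send `cmd=2` to start the data stream.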
Depth sensor independent body part localization in depth images using a multi... - Rasmus Johansson
This thesis explores using multiple depth cameras to improve body part localization of occluded joints. A random forest algorithm is trained on low-resolution depth images to classify pixels and estimate joint positions. Testing shows this multi-camera approach provides more stable estimations than a single camera, though the limitations of low-resolution training data impact generalization to high-resolution Kinect images. Overall, the results are satisfactory on low-resolution data, but the model struggles when applied to real Kinect footage.
This document provides an extensive literature review and overview of automatic text summarization from multiple documents. It discusses definitions of text summarization and the summarization process, which includes steps like domain definition, subject analysis, data analysis, feature generation, information aggregation, summary representation, generation and evaluation. It also describes using n-gram graphs as a text representation for summarization and the operators and algorithms involved. Finally it discusses evaluation of summarization systems and using background knowledge to improve the summarization process.
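The n-gram graph representation mentioned above can be sketched minimally: nodes are character n-grams, and weighted edges connect n-grams that co-occur within a sliding window. This toy sketch is illustrative only; the function name and parameter choices are assumptions, not the document's.

```python
from collections import defaultdict

def ngram_graph(text, n=3, window=3):
    """Build a character n-gram graph: nodes are n-grams, and an edge
    (weighted by co-occurrence count) links n-grams whose start
    positions are at most `window` apart."""
    grams = [text[i:i+n] for i in range(len(text) - n + 1)]
    edges = defaultdict(int)
    for i, g in enumerate(grams):
        for j in range(i + 1, min(i + window + 1, len(grams))):
            edges[(g, grams[j])] += 1
    return edges

g = ngram_graph("summarization", n=3, window=2)
print(("sum", "umm") in g)   # -> True
```

Summarization operators then compare or merge such graphs, e.g. scoring a candidate summary by the similarity of its graph's edge set to the source documents' graphs.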
This thesis examines bifacial photovoltaic modules through simulation and experiment. A software tool is developed to simulate module performance by modeling the optical and electrical characteristics. Simulation results show bifacial gains of up to 35% for stand-alone modules and over 60% for modules with tracking systems. Outdoor experiments validate the simulation findings regarding factors like reflective surface size and blocking effects between modules. Detailed long-term measurements are also taken of a bifacial installation. The research aims to better understand bifacial technology and its potential to further reduce the cost of solar energy.
20150202 bunk-alliance RA-GmbH anonymous_en - Bunk, Artur
The document provides information about the law firm bunk-alliance Rechtsanwaltsgesellschaft mbH. It lists the firm's offices in Berlin, Worms, Frankfurt, and Warsaw. It identifies the executive directors and attorneys. The firm's practice areas include M&A transactions, corporate law, finance and banking law, international tax law, intellectual property law, and unfair competition.
This document describes the main parts of a keyboard, including the keys, connectors, logic board, back cover, membrane, screws, and plastic circuit board. It briefly explains the function of each part, such as producing an effect when keys are pressed, connecting other elements, processing the information from pressed keys, and maintaining the keyboard's structure and key tension.
Fred Kampo has a Bachelor of Science in Business Administration from the University of Colorado Boulder with a 3.2 GPA. He has over 5 years of experience in customer service, sales, and loan documentation. His experience includes positions at Silver Star Brands, US Bank CLS, UMC Connection, and CU Call Center. He is proficient in Microsoft Office and has strong communication, business, and problem-solving skills.
The document discusses the color scheme for a 2 minute horror/thriller film opening. It recommends a scheme using dark, dull colors like black, grey, and shades of red, as these colors are conventional for the horror genre. Examples of existing horror films that use similar dark color schemes, like Shutter Island, provide evidence and inspiration for this choice.
This presentation was created for a seminar by Kate Austin-Avon of Advokate that took place in summer of 2013 at LARAC's Lapham Gallery.
www.advokate.net
This document discusses how continuous delivery can help achieve business goals such as speeding time-to-market, increasing deployment success, enabling rapid emergency responses, and reducing business risk. It recommends using continuous delivery practices like quick automated code builds and deployments, testing in a production-like environment, and improved visibility to help meet these goals. The document also presents a continuous delivery model showing the flow from idea to deployment using practices such as collaboration, communication, and getting customer feedback at each stage.
2015 EGW Special Projects Division Brochure - Jeff Wiegers
This document provides information on products and services from White Rhino, including:
- Gaskets, bolts, nuts, specialty fasteners, pipe wraps, corrosion protection, and flange insulation kits.
- The Special Projects Division focuses on quoting water and wastewater treatment plant projects.
- Profiles for Jeff Wiegers and Philip Peterson, quotations specialists in the Special Projects Division.
- A list of notable projects the Special Projects Division has worked on.
The document presents information on petroleum or crude oil. It discusses the history of petroleum usage dating back thousands of years, describes where petroleum is most commonly found, and outlines the petroleum industry including exploration, refining, and major products like gasoline and plastics. Both advantages like being an energy source and disadvantages like contributing to global warming are mentioned. The document concludes by discussing conflicts between countries over petroleum resources and their importance.
This document is a resume for Ginger Lanmon, a reporter and photographer with 16 years of experience at the Hamilton Herald-News. It provides her contact information, work history including roles as a reporter, substitute teacher, and volunteer, as well as her education and references. The resume demonstrates Lanmon's experience covering local news stories and events in her community through organized and impartial reporting. It also outlines her volunteer work as a Court Appointed Special Advocate, where she advocated for the needs and interests of children in the court system.
This short document promotes creating presentations using Haiku Deck on SlideShare. It encourages the reader to get started making their own Haiku Deck presentation by providing a button to click to begin the process. The document is advertising the creation of presentations on Haiku Deck and SlideShare.
This document describes different types of RAM, including:
- Dynamic RAM (DRAM), which must be refreshed constantly to retain its contents.
- Static RAM (SRAM), which is faster than DRAM but requires more space and power.
- Faster types of RAM such as EDO, SDRAM, DDR SDRAM, and DDR3 SDRAM, which improve performance by doubling the data transfer rate.
RAM and ROM are fundamental components of a computer. RAM stores data temporarily and is volatile, while ROM stores data permanently even when power is cut. There are different types of RAM, such as DRAM, SDRAM, and DDR RAM, and of ROM, such as PROM, EPROM, EEPROM, and flash memory. Cache memory is fast-access memory that stores recently used data to improve performance.
This document discusses several environmental problems affecting the community of Baja California Sur, Mexico. It identifies 5 main issues: 1) sewage spills and potholes in streets around Cabo San Lucas, 2) garbage accumulating on corners due to lack of garbage truck service in Cabo San Lucas and San Jose del Cabo, 3) contamination of Chileno Beach from diesel discharged from nearby hotel buildings, 4) contamination of beaches and seas from tourists, hotels, and ships in Cabo San Lucas and San Jose del Cabo, and 5) stagnation of rain water in streets leading to mosquito breeding and dengue cases. Potential solutions provided include improving drainage, organizing garbage collection, regulating discharges from buildings, increasing
This document is the abstract of a Master's dissertation on developing a physical model of a plucked acoustic guitar. The author created a real-time guitar synthesizer using the Karplus-Strong algorithm and Max/MSP. The model includes individual strings, a body resonator, and calibration to match a reference guitar. Evaluation showed the model can be improved by adding more parameters like the bridge and bending, and rewriting the code in open source Pure Data. The model provides a foundation for further developing virtual acoustic guitar synthesis.
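The Karplus-Strong algorithm at the core of the model is simple enough to sketch. The dissertation implements it in Max/MSP; the Python below is only a minimal illustration of the delay-line-plus-lowpass idea, with assumed parameter values.

```python
import random

def karplus_strong(freq, sr=44100, duration=1.0, decay=0.996):
    """Karplus-Strong plucked-string synthesis: a noise burst fed through
    a delay line with a two-point averaging (lowpass) filter."""
    n = int(sr / freq)                       # delay-line length sets pitch
    buf = [random.uniform(-1, 1) for _ in range(n)]
    out = []
    for _ in range(int(sr * duration)):
        first = buf.pop(0)
        avg = decay * 0.5 * (first + buf[0]) # lowpass + decay
        buf.append(avg)
        out.append(first)
    return out

samples = karplus_strong(440.0, duration=0.5)   # half a second of A4
print(len(samples))                              # -> 22050
```

A full guitar model layers one such string per course and feeds the sum through a body resonator filter, as the abstract describes.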
This document describes an FPGA-based graphics pipeline and three advanced 3D rendering effects implemented in VHDL as part of a diploma thesis. The graphics pipeline includes vertex processing, rasterization, shading, and texture mapping. The three effects implemented are Perlin noise mapping to create a ramp texture, a particle system, and displacement mapping using Perlin noise. The project aims to understand how a simple GPU works at a low level and implement graphics algorithms in a hardware description language for FPGA. Evaluation of the implemented graphics pipeline and effects is also discussed.
This document describes research on spectral X-ray computed tomography using energy sensitive pixel detectors. It introduces X-ray CT and reconstruction techniques. It then describes the Medipix chip, which can perform photon counting and energy discrimination on individual pixels. Experiments were conducted to characterize the energy response of Medipix detectors using particle beams and synchrotron radiation. The energy response was modeled based on charge sharing between pixels. Finally, the document proposes methods for spectral CT reconstruction that leverage the energy information from Medipix to distinguish and quantify different materials.
Robustness in Deep Learning - Single Image Denoising using Untrained Networks... - Daniel983829
This document is a thesis submitted by Esha Singh to the University of Minnesota for the degree of Master of Science in May 2021. The thesis explores single image denoising using untrained neural networks. It first provides background on deep learning, inverse problems, image denoising and neural networks. It then reviews existing image denoising algorithms including spatial/transform domain and neural network methods. The thesis also discusses recent work on deep image priors and rethinking single image denoising using over-parameterization and low-rank matrix recovery. Preliminary experiments on denoising images with salt and pepper noise are presented to demonstrate the proposed methodology.
Pulse Preamplifiers for CTA Camera Photodetectors - nachod40
This document is a master's thesis that examines preamplifier designs for the Cherenkov Telescope Array (CTA). It begins with an introduction to gamma ray astronomy and the photodetectors used in imaging atmospheric Cherenkov telescopes (IACTs). The thesis then analyzes the benefits of voltage and transimpedance amplification approaches. Prototypes of MMIC and transimpedance amplifiers are designed, simulated, implemented on printed circuit boards, and tested. Experimental results demonstrate the superiority of transimpedance amplifiers for meeting CTA's demanding front-end electronics specifications.
Implementation of a Localization System for Sensor Networks-berkley - Farhad Gholami
This dissertation discusses the implementation of a localization system for sensor networks. It addresses two main tasks: establishing relationships to reference points (e.g. distance measurements) and using those relationships and reference point positions to calculate sensor positions algorithmically.
The dissertation first presents various centralized and distributed localization algorithms from existing research. It then focuses on implementing a distributed, least-squares-based localization algorithm and designing an ultra-low power hardware architecture for it. Measurement errors due to fixed-point arithmetic are also analyzed.
The second part of the dissertation proposes, designs and prototypes an RF signal-based time-of-flight ranging system. The prototype achieves a measurement error within -0.5m to 2m at 100
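The least-squares localization step can be illustrated with standard linearized multilateration: subtracting the first range equation from the others turns the nonlinear problem into a linear system. This is a generic textbook sketch, not the dissertation's fixed-point hardware implementation.

```python
import numpy as np

def lateration(anchors, dists):
    """Linearized least-squares position estimate from distances to
    known reference points. Subtracting the first range equation from
    the i-th gives  2(x_i - x_1)·p = d_1^2 - d_i^2 + |x_i|^2 - |x_1|^2."""
    x1, d1 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - x1)
    b = (d1**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x1**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)  # exact ranges
print(np.round(lateration(anchors, dists), 3))      # -> [3. 4.]
```

With noisy ranges the same solve yields the least-squares estimate, which is why a fixed-point analysis of the arithmetic (as in the dissertation) matters for low-power hardware.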
This thesis explores accelerating isosurface rendering of volume data using GPU ray casting with an octree. It analyzes octree traversal types suitable for GPU implementation and develops a hybrid traversal combining stackless octree traversal with direct grid ray marching. The method was integrated into the WisS anthropology data viewer. The implementation achieves up to 3.5x speedup over the original ray marching, improving interactivity on large datasets. Challenges for GPU octree traversal are discussed along with directions for future work.
This document is a project report submitted by four students for their Bachelor of Engineering degree. It describes the development of a microcontroller-based interactive voice response system. The system uses a microcontroller and other ICs interfaced to a PC to allow telephone users to access information from a database by following voice prompts. The report includes details of the hardware and software design, component selection, circuit diagrams, programming code and testing procedures. It aims to provide a low-cost alternative to commercial IVR systems for small businesses.
From sound to grammar: theory, representations and a computational model - Marco Piccolino
This thesis contributes to the investigation of the sound-to-grammar mapping by developing a computational model in which complex acoustic patterns can be represented conveniently, and exploited for simulating the prediction of English prefixes by human listeners.
The model is rooted in the principles of rational analysis and Firthian prosodic analysis, and formulated in Bayesian terms. It is based on three core theoretical assumptions: first, that the goals to be achieved and the computations to be performed in speech recognition, as well as the representation and processing mechanisms recruited, crucially depend on the task a listener is facing, and on the environment in which the task occurs. Second, that whatever the task and the environment, the human speech recognition system behaves optimally with respect to them. Third, that internal representations of acoustic patterns are distinct from the linguistic categories associated with them.
The representational level exploits several tools and findings from the fields of machine learning and signal processing, and interprets them in the context of human speech recognition. Because of their suitability for the modelling task at hand, two tools are dealt with in particular: the relevance vector machine (Tipping, 2001), which is capable of simulating the formation of linguistic categories from complex acoustic spaces, and the auditory primal sketch (Todd, 1994), which is capable of extracting the multi-dimensional features of the acoustic signal that are connected to prominence and rhythm, and represent them in an integrated fashion. Model components based on these tools are designed, implemented and evaluated.
The implemented model, which accepts recordings of real speech as input, is compared in a simulation with the qualitative results of an eye-tracking experiment. The comparison provides useful insights about model behaviour, which are discussed.
Throughout the thesis, a clear distinction is drawn between the computational, representational and implementation devices adopted for model specification.
This document provides an overview and summary of a thesis on visualizing uncertainty in fiber tracking based on diffusion tensor imaging (DTI). The thesis addresses challenges with visualizing uncertainty throughout the DTI and fiber tracking pipeline, including image acquisition, diffusion modeling, fiber tracking, and visualization. It proposes and evaluates various techniques for visualizing different types of uncertainty, such as value uncertainty, location uncertainty, and parameter uncertainty. The visualization techniques are applied to fiber tracking results to aid in neurosurgical planning and other medical applications.
This document is a thesis submitted by Johann C. Rocholl to the University of Stuttgart describing a novel method for decoding blurry linear barcodes from camera images on mobile devices. The proposed algorithm locates the barcode, extracts a scan line, simulates the blurry barcode according to a mathematical model, and chooses digits that best approximate the camera input through iterative adjustment. A prototype was implemented to recognize UPC-A and EAN-13 barcodes on the Apple iPhone and MacBook. The decoder was tested on several hundred images and correctly recognized a high percentage of blurry barcodes, outperforming four existing decoders.
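The simulate-and-compare idea can be illustrated for a single digit: render each candidate bar pattern, blur it with the same model assumed for the camera, and keep the best least-squares match. The Gaussian blur model and its parameters below are assumptions for illustration; the thesis models the full scan line and adjusts all digits iteratively.

```python
import numpy as np

# UPC/EAN left-hand (odd parity) digit patterns, 7 modules each.
L_CODES = ["0001101", "0011001", "0010011", "0111101", "0100011",
           "0110001", "0101111", "0111011", "0110111", "0001011"]

def blur(signal, sigma=1.5):
    """Blur a module pattern with a small Gaussian kernel (toy stand-in
    for camera defocus)."""
    x = np.arange(-3, 4)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode="same")

def decode_digit(observed):
    """Pick the digit whose simulated blurry pattern best matches the
    observed scan-line segment (least-squares comparison)."""
    errs = [np.sum((blur(np.array([float(c) for c in p])) - observed)**2)
            for p in L_CODES]
    return int(np.argmin(errs))

# Simulate a blurry '7' and recover it.
observed = blur(np.array([float(c) for c in L_CODES[7]]))
print(decode_digit(observed))   # -> 7
```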
This thesis introduces an efficient radar transmitter enabled by supply modulation. It compares the theoretical behavior of Class-B and Class-A power amplifiers (PAs) under Gaussian envelope signals and validates this experimentally on a 10-GHz GaN MMIC PA. When driven by a Gaussian pulse with linear frequency modulation, the PA achieves 31% average efficiency over the pulse. To improve efficiency further, a resonant supply modulator is used, achieving 40% average PAE for the PA alone and reducing spectral emissions by 40 dB compared to a rectangular pulse with the same energy. The thesis also presents the design of an X-Band GaN Doherty MMIC PA for a variable power radar with over 40% PAE from 30-35
This document is a lab manual for EE380 (Control Lab) that describes the experimental setup using a dsPIC30F4012 microcontroller and peripherals to control a PMDC motor. It outlines 10 experiments covering topics like modeling and identification of the motor, implementing proportional and PID speed/position controllers, Ziegler-Nichols tuning, disturbance observation, and compensating plant nonlinearities. The manual provides goals, background, and questions for each experiment along with code listings and descriptions to help students complete the experiments.
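A discrete PID controller of the kind implemented in these experiments can be sketched as follows. The lab targets a dsPIC in C; this Python sketch only shows the control law, and the toy motor model and gain values are illustrative assumptions, not numbers from the manual.

```python
class PID:
    """Discrete PID controller in positional form, one call per sample.
    Gains could come from Ziegler-Nichols tuning as in the lab manual."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Regulate a toy first-order motor model  w' = (-w + u) / tau.
pid, w, tau, dt = PID(2.0, 20.0, 0.01, 0.001), 0.0, 0.05, 0.001
for _ in range(2000):                 # 2 s of simulated time
    u = pid.update(100.0, w)          # speed setpoint: 100 rad/s
    w += dt * (-w + u) / tau
print(round(w, 2))
```

On the real plant the same loop runs in the microcontroller's timer interrupt, with the integral clamped to avoid windup when the drive saturates.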
This master's thesis investigates methods for computing multiple sense embeddings per polysemous word. The author extends the word2vec model to build a sense assignment model that simultaneously selects word senses in sentences and adapts the sense embedding vectors in an unsupervised learning approach. The model is implemented in Spark to enable training on large corpora. Sense vectors are trained on a Wikipedia corpus of nearly 1 billion tokens and evaluated on word similarity tasks using the SCWS and WordSim-353 datasets. Hyperparameter tuning is performed to analyze the effect of vector size, learning rate, and other parameters on model performance and training time. Nearest neighbors and example sentences are also examined to analyze the quality of the computed sense embeddings.
This document summarizes a student project on predicting malicious activity using real-time video surveillance. The project applies techniques like super-resolution, face and object recognition using HOG features, and neural networks to enhance video quality, identify objects and faces, and semantically describe scenes to detect unusual activity. Algorithms were implemented in MATLAB and results were stored in a MongoDB database. Key techniques included super-resolution, PCA-based face recognition, HOG-based object detection, and neural networks like CNNs and RNNs for image captioning. The project aims to help detect criminal activity and track convicted individuals in public spaces.
Directional patch antenna array design for desktop wireless internet - Tung Huynh
This document describes the design of a directional patch antenna array for desktop wireless internet. It begins with an introduction to wireless interference issues and the motivation for a directional antenna. It then provides background on patch antenna design. The document describes the design of a single patch antenna and a 4-element patch antenna array, including the substrate, feed network, and testing procedures. Key results showed that both patch antennas had higher gain than a standard dipole antenna, and the array provided improved directivity over a single patch. However, the dipole performed better when obstructions prevented line-of-sight communication. The document concludes with a discussion of design difficulties and potential improvements.
This document describes the development of a magnetic component design environment. It includes the development of an automated core loss measurement system to characterize magnetic materials, a database to store measurement data, and a design software tool. The core loss measurement system allows for easy and automated measurement of core loss over a range of frequencies and flux densities. Measured data is stored in a database, which can be accessed by the design software to enable accurate prediction of core losses during the design process. Validation measurements on inductors showed errors between predicted and measured losses and temperatures were less than 10%, demonstrating the effectiveness of the design environment for improving magnetic component design.
DIPOLE SOURCE LOCALIZATION USING
ARTIFICIAL NEURAL NETWORKS FOR A
HIGH DEFINITION REALISTIC HEAD MODEL
BY
STEVEN BERARD
A dissertation submitted to the Graduate School
in partial fulfillment of the requirements
for the degree
Master of Science
Major Subject: Electrical Engineering
New Mexico State University
Las Cruces, New Mexico
December 2013
“Dipole Source Localization Using Artificial Neural Networks for a High Definition
Realistic Head Model,” a thesis prepared by Steven Berard in partial fulfillment
of the requirements for the degree, Master of Science, has been approved and
accepted by the following:
Linda Lacey
Dean of the Graduate School
Dr. Kwong Ng
Chair of the Examining Committee
Date
Committee in charge:
Dr. Kwong Ng, PhD, Chair
Dr. Nadipuram Prasad, PhD
ACKNOWLEDGMENTS
I would like to thank my advisor, Dr. Kwong Ng, for his encouragement,
advice, and support for the last two years, Dr. Prasad for making me believe
that anything is possible, my parents for always believing in me and helping me
succeed even when I would get in my own way, Dr. Ranade, Dr. De Leon, and the
NMSU College of Engineering for the opportunity to work as a TA while I finished
this, all my professors for helping prepare me for this moment, and my wonderful EE
161 students for always testing me and giving me something to brag about. I also
want to send out a special thank you to Dr. Gert Van Hoey who helped me find
my way when I was most lost.
VITA
January 5, 1985 Born in Lancaster, California
January 2008 - December 2009 B.A. in Finance, New Mexico State
University, Las Cruces, NM
August 2012 - December 2013 Graduate Student, New Mexico State
University, Las Cruces, NM
January 2012 - May 2013 Teaching Assistant, New Mexico State
University, Las Cruces, NM
PROFESSIONAL AND HONORARY SOCIETIES
IEEE Student Member
Eta Kappa Nu
PUBLICATIONS
None
ABSTRACT
DIPOLE SOURCE LOCALIZATION USING
ARTIFICIAL NEURAL NETWORKS FOR A
HIGH DEFINITION REALISTIC HEAD MODEL
BY
STEVEN BERARD
Master of Science
New Mexico State University
Las Cruces, New Mexico, 2013
Dr. Kwong Ng, PhD, Chair
It is desired to determine the source of electrical activity in the human brain
accurately and in real time. Artificial neural networks have been shown to do
this in brain-like models; however, there are only a limited number of studies on
this subject, and the models used have been of low resolution or simplified
geometry. This paper presents findings from testing the ability of several
different neural network configurations to localize sources within a high-fidelity
realistic head model with a resolution of 1 mm × 1 mm × 1 mm, both with and
without noise.
LIST OF TABLES

1 Conductivities for Realistic Head Model with Homogeneous Brain Region . . . 18
2 Results for 2D Head Model Network 32-30-30-4 . . . 21
3 Results for Realistic Head Model with Homogeneous Brain Region (No Noise) . . . 25
4 Results for Realistic Head Model with Homogeneous Brain Region (30 dB SNR) . . . 25
5 Results for Realistic Head Model with Homogeneous Brain Region (20 dB SNR) . . . 27
6 Results for Realistic Head Model with Homogeneous Brain Region (10 dB SNR) . . . 27
7 Results for Realistic Head Model - Complex 32 Sensors (No Noise) . . . 30
8 Results for Realistic Head Model - Complex 32 Sensors (30 dB SNR) . . . 30
9 Results for Realistic Head Model - Complex 32 Sensors (20 dB SNR) . . . 32
10 Results for Realistic Head Model - Complex 32 Sensors (10 dB SNR) . . . 32
11 Results for Realistic Head Model - Complex 32 Sensors (No Noise) Random Training Pattern . . . 36
12 Results for Realistic Head Model - Complex 32 Sensors (30 dB SNR) Random Training Pattern . . . 37
13 Results for Realistic Head Model - Complex 32 Sensors (20 dB SNR) Random Training Pattern . . . 38
14 Results for Realistic Head Model - Complex 32 Sensors (10 dB SNR) Random Training Pattern . . . 39
15 Results for Realistic Head Model - Complex 64 Sensors (No Noise) . . . 41
16 Results for Realistic Head Model - Complex 64 Sensors (30 dB SNR) . . . 42
17 Results for Realistic Head Model - Complex 64 Sensors (20 dB SNR) . . . 42
18 Results for Realistic Head Model - Complex 64 Sensors (10 dB SNR) . . . 44
19 Results for Realistic Head Model - Complex 64 Sensors (No Noise) Random Training Pattern . . . 48
20 Results for Realistic Head Model - Complex 64 Sensors (30 dB SNR) Random Training Pattern . . . 49
21 Results for Realistic Head Model - Complex 64 Sensors (20 dB SNR) Random Training Pattern . . . 50
22 Results for Realistic Head Model - Complex 64 Sensors (10 dB SNR) Random Training Pattern . . . 51
23 Results for Realistic Head Model - Complex 128 Sensors (No Noise) . . . 53
24 Results for Realistic Head Model - Complex 128 Sensors (30 dB SNR) . . . 53
25 Results for Realistic Head Model - Complex 128 Sensors (20 dB SNR) . . . 54
26 Results for Realistic Head Model - Complex 128 Sensors (10 dB SNR) . . . 54
27 Results for Realistic Head Model - Complex 128 Sensors (No Noise) Random Training Pattern . . . 59
28 Results for Realistic Head Model - Complex 128 Sensors (30 dB SNR) Random Training Pattern . . . 60
29 Results for Realistic Head Model - Complex 128 Sensors (20 dB SNR) Random Training Pattern . . . 61
30 Results for Realistic Head Model - Complex 128 Sensors (10 dB SNR) Random Training Pattern . . . 62
LIST OF FIGURES

1 Example of a Single Neuron . . . 5
2 Example of a Single Layer of Neurons . . . 6
3 Example of a Three-Layer Neural Network . . . 7
4 Example of a Single Dipole in a 2D Airhead Model . . . 13
5 Sensor and Training Dipole Locations for 2D Homogeneous Head Model . . . 14
6 FMRI Image Used for Realistic Head Model (ITK-SNAP [13]) . . . 16
7 Sensor Placement for 32 Electrodes . . . 17
8 Sensor Placement for 64 Electrodes . . . 19
9 Sensor Placement for 128 Electrodes . . . 19
10 Location Error With and Without Added Noise for Airhead 32-30-30-4 . . . 22
11 Location Error Distribution With and Without Added Noise for 2D Head Model with Network Configuration: 32-30-30-6 . . . 23
12 Location Error With and Without Added Noise for Realistic Head Model with Homogeneous Brain Tissue with Network Configuration: 32-30-30-6. The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 26
13 Location Error Distribution With and Without Added Noise for Realistic Head Model with Homogeneous Brain Tissue with Network Configuration: 32-30-30-6 . . . 28
14 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Grid Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 31
15 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Grid Pattern Training) . . . 33
16 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Random Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 35
17 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Random Pattern Training) . . . 40
18 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Grid Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 43
19 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Grid Pattern Training) . . . 45
20 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Random Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 47
21 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Random Pattern Training) . . . 52
22 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 128-45-45-6 (Grid Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 55
23 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 128-45-45-6 (Grid Pattern Training) . . . 56
24 Location Error With and Without Added Noise for Realistic Head Model with Network Configuration: 128-30-30-6 (Random Pattern Training). The figures on the right restrict the voxels tested to a 50 mm radius from the centroid. . . . 58
25 Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 128-30-30-6 (Random Pattern Training) . . . 63
1 INTRODUCTION
The brain is widely recognized as the main controller of the human body. It
is also extremely hard to study without causing harm to the subject. Electroencephalography
(EEG) is a promising method for studying the way the brain works
using only passive means of observation. Unfortunately, the problem remains of
interpreting the data we receive from EEG readings into accurate information
about the underlying activity.
Source localization is a difficult problem because it is ill-posed: given a set
of electrode potentials, there are infinitely many dipole strengths and locations
that could have produced them. Many solutions have been proposed, including
iterative techniques, beamforming, and artificial neural networks. Iterative
techniques require immense amounts of computation to arrive at their solutions
and are not very robust to noise [2]. Beamformers have been shown to localize
well with and without the presence of noise [4]; however, they are still rather
computationally intensive and are difficult or impossible to run in real time.
Artificial neural networks (ANNs) could provide a solution that is robust to
noise [1][2][10][14][15] and can make accurate location predictions fast enough to
work in real time [10].
The ability to accurately detect the location of brain activity in real time
could lead to breakthroughs in psychology and thought-activated devices. At the
time of writing, I have found only one published article that tests artificial neural
networks with a realistic head model [10], and the model used there was not as
detailed as the models we can create today. While ANNs have been shown to be
accurate with simplistic head models, can an ANN show similar results when the
head model is more complex, more closely resembling our own heads?
2 THEORY
This paper requires knowledge of two subjects: the method by which the
forward solution was obtained, the Finite Difference Neuroelectromagnetic Modeling
Software (FNS), and the method by which the inverse solution was obtained,
artificial neural networks.
2.1 FNS: Finite Difference Neuroelectromagnetic Modeling Software
The Finite Difference Neuroelectromagnetic Modeling Software (FNS) written
by Hung Dang [3] is a realistic head model EEG forward solution package. It
uses a finite difference formulation for a general inhomogeneous anisotropic body
to obtain the system matrix equation, which is then solved using the conjugate
gradient algorithm. Reciprocity is then utilized to limit the number of forward
solutions required to a manageable level.
This software solves the Poisson equation that governs the electric
potential $\phi$:

$$\nabla \cdot (\sigma \nabla \phi) = \nabla \cdot \mathbf{J}^i \qquad (1)$$

where $\sigma$ is the conductivity and $\mathbf{J}^i$ is the impressed current
density. It accomplishes this using the finite difference approximation for the
Laplacian:
$$\left.\nabla \cdot (\sigma \nabla \phi)\right|_{\text{node } 0} \approx \sum_{i=1}^{6} A_i \phi_i - \left(\sum_{i=1}^{6} A_i\right)\phi_0 \qquad (2)$$

where $\phi_i$ is the potential at node $i$, and the coefficients $A_i$ depend on the
conductivities of the elements and the node spacings.
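For a node in a homogeneous region with uniform node spacing, all six coefficients reduce to $A_i = \sigma/h^2$. The following is a minimal sketch of that special case; the function name and test values are illustrative and are not taken from FNS:

```python
# Sketch of the 7-point finite-difference stencil of Eq. (2) for a
# homogeneous medium, where every coefficient reduces to A_i = sigma / h**2.

def fd_laplacian_term(phi0, neighbors, sigma=0.33, h=1e-3):
    """Approximate div(sigma * grad(phi)) at a node from its 6 neighbors."""
    A = sigma / h**2                       # uniform coefficient for all 6 links
    return A * sum(neighbors) - 6 * A * phi0

# For a potential that varies linearly across the stencil the term vanishes,
# as the true Laplacian does for a linear field:
phi0 = 0.5
neighbors = [0.4, 0.6, 0.5, 0.5, 0.5, 0.5]   # +/-x, +/-y, +/-z neighbors
residual = fd_laplacian_term(phi0, neighbors)  # ~0
```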
2.2 Artificial Neural Networks
Biological nervous systems are capable of learning and replicating extremely
complex tasks. Artificial neural networks are mathematical models designed to
imitate these biological systems. They are attractive because they can establish
input/output relationships from training data alone, with no prior knowledge of
the system.
An artificial neural network is nothing more than an interconnected collection
of neurons. Each neuron consists of one or more weighted inputs and a bias
that are summed together. This value is then passed through a transfer function.
The transfer function is chosen to satisfy some specification of the problem and
may be linear or nonlinear. Neurons are then organized into layers. A typical
layer contains several neurons that all share the same inputs; however, the weights
for these inputs are usually different for each neuron and input. The output of
each layer can be written:

$$\mathbf{a}^{m+1} = f^{m+1}(\mathbf{W}^{m+1}\mathbf{a}^{m} + \mathbf{b}^{m+1}) \quad \text{for } m = 0, 1, \ldots, M-1$$
The outputs of a layer can be fed into another layer of neurons as many times
as desired. The layer whose output is the network output is called the output
layer and every other layer is called a hidden layer. Some texts use the convention
that the input layer should be counted as a layer when describing the size of a
network, however in this paper the input layer will not be counted in the number
of layers. For example, a neural network with only one hidden layer will be said
to be a two layer network: one for the hidden layer and one for the output layer.
All the networks used in this paper are three layer networks. An example of a
three layer artificial neural network can be seen in Figure 3.
Figure 1: Example of a Single Neuron
Example 2.1. An example of a single neuron can be seen in Figure 1. The 1×R
matrix P contains all the inputs. The R × 1 matrix W contains all the weights.
The variable b is the bias. The scalar a is the output.
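As a concrete sketch, the single neuron of Figure 1 can be written in a few lines. The tanh transfer function and the input values here are illustrative choices, not specifics from the figure:

```python
import math

def neuron(p, w, b, f=math.tanh):
    """Single neuron: weighted sum of the inputs plus a bias, passed through f."""
    n = sum(wi * pi for wi, pi in zip(w, p)) + b
    return f(n)

# A two-input neuron; here the weighted sum is 0.5*1.0 + 0.25*(-2.0) + 0.1 = 0.1
a = neuron(p=[1.0, -2.0], w=[0.5, 0.25], b=0.1)
```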
Figure 2: Example of a Single Layer of Neurons
Example 2.2. An example of a single layer of neurons can be seen in Figure 2.
The input matrix P remains a 1 × R matrix, however the weight matrix becomes
an R × S matrix where S is the number of neurons in the layer. This is because
even though each neuron receives the same inputs, the weight applied to each
input is different for each neuron/input combination. The 1 × S matrix, A, is the
output.
Example 2.3. An example of a three-layer multi-input/multi-output artificial
neural network can be seen in Figure 3. As you can see, the outputs from one
layer become the inputs for the next layer. The final equation that ties the input
Figure 3: Example of a Three-Layer Neural Network
matrix, $\mathbf{P}$, to the output matrix, $\mathbf{A}^3$, is

$$\mathbf{A}^3 = f^3(\mathbf{W}^3 f^2(\mathbf{W}^2 f^1(\mathbf{W}^1 \mathbf{P} + \mathbf{b}^1) + \mathbf{b}^2) + \mathbf{b}^3)$$
2.2.1 Backpropagation
The best feature of neural networks is their ability to replicate complex systems
knowing only input-output combinations. There are two basic ways for a
network to do this: supervised and unsupervised learning. This paper focuses on
supervised learning. The most common form of supervised learning is the
backpropagation method [1]. The backpropagation method uses the chain rule of
calculus to propagate the mean square error of an input-output pair back
through the network. Weights and biases are then updated such that the mean
square error is reduced.

Definition 2.1. The basic algorithm for the backpropagation method is as follows:
$$\mathbf{A}^0 = \mathbf{P} \qquad (3)$$

$$\mathbf{A}^{m+1} = f^{m+1}(\mathbf{W}^{m+1}\mathbf{A}^m + \mathbf{b}^{m+1}) \quad \text{for } m = 0, 1, \ldots, M-1 \qquad (4)$$

$$\mathbf{O} = \mathbf{A}^M \qquad (5)$$

$$\mathbf{s}^M = -2\,\dot{\mathbf{F}}^M(\mathbf{n}^M)(\mathbf{T} - \mathbf{O}) \qquad (6)$$

$$\mathbf{s}^m = \dot{\mathbf{F}}^m(\mathbf{n}^m)(\mathbf{W}^{m+1})^T \mathbf{s}^{m+1} \quad \text{for } m = M-1, \ldots, 2, 1 \qquad (7)$$

$$\mathbf{W}^m(k+1) = \mathbf{W}^m(k) - \alpha\,\mathbf{s}^m(\mathbf{A}^{m-1})^T \qquad (8)$$

$$\mathbf{b}^m(k+1) = \mathbf{b}^m(k) - \alpha\,\mathbf{s}^m \qquad (9)$$

where $\mathbf{T}$ is the target vector, $\mathbf{O}$ is the output vector, $\mathbf{P}$ is the input vector, $\mathbf{W}$ is
the weight matrix, $\mathbf{b}$ is the bias matrix, and $\alpha$ is the learning rate. Generally the
learning rate is set to a very low number (e.g., $\alpha = 0.1$ or $0.01$).
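A minimal sketch of Eqs. (3)-(9) for a two-layer network (tanh hidden layer, linear output layer) follows. The network size, sample values, and learning rate are illustrative, not the configurations used in this paper:

```python
import math, random

random.seed(0)

def tanh_vec(x):  return [math.tanh(v) for v in x]
def dtanh(a):     return [1 - v * v for v in a]   # derivative from the output

def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def backprop_step(W1, b1, W2, b2, p, t, alpha=0.01):
    """One update of Eqs. (3)-(9) for a two-layer tanh/linear network."""
    # Forward pass, Eqs. (3)-(5)
    a1 = tanh_vec([n + b for n, b in zip(matvec(W1, p), b1)])
    a2 = [n + b for n, b in zip(matvec(W2, a1), b2)]   # linear output layer
    # Output sensitivity, Eq. (6); linear layer, so F'(n) = 1
    s2 = [-2 * (ti - oi) for ti, oi in zip(t, a2)]
    # Hidden sensitivity, Eq. (7)
    s1 = [d * sum(W2[k][i] * s2[k] for k in range(len(s2)))
          for i, d in enumerate(dtanh(a1))]
    # Weight and bias updates, Eqs. (8)-(9)
    W2 = [[w - alpha * s2[k] * a1[i] for i, w in enumerate(row)]
          for k, row in enumerate(W2)]
    b2 = [b - alpha * s for b, s in zip(b2, s2)]
    W1 = [[w - alpha * s1[k] * p[i] for i, w in enumerate(row)]
          for k, row in enumerate(W1)]
    b1 = [b - alpha * s for b, s in zip(b1, s1)]
    return W1, b1, W2, b2, sum((ti - oi) ** 2 for ti, oi in zip(t, a2))

# Tiny 2-3-1 network; repeated steps drive the squared error down.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0] * 3
W2 = [[random.uniform(-1, 1) for _ in range(3)]]
b2 = [0.0]
errs = []
for _ in range(200):
    W1, b1, W2, b2, e = backprop_step(W1, b1, W2, b2, p=[0.5, -0.3], t=[0.7])
    errs.append(e)
```

With a small learning rate, each step reduces the squared error on the presented pattern, which is all Eqs. (8)-(9) guarantee.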
2.2.2 Levenberg-Marquardt Backpropagation
All the networks in this paper were trained using the Levenberg-Marquardt
backpropagation algorithm. “The Levenberg-Marquardt algorithm is a variation
of Newton’s method that was designed for minimizing functions that are sums of
squares of other nonlinear functions” [5]. It is a batch learning algorithm that
can adjust its learning rate in order to find the best weights and biases in the
fewest number of iterations. The main problem with this method is its extreme
memory requirement for larger networks. For example, Matlab required around
40 gigabytes of virtual memory during the training of each of the 128-30-30-6
networks discussed in this paper.
Definition 2.2. The Levenberg-Marquardt backpropagation algorithm is as follows:

1. For Q input-output pairs, run all inputs through the network to obtain the
errors, $\mathbf{E}_q = \mathbf{T}_q - \mathbf{A}^M_q$. Then determine the sum of squared errors over all
inputs, $F(\mathbf{x})$:

$$F(\mathbf{x}) = \sum_{q=1}^{Q} (\mathbf{T}_q - \mathbf{A}_q)^T(\mathbf{T}_q - \mathbf{A}_q) \qquad (10)$$
where:

$$\mathbf{v} = \begin{bmatrix} v_1 & v_2 & \cdots & v_N \end{bmatrix}^T = \begin{bmatrix} e_{1,1} & e_{2,1} & \cdots & e_{S^M\!,1} & e_{1,2} & \cdots & e_{S^M\!,Q} \end{bmatrix}^T \qquad (18)$$

$$\mathbf{x} = \begin{bmatrix} x_1 & x_2 & \cdots & x_N \end{bmatrix}^T = \begin{bmatrix} w^1_{1,1} & w^1_{1,2} & \cdots & w^1_{S^1\!,R} & b^1_1 & \cdots & b^1_{S^1} & w^2_{1,1} & \cdots & b^M_{S^M} \end{bmatrix}^T \qquad (19)$$
3. Solve:

$$\Delta \mathbf{x}_k = -\left[\mathbf{J}^T(\mathbf{x}_k)\mathbf{J}(\mathbf{x}_k) + \mu_k \mathbf{I}\right]^{-1}\mathbf{J}^T(\mathbf{x}_k)\,\mathbf{v}(\mathbf{x}_k) \qquad (20)$$

4. Compute $F(\mathbf{x})$, Eq. (10), using $\mathbf{x}_k + \Delta \mathbf{x}_k$. If the result is less than the
previous $F(\mathbf{x})$, divide $\mu$ by $\vartheta$, let $\mathbf{x}_{k+1} = \mathbf{x}_k + \Delta \mathbf{x}_k$, and go back to step 1. If
not, multiply $\mu$ by $\vartheta$ and go back to step 3. The variable $\vartheta$ must be greater
than 1 (e.g., $\vartheta = 10$).
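In the one-parameter case the matrix inverse in Eq. (20) reduces to a division, which makes the accept/reject logic of steps 3 and 4 easy to see. The sketch below fits a single slope parameter by least squares; the data values and the small-step convergence guard are illustrative additions of mine:

```python
def lm_fit_slope(xs, ts, w=0.0, mu=0.01, theta=10.0, iters=20):
    """Scalar Levenberg-Marquardt: fit t ~ w*x by minimizing the sum of squares."""
    def sse(wv):
        return sum((t - wv * x) ** 2 for x, t in zip(xs, ts))
    for _ in range(iters):
        v = [t - w * x for x, t in zip(xs, ts)]   # error vector, as in Eq. (18)
        J = [-x for x in xs]                      # Jacobian of the errors w.r.t. w
        while True:
            # Eq. (20) in one dimension: the inverse becomes a division
            dw = -sum(j * e for j, e in zip(J, v)) / (sum(j * j for j in J) + mu)
            if abs(dw) < 1e-12:                   # step negligible: converged
                return w
            if sse(w + dw) < sse(w):
                mu /= theta                       # accept the step, shrink mu
                w += dw
                break
            mu *= theta                           # reject the step, grow mu, retry
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ts = [2.1, 3.9, 6.0, 8.2]      # roughly t = 2*x
w = lm_fit_slope(xs, ts)       # converges to the least-squares slope
```

When a step fails to reduce the error, growing mu pushes the update toward a short steepest-descent step; when steps succeed, shrinking mu pushes it toward the fast Gauss-Newton step.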
3 APPROACH

The first step in testing an artificial neural network on a complex realistic
head model was to test it on a simplistic homogeneous model and compare the
results to published findings. The next step was to train and test an ANN on a
realistic head model with less complex conductivities. Finally, I trained and tested
several ANNs on a complex realistic head model. Every network is trained to
output location coordinates and a moment vector. The moment vector is
essentially the direction of the dipole. Both the location and moment vector errors
are determined the same way in this paper:

$$\text{Error} = \sqrt{(x - x')^2 + (y - y')^2 + (z - z')^2} \qquad (21)$$

where $(x, y, z)$ and $(x', y', z')$ are the network-estimated values and the actual
values, respectively. For the 2D head model, the $z$ and $z'$ terms are equal to zero.
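Eq. (21) is just the Euclidean distance between the estimated and actual vectors; a direct transcription (the sample coordinates are illustrative):

```python
import math

def localization_error(est, actual):
    """Eq. (21): Euclidean distance between estimated and actual vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(est, actual)))

# 2D head model case: the z components are simply zero
err = localization_error((90.0, 100.0, 0.0), (93.0, 104.0, 0.0))  # 5.0 mm
```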
3.1 2D Head Model
For this step a simple homogeneous circular two-dimensional model needed
to be defined. In order to make it similar to an actual human head, two regions
are required: the brain area and the scalp area. The brain area would be the area
where dipoles could be present. The scalp area would be where the sensors could
pick up the potentials created by the dipoles. The brain area was determined to
have a radius of 6.7 cm. The scalp area was determined to have a radius of 8 cm.
The resolution was determined to be 1 mm × 1 mm. If we consider the entirety
of both circles to be filled with only air, the equation for the potential
at a given sensor point due to a dipole is:
$$V = \frac{s\,(\hat{\mathbf{d}} \cdot (\bar{R} - \bar{R}'))}{|\bar{R} - \bar{R}'|^3}, \quad \text{where } s = \frac{qd}{4\pi\epsilon_0} \text{ or } \frac{Id}{4\pi\sigma} \qquad (22)$$
Example 3.1. A visual representation of this “airhead” can be seen in Figure 4.
In this case the dipole has been placed at X = 90 and Y = 100. The dipole is X
directed. This graphic shows the potential from the dipole at every point in the
brain circle and on the outer scalp circle. The potentials on the scalp have been
multiplied by a factor of 10 in order to make the colors distinguishable.
Figure 4: Example of a Single Dipole in a 2D Airhead Model
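Eq. (22) for the current-dipole case can be transcribed directly. The default parameter values below (unit current, 1 mm dipole separation, grey-matter conductivity) are illustrative choices of mine, and keeping the units consistent is left to the caller:

```python
import math

def dipole_potential(r_obs, r_dip, d_hat, I=1.0, d=1e-3, sigma=0.33):
    """Eq. (22): potential of a current dipole in an unbounded homogeneous medium."""
    s = I * d / (4 * math.pi * sigma)
    diff = [a - b for a, b in zip(r_obs, r_dip)]
    dist = math.sqrt(sum(c * c for c in diff))
    return s * sum(dh * c for dh, c in zip(d_hat, diff)) / dist ** 3

# An X-directed dipole at (90, 100), observed on the +x side, as in Figure 4
v = dipole_potential((110.0, 100.0), (90.0, 100.0), (1.0, 0.0))
```

The dot product in the numerator gives the expected dipole pattern: positive on the +x side, negative on the -x side, and zero along the perpendicular through the dipole.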
Thirty-two scalp nodes were chosen semi-randomly to be the sensor locations.
Training points were determined by choosing dipole locations in a grid
format with a resolution of 5 mm × 5 mm. All training locations and sensor
locations can be seen in Figure 5. This resulted in 567 training locations. Sensor
values were obtained for each of these locations in all four of the cardinal
directions, +X, −X, +Y, and −Y, resulting in 2,268 total training input-output pairs.
Each of these input-output pairs was presented to a 32-30-30-4 network for training.
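The grid construction can be sketched as follows. The exact number of locations depends on how the grid is aligned within the circle (the thesis reports 567; a grid centered on the origin gives a slightly different count), so this is illustrative only:

```python
# Sketch of the 5 mm training grid inside the 6.7 cm brain circle (Section 3.1).
# Grid alignment is an assumption; counts shift slightly with the offset chosen.

def grid_locations(radius=67.0, step=5.0):
    """All (x, y) grid points with the given spacing inside the brain circle."""
    n = int(radius // step)
    return [(x * step, y * step)
            for x in range(-n, n + 1)
            for y in range(-n, n + 1)
            if (x * step) ** 2 + (y * step) ** 2 <= radius ** 2]

locations = grid_locations()
directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # +X, -X, +Y, -Y
training_pairs = [(loc, d) for loc in locations for d in directions]
```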
Figure 5: Sensor and Training Dipole Locations for 2D Homogeneous Head Model
Once trained, our network was subjected to multiple tests. Our ultimate
goal was to test accuracy, so the first test placed a +X-directed dipole at each
possible location and measured how accurately the network could determine its
actual location and direction. Then noise was added such that SNR = 10 dB,
20 dB, and 30 dB. The tests were then conducted with 10,000 dipoles with
random locations and directions. This case is used to characterize the general
accuracy of the network. In all cases each dipole was required to have the
same magnitude.
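Adding noise at a prescribed SNR can be sketched as follows. The signal here is a stand-in for the sensor potentials, and the white-Gaussian-noise model is my assumption:

```python
import math, random

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR in dB."""
    rng = rng or random.Random(1)               # fixed seed for repeatability
    p_signal = sum(v * v for v in signal) / len(signal)
    p_noise = p_signal / (10 ** (snr_db / 10))  # dB: SNR = 10*log10(Ps/Pn)
    return [v + rng.gauss(0.0, math.sqrt(p_noise)) for v in signal]

clean = [math.sin(0.2 * k) for k in range(1000)]   # stand-in sensor values
noisy = add_noise(clean, snr_db=20)
```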
3.2 Realistic Head Model - Homogeneous Brain Region

In order to create a truly realistic head model we must start with an FMRI
image such as the one seen in Figure 6. The FMRI image used had a resolution of
1 mm × 1 mm × 1 mm. The image was segmented using the program FSL [6].
The segmented image was then fed into FNS [3] to obtain the reciprocity data
for all possible dipole locations at the chosen sensor locations. Sensor locations
were chosen according to the International 10-20 system for 32 electrodes. The
placement of these sensors can be seen in Figure 7. In order to make the brain
area homogeneous, the conductivity of the white matter was changed to that of
grey matter. The conductivities can be seen in Table 1.

Once the reciprocity data had been obtained, training dipole locations and
directions were chosen. Training locations were chosen in a grid format with a
resolution of 5 mm × 5 mm × 5 mm. Training directions were chosen as +X,
−X, +Y, −Y, +Z, −Z, and 4 other random directions. Because dipoles could
only occur in grey matter, this yielded 100,340 different input-output pairs. These
training pairs were then presented to the networks for training.

Once the networks were trained, the sensor data from 10,000 dipoles with
random locations and directions were presented to the network. The average
location and direction errors were recorded.

Figure 6: FMRI Image Used for Realistic Head Model (ITK-SNAP [13])

Next, every grey matter node where
Z = 178 was used as a dipole location with direction +Z. Layer Z = 178 was
chosen because it is a thick area of the brain near the center of mass. The average
location and direction errors were recorded. Noise was then introduced such that
SNR = 10 dB, 20 dB, and 30 dB. The same tests were performed again for any
voxels within 55 mm of the centroid of the layer. This was done because it has
been noted that neural networks tend to have larger errors when localizing sources
near the boundary of the training area and to have better average accuracy
near the centroid of the training area [10].
Figure 7: Sensor Placement for 32 Electrodes
3.3 Realistic Head Model - Complex
The training and testing process for this model was initially conducted in
almost exactly the same way as for the previous model, with one major difference:
the white matter's conductivity was set to σ = 0.14 S/m. This is important because
the brain contains more than just grey matter in its center. This other tissue has a
different conductivity and as such distorts the dipole signal as it travels to the
sensors on the scalp. This could cause a neural network to be less accurate, and
it needed to be tested separately. It is also the closest model to the physical brain
presented in this paper. In addition to the 32-sensor arrangement used in the
homogeneous brain model, 64- and 128-sensor arrangements were used for this
model; they can be seen in Figures 8 and 9. The locations of the sensors were
again chosen according to the International 10-20 system.
The same sensor configurations were used to train several other networks
using random dipole locations and directions. Ten thousand random gray matter
locations were chosen. Sensor data were obtained for 5 random directions at each
of the 10,000 training locations, yielding 50,000 training pairs. This was done to
simulate possible real-world experimentation.

Table 1: Conductivities for Realistic Head Model with Homogeneous Brain Region

Tissue Type            σ (S/m)
Scalp                  0.44
Skull                  0.018
Cerebro-Spinal Fluid   1.79
Gray Matter            0.33
White Matter           0.33
Muscle                 0.11
Figure 8: Sensor Placement for 64 Electrodes
Figure 9: Sensor Placement for 128 Electrodes
4 RESULTS

This section details the results obtained from the tests described in the Approach
section.
4.1 2D Head Model
Figure 10 visually shows the results from a 32-30-30-4 artificial neural network
trained to detect the location of a single dipole present in a circular airhead as
described in the Approach section. As you can see from the two figures with no
added noise, the average error is less than a millimeter, or one voxel in this case.
This means that the network is exactly right in most cases when it comes to
predicting location. As we add noise to the signals received by the sensors, we
can see the accuracy drop off, as we would expect. It is interesting to note that
in every case the network is more error prone as the dipole is moved toward the
edge of the training region, away from the center. This is shown by the left images
containing all the voxels from our airhead and the right images containing only
those voxels within 55 mm of the center of the airhead. This is a normal trait of
artificial neural networks. It is also interesting to note that the accuracy is not
uniform across the no-noise cases. This is due to the random starting weights and
biases of each network.
The average location and moment errors from 10,000 random locations and
directions can be seen in Table 2. The location error distribution for the same
network can be seen in Figure 11.
Table 2: Results for 2D Head Model Network 32-30-30-4
SNR (dB) Avg. Location Error (mm) Avg. Moment Error (mm)
∞ 13.2539 0.229913
30 13.3412 0.230197
20 13.9437 0.232
10 17.8625 0.246121
(a) No Noise (b) No Noise, 55 mm Radius
(c) 30 dB SNR (d) 30 dB SNR, 55 mm Radius
(e) 20 dB SNR (f) 20 dB SNR, 55 mm Radius
(g) 10 dB SNR (h) 10 dB SNR, 55 mm Radius

Figure 10: Location Error With and Without Added Noise for Airhead 32-30-30-4
(a) No Noise (b) 30 dB SNR
(c) 20 dB SNR (d) 10 dB SNR

Figure 11: Location Error Distribution With and Without Added Noise for 2D
Head Model with Network Configuration: 32-30-30-6
4.2 Realistic Head Model - Homogeneous Brain Region

Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the
sensors was equal to 30, 20, and 10 dB. Figure 12 shows the results for a network
of configuration 32-45-45-6. As you can see, the accuracy increases if we ignore
the outermost voxels and focus on the center area of the brain.

While all the tissue in this brain model is homogeneous, I only placed
training dipoles in the gray matter area of the brain. Just like the outermost
areas of the brain, the areas close to white matter tissue are farther from training
dipoles and tend to have lower accuracy. As with the 2D head model, we see that
the average accuracy drops as the noise increases.
Tables 3 through 6 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
13 shows the error distribution for the same test for the network configuration
32-30-30-6.
Table 3: Results for Realistic Head Model with Homogeneous Brain Region (No Noise)

Network Configuration  Avg. Distance Error (mm)  Avg. Moment Error (mm)
NN 32-10-10-6 (1)      21.8264                   0.755349
NN 32-20-20-6 (1)      15.7973                   0.644345
NN 32-30-30-6 (1)      9.88761                   0.538274
NN 32-45-45-6 (1)      7.67768                   0.534815

Table 4: Results for Realistic Head Model with Homogeneous Brain Region (30 dB SNR)

Network Configuration  Avg. Distance Error (mm)  Avg. Moment Error (mm)
NN 32-10-10-6 (1)      22.0736                   0.755947
NN 32-20-20-6 (1)      16.1947                   0.645
NN 32-30-30-6 (1)      10.5581                   0.538412
NN 32-45-45-6 (1)      8.56947                   0.535629
Figure 12: Location Error With and Without Added Noise for Realistic Head
Model with Homogeneous Brain Tissue with Network Configuration: 32-30-30-6.
The figures on the right restrict the voxels tested to a 50 mm radius from the
centroid.
Table 5: Results for Realistic Head Model with Homogeneous Brain Region (20 dB SNR)

Network Configuration  Avg. Distance Error (mm)  Avg. Moment Error (mm)
NN 32-10-10-6 (1)      23.773                    0.763351
NN 32-20-20-6 (1)      17.8696                   0.648145
NN 32-30-30-6 (1)      14.3609                   0.542563
NN 32-45-45-6 (1)      13.2894                   0.540446

Table 6: Results for Realistic Head Model with Homogeneous Brain Region (10 dB SNR)

Network Configuration  Avg. Distance Error (mm)  Avg. Moment Error (mm)
NN 32-10-10-6 (1)      34.8025                   0.808948
NN 32-20-20-6 (1)      27.3437                   0.681496
NN 32-30-30-6 (1)      30.3933                   0.572832
NN 32-45-45-6 (1)      30.619                    0.577639
(a) No Noise (b) 30 dB SNR
(c) 20 dB SNR (d) 10 dB SNR

Figure 13: Location Error Distribution With and Without Added Noise for
Realistic Head Model with Homogeneous Brain Tissue with Network Configuration:
32-30-30-6
4.3 Realistic Head Model - Complex
4.3.1 32 Sensor Configuration - Grid Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the
dipole directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 14 shows the results for a network of configuration 32-30-30-6 trained with a grid pattern of dipole locations. Considering that the brain is only slightly more than 15 cm across at its widest point, these results show that the network becomes extremely inaccurate when noise is added.
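The noise-injection step described above can be sketched as follows. This is an illustrative assumption, not the thesis code: the noise model is taken to be zero-mean white Gaussian noise scaled so the sensor array as a whole reaches the target SNR in dB:

```python
import numpy as np

def add_noise_at_snr(sensor_data, snr_db, seed=None):
    """Add white Gaussian noise scaled so the array-wide SNR equals snr_db.

    Assumes SNR = 10*log10(P_signal / P_noise) with power measured as the
    mean square over all sensor samples (an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.square(sensor_data))
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(p_noise), size=np.shape(sensor_data))
    return sensor_data + noise
```

Lower SNR values (20 dB, 10 dB) simply increase the noise variance relative to the fixed signal power.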
Tables 7 through 10 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
15 shows the error distribution for the same test for the network configuration
32-30-30-6. It is interesting to see that the network configuration 32-10-10-6 does not get much worse from 30 dB SNR to 10 dB SNR. This is most likely because the network is relatively simple compared to the model: it is so generalized that, when presented with data significantly different from what it was trained on, it defaults to a location within the brain region, albeit nowhere near the actual dipole location. The other networks, by contrast, are so complex that when presented with such unfamiliar data they place the dipole outside the brain region entirely.
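For readers unfamiliar with the layer notation, a label such as 32-30-30-6 denotes a feedforward network with 32 sensor inputs, two hidden layers of 30 units, and 6 outputs (three location components and three moment components). A minimal forward-pass sketch follows; the tanh hidden activation is assumed purely for illustration, as the excerpt does not state the activation function used:

```python
import numpy as np

def init_mlp(sizes, seed=None):
    """Random weights for a feedforward net; sizes=(32, 30, 30, 6)
    corresponds to the 32-30-30-6 label used in the tables."""
    rng = np.random.default_rng(seed)
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """Forward pass: tanh hidden layers (an assumption), linear output of
    6 values -- three location components and three moment components."""
    for w, b in layers[:-1]:
        x = np.tanh(x @ w + b)
    w, b = layers[-1]
    return x @ w + b
```

A 32-45-45-6 network is the same structure with wider hidden layers, which is why it can fit the more complex forward model at the cost of greater sensitivity to off-distribution inputs.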
Table 7: Results for Realistic Head Model - Complex 32 Sensors (No Noise)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 20.8926 0.815659
NN 32-20-20-6 (1) 20.0345 0.801704
NN 32-30-30-6 (1) 16.9694 0.767548
NN 32-45-45-6 (1) 14.4555 0.755999
NN 32-45-45-6 (2) 16.8214 0.781354
Table 8: Results for Realistic Head Model - Complex 32 Sensors (30 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 74.6068 1.05411
NN 32-20-20-6 (1) 50.4479 0.805437
NN 32-30-30-6 (1) 64.8371 0.78521
NN 32-45-45-6 (1) 82.575 0.79684
NN 32-45-45-6 (2) 62.8111 0.794551
Figure 14: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 32-30-30-6 (Grid Pattern Training). The
figures on the right restrict the voxels tested to a 50 mm radius from the centroid.
Table 9: Results for Realistic Head Model - Complex 32 Sensors (20 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 84.1472 1.14192
NN 32-20-20-6 (1) 140.044 0.846319
NN 32-30-30-6 (1) 204.745 0.891064
NN 32-45-45-6 (1) 237.588 1.065
NN 32-45-45-6 (2) 177.966 0.875519
Table 10: Results for Realistic Head Model - Complex 32 Sensors (10 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 87.272 1.17102
NN 32-20-20-6 (1) 377.005 1.08792
NN 32-30-30-6 (1) 448.378 1.2539
NN 32-45-45-6 (1) 525.871 1.84189
NN 32-45-45-6 (2) 475.79 1.23946
Figure 15: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Grid Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
4.3.2 32 Sensor Configuration - Random Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 16 shows the results for a network of
configuration 32-30-30-6 trained with random training locations.
Tables 11 through 14 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
17 shows the error distribution for the same test for the network configuration
32-30-30-6.
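The 10,000-dipole random test set used throughout these tables can be sketched as follows. This is a hypothetical illustration; the actual sampling code and any anatomical constraints used in the thesis are not shown in this excerpt:

```python
import numpy as np

def random_test_dipoles(grey_voxels, n=10_000, seed=None):
    """Draw n test dipoles: random grey-matter voxel locations and
    random unit moment directions (a sketch of the test protocol)."""
    rng = np.random.default_rng(seed)
    voxels = np.asarray(grey_voxels, float)
    locs = voxels[rng.integers(0, len(voxels), size=n)]
    # Uniformly random directions via normalized Gaussian samples.
    dirs = rng.standard_normal((n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return locs, dirs
```

Each sampled dipole is forward-modeled to sensor data (with or without added noise) and fed to a trained network to obtain the errors averaged in the tables.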
Figure 16: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 32-30-30-6 (Random Pattern Training).
The figures on the right restrict the voxels tested to a 50 mm radius from the
centroid.
Table 11: Results for Realistic Head Model - Complex 32 Sensors (No Noise)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 22.6025 0.591719
NN 32-10-10-6 (2) 21.0263 0.546079
NN 32-10-10-6 (3) 18.742 0.526827
NN 32-10-10-6 (4) 23.2004 0.632537
NN 32-10-10-6 (5) 23.404 0.639938
NN 32-20-20-6 (1) 21.2664 0.674938
NN 32-20-20-6 (2) 17.6022 0.561501
NN 32-20-20-6 (3) 22.4638 0.632259
NN 32-20-20-6 (4) 16.0037 0.543202
NN 32-20-20-6 (5) 18.8272 0.583457
NN 32-30-30-6 (1) 12.641 0.448957
NN 32-30-30-6 (2) 16.6299 0.562351
NN 32-30-30-6 (3) 14.3923 0.545352
NN 32-30-30-6 (4) 16.3946 0.627741
NN 32-30-30-6 (5) 15.3464 0.548796
Table 12: Results for Realistic Head Model - Complex 32 Sensors (30 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 105.63 0.602171
NN 32-10-10-6 (2) 190.326 0.851313
NN 32-10-10-6 (3) 328.953 1.17796
NN 32-10-10-6 (4) 64.4797 0.661202
NN 32-10-10-6 (5) 78.1284 0.663518
NN 32-20-20-6 (1) 58.167 0.698245
NN 32-20-20-6 (2) 70.2551 0.56827
NN 32-20-20-6 (3) 47.3972 0.642494
NN 32-20-20-6 (4) 109.43 0.613526
NN 32-20-20-6 (5) 53.9425 0.591589
NN 32-30-30-6 (1) 175.977 1.06722
NN 32-30-30-6 (2) 79.2474 0.588609
NN 32-30-30-6 (3) 96.7045 0.565338
NN 32-30-30-6 (4) 84.3806 0.651342
NN 32-30-30-6 (5) 80.6535 0.567217
Table 13: Results for Realistic Head Model - Complex 32 Sensors (20 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 301.055 0.666173
NN 32-10-10-6 (2) 412.613 1.06911
NN 32-10-10-6 (3) 995.392 2.16258
NN 32-10-10-6 (4) 190.457 0.856056
NN 32-10-10-6 (5) 223.452 0.811432
NN 32-20-20-6 (1) 163.116 0.842883
NN 32-20-20-6 (2) 210.266 0.619929
NN 32-20-20-6 (3) 126.683 0.732775
NN 32-20-20-6 (4) 319.656 0.919337
NN 32-20-20-6 (5) 151.253 0.648156
NN 32-30-30-6 (1) 466.379 2.08674
NN 32-30-30-6 (2) 233.753 0.753264
NN 32-30-30-6 (3) 293.893 0.692741
NN 32-30-30-6 (4) 242.946 0.790716
NN 32-30-30-6 (5) 246.41 0.70903
Table 14: Results for Realistic Head Model - Complex 32 Sensors (10 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 32-10-10-6 (1) 674.89 0.88787
NN 32-10-10-6 (2) 583.549 1.24564
NN 32-10-10-6 (3) 1720.67 3.01336
NN 32-10-10-6 (4) 481.091 1.51873
NN 32-10-10-6 (5) 560.727 1.34214
NN 32-20-20-6 (1) 461.83 1.50074
NN 32-20-20-6 (2) 593.233 0.845856
NN 32-20-20-6 (3) 348.284 1.12408
NN 32-20-20-6 (4) 632.736 1.57508
NN 32-20-20-6 (5) 448.925 0.979637
NN 32-30-30-6 (1) 894.21 3.16333
NN 32-30-30-6 (2) 613.684 1.43791
NN 32-30-30-6 (3) 753.991 1.16095
NN 32-30-30-6 (4) 647.24 1.28043
NN 32-30-30-6 (5) 672.8 1.41217
Figure 17: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 32-30-30-6 (Random Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
4.3.3 64 Sensor Configuration - Grid Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 18 shows the results for a network of
configuration 64-30-30-6 trained with a grid pattern of dipole locations.
Tables 15 through 18 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
19 shows the error distribution for the same test for the network configuration
64-30-30-6.
Table 15: Results for Realistic Head Model - Complex 64 Sensors (No Noise)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 21.2667 0.837185
NN 64-20-20-6 (1) 16.7691 0.772504
NN 64-30-30-6 (1) 15.9451 0.770317
NN 64-45-45-6 (1) 12.5213 0.706575
Table 16: Results for Realistic Head Model - Complex 64 Sensors (30 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 96.2673 1.2693
NN 64-20-20-6 (1) 101.037 0.943386
NN 64-30-30-6 (1) 32.5327 0.777203
NN 64-45-45-6 (1) 107.468 0.897266
Table 17: Results for Realistic Head Model - Complex 64 Sensors (20 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 117.047 1.37698
NN 64-20-20-6 (1) 301.492 1.58998
NN 64-30-30-6 (1) 81.4571 0.817222
NN 64-45-45-6 (1) 449.939 1.67507
Figure 18: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 64-30-30-6 (Grid Pattern Training). The
figures on the right restrict the voxels tested to a 50 mm radius from the centroid.
Table 18: Results for Realistic Head Model - Complex 64 Sensors (10 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 125.511 1.42438
NN 64-20-20-6 (1) 772.093 2.81019
NN 64-30-30-6 (1) 179.92 0.972
NN 64-45-45-6 (1) 924.377 3.01807
Figure 19: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Grid Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
4.3.4 64 Sensor Configuration - Random Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 20 shows the results for a network of
configuration 64-30-30-6 trained with a random pattern of dipole locations.
Tables 19 through 22 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
21 shows the error distribution for the same test for the network configuration
64-30-30-6.
Figure 20: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 64-30-30-6 (Random Pattern Training).
The figures on the right restrict the voxels tested to a 50 mm radius from the
centroid.
Table 19: Results for Realistic Head Model - Complex 64 Sensors (No Noise)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 24.1692 0.635177
NN 64-10-10-6 (2) 20.6486 0.587828
NN 64-10-10-6 (3) 24.6244 0.571931
NN 64-10-10-6 (4) 22.5207 0.587945
NN 64-10-10-6 (5) 25.944 0.593513
NN 64-20-20-6 (1) 15.6139 0.559264
NN 64-20-20-6 (2) 13.4774 0.490579
NN 64-20-20-6 (3) 13.0836 0.504072
NN 64-20-20-6 (4) 16.08 0.565132
NN 64-20-20-6 (5) 15.874 0.534065
NN 64-30-30-6 (1) 14.9085 0.574971
NN 64-30-30-6 (2) 12.6549 0.485122
NN 64-30-30-6 (3) 16.5273 0.527957
NN 64-30-30-6 (4) 13.5579 0.524288
NN 64-30-30-6 (5) 13.0949 0.517222
Table 20: Results for Realistic Head Model - Complex 64 Sensors (30 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 27.5132 0.635827
NN 64-10-10-6 (2) 233.082 0.781556
NN 64-10-10-6 (3) 26.4216 0.574001
NN 64-10-10-6 (4) 40.0534 0.590357
NN 64-10-10-6 (5) 26.9238 0.59468
NN 64-20-20-6 (1) 92.1113 0.576248
NN 64-20-20-6 (2) 145.44 1.31693
NN 64-20-20-6 (3) 176.269 0.938542
NN 64-20-20-6 (4) 119.814 0.576287
NN 64-20-20-6 (5) 67.3885 0.560586
NN 64-30-30-6 (1) 73.4353 0.579901
NN 64-30-30-6 (2) 142.21 0.566775
NN 64-30-30-6 (3) 29.937 0.538206
NN 64-30-30-6 (4) 98.0801 0.548388
NN 64-30-30-6 (5) 144.756 0.545153
Table 21: Results for Realistic Head Model - Complex 64 Sensors (20 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 44.5813 0.647002
NN 64-10-10-6 (2) 528.359 1.30203
NN 64-10-10-6 (3) 35.3659 0.592638
NN 64-10-10-6 (4) 97.0448 0.60861
NN 64-10-10-6 (5) 32.8335 0.605491
NN 64-20-20-6 (1) 294.733 0.718333
NN 64-20-20-6 (2) 251.349 2.25947
NN 64-20-20-6 (3) 361.881 1.50797
NN 64-20-20-6 (4) 359.352 0.65569
NN 64-20-20-6 (5) 190.559 0.720509
NN 64-30-30-6 (1) 219.919 0.620129
NN 64-30-30-6 (2) 397.02 0.848923
NN 64-30-30-6 (3) 71.2516 0.617658
NN 64-30-30-6 (4) 294.102 0.697646
NN 64-30-30-6 (5) 443.038 0.731188
Table 22: Results for Realistic Head Model - Complex 64 Sensors (10 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 64-10-10-6 (1) 113.756 0.726667
NN 64-10-10-6 (2) 821.282 2.06998
NN 64-10-10-6 (3) 58.595 0.685853
NN 64-10-10-6 (4) 268.509 0.722049
NN 64-10-10-6 (5) 52.115 0.667441
NN 64-20-20-6 (1) 917.642 1.39624
NN 64-20-20-6 (2) 338.638 2.88179
NN 64-20-20-6 (3) 591.052 2.08556
NN 64-20-20-6 (4) 904.237 0.970277
NN 64-20-20-6 (5) 418.575 1.08791
NN 64-30-30-6 (1) 632.593 0.845024
NN 64-30-30-6 (2) 801.483 1.35452
NN 64-30-30-6 (3) 158.507 0.925805
NN 64-30-30-6 (4) 736.248 1.17994
NN 64-30-30-6 (5) 1160.84 1.36511
Figure 21: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 64-30-30-6 (Random Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
4.3.5 128 Sensor Configuration - Grid Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 22 shows the results for a network of configuration 128-45-45-6 trained with a grid pattern of dipole locations (the trained 128-30-30-6 network was corrupted, so a 128-45-45-6 network was trained in its place).
Tables 23 through 26 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
23 shows the error distribution for the same test for the network configuration
128-45-45-6.
Table 23: Results for Realistic Head Model - Complex 128 Sensors (No Noise)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-45-45-6 (1) 13.1907 0.720498
Table 24: Results for Realistic Head Model - Complex 128 Sensors (30 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-45-45-6 (1) 69.6786 0.835909
Table 25: Results for Realistic Head Model - Complex 128 Sensors (20 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-45-45-6 (1) 185.398 1.16294
Table 26: Results for Realistic Head Model - Complex 128 Sensors (10 dB SNR)
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-45-45-6 (1) 421.55 2.03229
Figure 22: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 128-45-45-6 (Grid Pattern Training). The
figures on the right restrict the voxels tested to a 50 mm radius from the centroid.
Figure 23: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 128-45-45-6 (Grid Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
4.3.6 128 Sensor Configuration - Random Pattern Training
Every node on the layer Z = 178 was used as a dipole location with the dipole
directed in the +Z direction. Noise was then added such that the SNR at the sensors was equal to 30, 20, and 10 dB. Figure 24 shows the results for a network of
configuration 128-30-30-6 trained with a random pattern of dipole locations.
Tables 27 through 30 show the average errors from testing 10,000 random
dipole locations and directions for each trained network for this model. Figure
25 shows the error distribution for the same test for the network configuration
128-30-30-6.
Figure 24: Location Error With and Without Added Noise for Realistic Head
Model with Network Configuration: 128-30-30-6 (Random Pattern Training).
The figures on the right restrict the voxels tested to a 50 mm radius from the
centroid.
Table 27: Results for Realistic Head Model - Complex 128 Sensors (No Noise)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-10-10-6 (1) 24.9903 0.602958
NN 128-10-10-6 (2) 19.8442 0.590713
NN 128-10-10-6 (3) 19.3743 0.569592
NN 128-10-10-6 (4) 19.0488 0.550562
NN 128-10-10-6 (5) 20.356 0.594532
NN 128-20-20-6 (1) 22.349 0.721288
NN 128-20-20-6 (2) 11.0142 0.409259
NN 128-20-20-6 (3) 13.6754 0.525906
NN 128-20-20-6 (4) 13.0773 0.498437
NN 128-20-20-6 (5) 15.4137 0.556403
NN 128-30-30-6 (1) 14.2454 0.538284
NN 128-30-30-6 (2) 16.223 0.657255
NN 128-30-30-6 (3) 12.4212 0.486208
NN 128-30-30-6 (4) 13.825 0.550497
NN 128-30-30-6 (5) 14.098 0.563096
Table 28: Results for Realistic Head Model - Complex 128 Sensors (30 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-10-10-6 (1) 29.7161 0.606106
NN 128-10-10-6 (2) 170.257 0.929202
NN 128-10-10-6 (3) 115.509 0.690298
NN 128-10-10-6 (4) 127.182 0.715711
NN 128-10-10-6 (5) 96.9592 0.667589
NN 128-20-20-6 (1) 40.6717 0.731618
NN 128-20-20-6 (2) 208.034 1.98317
NN 128-20-20-6 (3) 94.1695 0.594092
NN 128-20-20-6 (4) 223.703 0.944804
NN 128-20-20-6 (5) 75.9097 0.590296
NN 128-30-30-6 (1) 100.907 0.593521
NN 128-30-30-6 (2) 47.0984 0.678514
NN 128-30-30-6 (3) 121.101 0.550582
NN 128-30-30-6 (4) 72.91 0.570018
NN 128-30-30-6 (5) 73.1262 0.582688
Table 29: Results for Realistic Head Model - Complex 128 Sensors (20 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-10-10-6 (1) 46.6862 0.627979
NN 128-10-10-6 (2) 311.115 1.49039
NN 128-10-10-6 (3) 249.215 0.900148
NN 128-10-10-6 (4) 274.673 1.01268
NN 128-10-10-6 (5) 217.485 0.877942
NN 128-20-20-6 (1) 93.7372 0.803654
NN 128-20-20-6 (2) 314.809 2.50561
NN 128-20-20-6 (3) 292.702 0.954744
NN 128-20-20-6 (4) 587.44 1.7615
NN 128-20-20-6 (5) 227.72 0.779227
NN 128-30-30-6 (1) 295.368 0.848571
NN 128-30-30-6 (2) 132.824 0.828996
NN 128-30-30-6 (3) 320.591 0.763234
NN 128-30-30-6 (4) 216.616 0.692932
NN 128-30-30-6 (5) 217.378 0.711431
Table 30: Results for Realistic Head Model - Complex 128 Sensors (10 dB SNR)
Random Training Pattern
Network Configuration Avg. Distance Error (mm) Avg. Moment Error (mm)
NN 128-10-10-6 (1) 72.602 0.703628
NN 128-10-10-6 (2) 416.702 1.82153
NN 128-10-10-6 (3) 369.277 1.04206
NN 128-10-10-6 (4) 422.009 1.32148
NN 128-10-10-6 (5) 385.392 1.12283
NN 128-20-20-6 (1) 210.895 1.13247
NN 128-20-20-6 (2) 389.712 2.71662
NN 128-20-20-6 (3) 742.408 1.63886
NN 128-20-20-6 (4) 1028.59 2.50361
NN 128-20-20-6 (5) 480.844 1.2388
NN 128-30-30-6 (1) 640.518 1.51339
NN 128-30-30-6 (2) 346.707 1.4336
NN 128-30-30-6 (3) 599.468 1.12513
NN 128-30-30-6 (4) 537.584 1.12938
NN 128-30-30-6 (5) 540.226 1.13756
Figure 25: Location Error Distribution With and Without Added Noise for Realistic Head Model with Network Configuration: 128-30-30-6 (Random Pattern Training). Panels: (a) No Noise, (b) 30 dB SNR, (c) 20 dB SNR, (d) 10 dB SNR.
5 DISCUSSION
To this researcher's knowledge, source localization with a head model of this fidelity has not been attempted before. Realistic head shapes with realistic sensor locations have been modelled and tested [10]; however, the resolution was not as high as 1 mm × 1 mm × 1 mm, and the different conductivities of the grey and white matter tissues were not taken into account. In fact, the results in Tables 3 through 6 confirm the results of those previous experiments [10]. It is of interest, then, to note how the results differ when we do take the different tissue conductivities into account.
When we compare the results from Tables 3 through 6, our realistic head model with a homogeneous brain region, to Tables 7 through 30, our more complex realistic head model, a common theme emerges. For the homogeneous model, adding noise such that the SNR equals 30 dB raises the average location error by at most 1 mm. The more complex head model behaves completely differently: the same amount of noise produces jumps in average location error of several centimeters, and this only worsens as more noise is added.
Why is this happening? I believe a neural network can source localize in a homogeneous head model so well because of the almost linear relationship between the dipole and what is picked up by the sensors on the scalp. If a test dipole is a few millimeters from a training dipole, the sensor data would only be slightly different from the training dipole's sensor data. This is what the network sees when noise is added: data that is slightly different from the true sensor data. In this case the network will believe that the dipole is in a slightly different location, and so the location error is slightly off. As we increase the noise we would expect this to get worse, and Tables 3 through 6 show exactly that.
However, if a dipole is a few millimeters from a training dipole in the more complex realistic head model, the sensor data may be significantly different from the training dipole's sensor data. This is because the energy must pass through large patches of white matter before it reaches each sensor on the scalp, and as it travels through each tissue type it is attenuated at a different rate. Slight differences in sensor data can therefore correspond to large differences in location between two dipoles. Despite this complexity, each of the networks trained for this model shows decent average accuracy, with none worse than 2.6 cm and most less than 2 cm. This can be seen by comparing Tables 7, 11, 15, 19, 23, and 27. When these same networks are presented with the exact same test dipoles but with slightly noisy sensor data, 30 dB SNR, they become extremely inaccurate, as can be seen in Tables 8, 12, 16, 20, 24, and 28. This tells us that these networks are extremely sensitive to any abnormality in the sensor data, and it only gets worse as we add more noise.
We can also see this disparity when we look at error histograms from two
networks of the same complexity trained with the exact same grid points. One
network is trained for the homogeneous brain model, Figure 13, and one is trained
for the complex head model, Figure 15. Both networks are 32-30-30-6 in configuration. As can be seen, for the homogeneous brain model the error values grow as the noise increases, but in all cases they remain relatively tightly grouped. For the complex head model the error values are initially tightly grouped, but a significant spread forms when noise is added. This trend continues for every other kind of network trained, as can be seen in Figures 17, 19, 21, 23, and 25.
Another point of note is where the greatest errors occur in each model. To see this, I chose a layer of voxels on the Z-plane near the center of the brain, Z = 178, and tested each point for dipole location accuracy. The results can be seen in Figures 12, 14, 16, 18, 20, 22, and 24.
It is interesting to note that for the homogeneous brain model without noise, the greatest errors occur at the outermost regions, particularly the frontal lobe area, and in areas where training dipoles were scarce, such as the edges of white matter regions.
For the complex head model without noise the greatest errors occur in all cases
in the frontal lobe area. This does not mean that the complex head model does
not suffer from the same problem with test dipoles at the edge of white matter
regions. In fact for every case the errors in the frontal lobe areas are so bad that
almost all other errors are drowned out. When we restrict the voxels tested to only those within 50 mm of the center of each figure, we see the same if not worse error regions than what we see in the homogeneous model.
When we add noise to each case we see something interesting. In the
homogeneous brain model as we increase the noise the greatest errors tend to
occur in the center of the brain. This is likely because signals picked up from dipoles in the center of the brain have a much lower power by the time they reach the sensors, making them more susceptible to minor changes, as we would expect. This, however, is not the case with the more complex head model: in every case the greatest errors occur in seemingly random areas throughout the brain region, and they become more random as the noise increases. This is because almost all the voxels are very sensitive to noise, as mentioned earlier, and since noise is inherently random we see random error values everywhere.
6 CONCLUSION
Several different configurations were trained for different types of head models:
a 2D universally homogeneous circular head model, a high definition realistic head
model with homogeneous brain region, and a high definition realistic head model
with realistic brain region conductivities.
Each of these networks was subjected to a multitude of tests to determine average location and moment error with and without noise. The first set of tests placed a dipole at every possible grey matter voxel on the layer Z = 178 with its direction in +Z. This was repeated with added noise such that the sensor data had an SNR of 30 dB, 20 dB, and 10 dB. The next set of tests placed 10,000 dipoles at random locations with random directions and determined the average location and moment errors in millimeters. This test was likewise repeated with added noise at SNRs of 30 dB, 20 dB, and 10 dB. The error distribution data for a single network of interest is presented for each model as an example.
In all cases each network is able to reliably source localize a single dipole with accuracy provided there is absolutely no noise in the signal. Unlike the homogeneous models, in every case the more complex realistic head model networks became significantly inaccurate when noise was added to the signal. This is due to the added complexity that must be trained into each of these networks, which makes them far more sensitive to noise.
If we can agree that the more complex realistic head model is a better model
of the human head than the realistic head model with a homogeneous brain region,
then it is the recommendation of this author that the network configurations
trained for this project should not be put to clinical use. It may be possible
to achieve better results by segmenting the brain into regions, training neural
networks to source localize in those regions, and training an overall network to
determine which region the dipole resides in. Another option may be to train a significantly more complex network; however, time, memory, and the possibility of overfitting are all problems to consider. It may also be possible to capture the complexity of this problem within the increasingly popular field of neuro-fuzzy networks.
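The region-based scheme suggested above could be organized as a two-stage pipeline. The sketch below is purely hypothetical: the region classifier and the per-region localizer networks are stand-ins for networks that would have to be trained, and nothing here comes from the thesis itself:

```python
import numpy as np

def two_stage_localize(sensor_data, classify_region, localizers):
    """Sketch of the proposed scheme: a coarse classifier first picks a
    brain region from the sensor data, then that region's dedicated
    network estimates the 6-value dipole (location + moment)."""
    region = classify_region(sensor_data)   # stage 1: coarse region label
    return localizers[region](sensor_data)  # stage 2: region-specific net
```

Each per-region network would only need to learn the forward model locally, which might reduce the noise sensitivity observed in the single monolithic networks.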
REFERENCES
[1] Abeyratne, Udantha R., Yohsuke Kinouchi, Hideo Oki, Jun Okada, Fumio
Shichijo, and Keizo Matsumoto. ”Artificial Neural Networks for Source Lo-
calization in the Human Brain.” Brain Topography 4.1 (1991): 3-21. Print.
[2] Abeyratne, Udantha R., G. Zhang, and P. Saratchandran. ”EEG Source Lo-
calization: A Comparative Study of Classical and Neural Network Methods.”
International Journal of Neural Systems 11.4 (2001): 349-59. Print.
[3] Dang, Hung V., and Kwong T. Ng. ”Finite Difference Neuroelectric Modeling
Software.” Journal of Neuroscience Methods 198.2 (2011): 359-63. Print.
[4] Dang, Hung V. ”Performance Analysis of Adaptive EEG Beamformers.” Diss.
New Mexico State University, 2007. Print.
[5] Hagan, Martin T., Howard B. Demuth, and Mark H. Beale. Neural Network
Design. Boulder, CO: Distributed by Campus Pub. Service, University of
Colorado Bookstore, 2002. Print.
[6] Jenkinson, M., CF Beckmann, TE Behrens, MW Woolrich, and SM Smith.
”FSL.” NeuroImage 62 (2012): 782-90. Print.
[7] Kamijo, Ken’ichi, Tomoharu Kiyuna, Yoko Takaki, Akihisa Kenmochi, Tet-
suji Tanigawa, and Toshimasa Yamazaki. ”Integrated Approach of an Artifi-
cial Neural Network and Numerical Analysis to Multiple Equivalent Current
Dipole Source Localization.” Frontiers of Medical & Biological Engineering
10.4 (2001): 285-301. Print.
[8] Lau, Clifford. Neural Networks: Theoretical Foundations and Analysis. New
York: IEEE, 1992. Print.
[9] Steinberg, Ben Zion, Mark J. Beran, Steven H. Chin, and James H. Howard,
Jr. ”A Neural Network Approach to Source Localization.” The Journal of the
Acoustical Society of America 90.4 (1991): 2081-090. Print.
[10] Van Hoey, Gert, Jeremy De Clercq, Bart Vanrumste, Rik Van De Walle,
Ignace Lemahieu, Michel D’Have, and Paul Boon. ”EEG Dipole Source Lo-
calization Using Artificial Neural Networks.” Physics in Medicine & Biology
45.4 (2000): 997-1011. IOPscience. Web. 22 May 2013.
[11] Vemuri, V. Rao. Artificial Neural Networks: Concepts and Control Applica-
tions. Los Alamitos, CA: IEEE Computer Society, 1992. Print.
[12] Yuasa, Motohiro, Qinyu Zhang, Hirofumi Nagashino, and Yohsuke Kinouchi.
”EEG Source Localization for Two Dipoles by Neural Networks.” Proceedings
of the 20th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society 20.4 (1998): 2190-192. Print.
[13] Yushkevich, Paul A., Joseph Piven, Heather Cody Hazlett, Rachel Gimpel
Smith, Sean Ho, James C. Gee, and Guido Gerig. ”User-guided 3D Active
Contour Segmentation of Anatomical Structures: Significantly Improved Ef-
ficiency and Reliability.” NeuroImage 31.3 (2006): 1116-128. Print.
[14] Zhang, Q., X. Bai, M. Akutagawa, H. Nagashino, Y. Kinouchi, F. Shichijo,
S. Nagahiro, and L. Ding. ”A Method for Two EEG Sources Localization
by Combining BP Neural Networks with Nonlinear Least Square Method.”
Control, Automation, Robotics and Vision, 2002. ICARCV 2002. 7th Inter-
national Conference 1 (2002): 536-41. Print.
[15] Zhang, Qinyu, Motohiro Yuasa, Hirofumi Nagashino, and Yohsuke Kinouchi.
”Single Dipole Source Localization From Conventional EEG Using BP Neural
Networks.” Engineering in Medicine and Biology Society, 1998. Proceedings
of the 20th Annual International Conference of the IEEE 4 (1998): 2163-166.
Print.