This study analyzed spike train data recorded from neurons in the dorsolateral prefrontal cortex (DLPFC) of a monkey performing a working memory task. Spike train distance metrics were applied to quantify how information about the task was encoded temporally. Optimal parameters were identified for single-unit and multi-unit analyses. Information encoding was found to vary across time intervals of the task, with some neuron pairs showing higher information at different times. Visualizations using t-SNE helped demonstrate that target location could be decoded from spike train distances. The study helps quantify temporal encoding in the DLPFC during working memory tasks.
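The abstract does not say which spike train distance metric was used; one standard choice is the Victor-Purpura distance, sketched below purely as an illustration. Spike times and the cost parameter q are hypothetical.

```python
# Hedged sketch of one common spike-train distance (Victor-Purpura):
# the minimal cost of turning one spike train into another, where
# inserting or deleting a spike costs 1 and moving a spike by dt
# costs q*|dt|. Computed with edit-distance dynamic programming.

def victor_purpura(train_a, train_b, q=1.0):
    na, nb = len(train_a), len(train_b)
    # dp[i][j] = distance between the first i spikes of a and first j of b
    dp = [[0.0] * (nb + 1) for _ in range(na + 1)]
    for i in range(na + 1):
        dp[i][0] = float(i)
    for j in range(nb + 1):
        dp[0][j] = float(j)
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            shift = q * abs(train_a[i - 1] - train_b[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,          # delete a spike
                           dp[i][j - 1] + 1,          # insert a spike
                           dp[i - 1][j - 1] + shift)  # move a spike
    return dp[na][nb]

# two trains differing only in one spike shifted by 0.1 s
d = victor_purpura([0.1, 0.5, 0.9], [0.1, 0.6, 0.9])
```

With small q the metric behaves like a rate comparison; with large q it becomes sensitive to precise spike timing, which is what makes it useful for quantifying temporal encoding.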
A Threshold Logic Unit (TLU) is a mathematical function conceived as a crude model, or abstraction, of biological neurons; threshold logic units are the constitutive units of an artificial neural network. In this paper, a positive clock-edge-triggered T flip-flop is designed using the Perceptron Learning Algorithm, a basic design algorithm for threshold logic units. This T flip-flop is then used to design a two-bit up-counter that cycles through the states 0, 1, 2, 3, 0, 1… Ultimately, the goal is to show how to design simple logic units based on threshold-logic perceptron concepts.
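The flip-flop design itself is not reproduced in the abstract; as a minimal sketch of the Perceptron Learning Algorithm it builds on, here a TLU is trained on the AND function (a stand-in for the flip-flop's excitation logic).

```python
# Sketch: training a Threshold Logic Unit with the Perceptron Learning
# Algorithm on AND. The target function and learning rate are
# illustrative, not taken from the paper.

def tlu(weights, bias, x):
    """Threshold logic unit: fires 1 if the weighted sum reaches 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else 0

def train_perceptron(samples, lr=1.0, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - tlu(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop reaches a weight vector that classifies all four inputs correctly.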
Energy-Efficient Target Coverage in Wireless Sensor Networks Based on Modifie... (ijasuc)
One of the major issues in the target-coverage problem of wireless sensor networks is extending the network lifetime. This can be addressed by selecting a minimum set of working nodes that covers all the targets. This paper proposes a new method in which the minimum set of working nodes is selected by a modified Ant Colony algorithm. Experimental results show that the level of algorithmic complexity is reduced, the search time is shortened, and the proposed algorithm outperforms the comparison algorithm.
Spiking Neural Networks As Continuous-Time Dynamical Systems: Fundamentals, E... (IDES Editor)
This article presents a simple and effective analog spiking-neural-network simulator, realized with an event-driven method that takes into account a basic biological neuron parameter: spike latency. Other fundamental biological parameters are also considered, such as subthreshold decay and the refractory period. The model makes it possible to synthesize neural groups able to carry out substantial functions. The proposed simulator is applied to elementary structures, for which several properties and interesting applications are discussed, such as the realization of a spiking-neural-network classifier.
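The biological parameters the abstract names can be sketched with a minimal leaky integrate-and-fire style neuron. All parameter values below are illustrative, not taken from the paper.

```python
# Minimal sketch of subthreshold decay and a refractory period.
# (Spike latency, the paper's key parameter, would additionally delay
# the emitted spike; it is omitted here for brevity.)

class SpikingNeuron:
    def __init__(self, threshold=1.0, decay=0.9, refractory=2):
        self.threshold = threshold
        self.decay = decay            # subthreshold membrane decay per step
        self.refractory = refractory  # silent steps after a spike
        self.potential = 0.0
        self.cooldown = 0

    def step(self, input_current):
        """Advance one time step; return True if the neuron spikes."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return False
        self.potential = self.potential * self.decay + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0
            self.cooldown = self.refractory
            return True
        return False

neuron = SpikingNeuron()
spikes = [neuron.step(0.5) for _ in range(10)]
```

Under constant drive the potential integrates over a few steps, spikes, then the refractory period enforces a gap before the cycle repeats, which is the basic dynamics the simulator exploits.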
Classification of Electroencephalograph (EEG) Signals Using Quantum Neural Ne... (CSCJournals)
In this paper, a quantum neural network (QNN), a class of feedforward neural network (FFNN), is used to recognize EEG signals. For this purpose, independent component analysis (ICA), the wavelet transform (WT) and the Fourier transform (FT) are used for feature extraction after normalization of the signals. The QNN architecture has inherently built-in fuzziness: the hidden units of these networks develop quantized representations of the sample information provided by the training data at various graded levels of certainty. Experimental results presented here show that QNNs are capable of recognizing structures in data, a property that conventional FFNNs with sigmoidal hidden units lack. Finally, the QNN gave faster and more realistic results than the FFNN. Simulation results show total classification accuracies of 81.33% for ICA, 76.67% for WT and 67.33% for FT.
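The "graded levels of certainty" idea can be sketched with a multilevel activation: a superposition of shifted sigmoids whose output settles into quantized levels rather than a single 0/1 step. The jump positions and steepness below are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of a quantum-neuron activation: the average of
# sigmoids shifted to several "jump" positions produces a staircase
# of graded output levels.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def quantum_activation(x, jumps=(-2.0, 0.0, 2.0), beta=5.0):
    """Average of sigmoids shifted to each jump position."""
    return sum(sigmoid(beta * (x - j)) for j in jumps) / len(jumps)

# probe the staircase between the jumps
levels = [round(quantum_activation(x), 2) for x in (-3.0, -1.0, 1.0, 3.0)]
```

With three jumps the unit can express four graded levels (0, 1/3, 2/3, 1), which is the quantized representation a sigmoidal hidden unit cannot form.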
Load balancing in public cloud combining the concepts of data mining and netw... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
On The Application of Hyperbolic Activation Function in Computing the Acceler... (iosrjce)
The hyperbolic activation function is examined for its ability to accelerate data mining performed with a technique named the Reverse Analysis method. In this paper, we describe how a Hopfield network performs better with a hyperbolic activation function and is able to induce logical rules from a large database using the Reverse Analysis method: given the values of the connections of a network, we hope to determine what logical rules are entrenched in the database. We limit our analysis to Horn clauses. The analysis for this study was simulated using Microsoft Visual C++ 2010 Express.
Incorporating Kalman Filter in the Optimization of Quantum Neural Network Par... (Waqas Tariq)
The Kalman filter has been used for the estimation of the instantaneous states of linear dynamic systems; it is a good tool for inferring missing information from noisy measurements. The quantum neural network is another approach to merging fuzzy logic with neural networks, built by investing quantum-mechanics theory in the structure of the network. The gradient-descent algorithm has been widely used to train neural networks, but susceptibility to local minima is one of its disadvantages. This paper presents an algorithm to train the quantum neural network using the extended Kalman filter.
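The paper uses the extended Kalman filter to train network weights; as a hedged sketch of only the underlying filter idea, here is a one-dimensional Kalman filter estimating a constant from noisy measurements (all values illustrative).

```python
# 1-D Kalman filter: predict (variance grows by process noise),
# then update (state moves toward the measurement by the Kalman gain).

def kalman_1d(measurements, process_var=1e-5, meas_var=0.5):
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var       # predict step
        k = p / (p + meas_var)  # Kalman gain: trust in the measurement
        x += k * (z - x)       # update toward the measurement
        p *= (1 - k)           # posterior variance shrinks
        estimates.append(x)
    return estimates

noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]  # noisy readings of a true value 1.0
est = kalman_1d(noisy)
```

Each update is a weighted average of prediction and measurement, which is why the filter can infer missing information from noisy data; the EKF extends this by linearizing a nonlinear model (here, the QNN) around the current estimate.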
Energy aware model for sensor network: a nature inspired algorithm approach (ijdms)
In this paper we propose an energy-aware model for sensor networks. In our approach, we first use the DBSCAN clustering technique to exploit the spatiotemporal correlation among the sensors; we then identify a subset of sensors, called representative sensors, that represents the entire network state. Finally, we use nature-inspired algorithms such as Ant Colony Optimization, Bee Colony Optimization and Simulated Annealing to find the optimal transmission path for data transmission. We conducted our experiments on the publicly available Intel Berkeley Research Lab dataset, and the results show that energy consumption can be reduced.
Web spam classification using supervised artificial neural network algorithms (aciijournal)
Due to the rapid growth in the technology employed by spammers, there is a need for classifiers that are more efficient, generic and highly adaptive. Neural-network-based technologies have a high capacity for adaptation as well as generalization. To our knowledge, very little work has been done in this field using neural networks; we present this paper to fill that gap. This paper evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of recent web spam pattern classification: the Conjugate Gradient algorithm, Resilient Backpropagation learning, and the Levenberg-Marquardt algorithm.
Erca: energy efficient routing and reclustering (aciijournal)
The pervasive application of wireless sensor networks (WSNs) is challenged by the scarce energy of sensor nodes. En-route filtering schemes, especially commutative-cipher-based en-route filtering (CCEF), can save energy with better filtering capacity. However, this approach suffers from fixed paths and inefficient underlying routing designed for ad-hoc networks. Moreover, as the number of remaining sensor nodes decreases, the probability of network partition increases. In this paper, we propose an energy-efficient routing and re-clustering algorithm (ERCA) to address these limitations. In the proposed scheme, once the number of sensor nodes falls below a certain threshold, the cluster size and transmission range are adjusted dynamically to maintain cluster node density. Performance results show that our approach demonstrates filtering power, better energy efficiency, and an average gain of over 285% in network lifetime.
Further results on the joint time delay and frequency estimation without eige...IJCNCJournal
Joint Time Delay and Frequency Estimation (JTDFE) problem of complex sinusoidal signals received at
two separated sensors is an attractive problem that has been considered for several engineering
applications. In this paper, a high resolution null (noise) subspace method without eigenvalue
decomposition is proposed. The direct data Matrix is replaced by an upper triangular matrix obtained from
Rank-Revealing LU (RRLU) factorization. The RRLU provides accurate information about the rank and the
numerical null space which make it a valuable tool in numerical linear algebra.The proposed novel method
decreases the computational complexity of JTDFE approximately to the half compared with RRQR
methods. The proposed method generates estimates of the unknown parameters which are based on the
observation and/or covariance matrices. This leads to a significant improvement in the computational load.
Computer simulations are included in this paper to demonstrate the proposed method.
Boundness of a neural network weights using the notion of a limit of a sequence (IJDKP)
A feed-forward neural network with the backpropagation learning algorithm is considered a black-box learning classifier, since there is no certain interpretation or anticipation of the behavior of the network weights. The weights of a neural network are considered the learning tool of the classifier, and the learning task is performed by the repeated modification of those weights. This modification is performed using the delta rule, which is mainly used in the gradient-descent technique. In this article a proof is provided that helps to understand and explain the behavior of the weights in a feed-forward neural network with the backpropagation learning algorithm. It also illustrates why a feed-forward neural network is not always guaranteed to converge to a global minimum. Moreover, the proof shows that the weights in the neural network are upper bounded (i.e., they do not approach infinity).
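The delta rule the abstract refers to can be sketched for a single linear unit: each weight moves against the gradient of the squared error on one sample. The data point and learning rate below are illustrative.

```python
# Delta rule: w_i <- w_i + lr * (target - output) * x_i,
# i.e. one gradient-descent step on the squared error of a linear unit.

def delta_rule_step(weights, x, target, lr=0.1):
    """One weight update on one sample."""
    output = sum(w * xi for w, xi in zip(weights, x))
    error = target - output
    return [w + lr * error * xi for w, xi in zip(weights, x)]

w = [0.0, 0.0]
for _ in range(100):
    w = delta_rule_step(w, [1.0, 2.0], target=3.0)
out = w[0] * 1.0 + w[1] * 2.0   # converges toward the target 3.0
```

On this single sample the error shrinks geometrically (here by a factor of 1 - lr·||x||² = 0.5 per step), which illustrates the bounded-weights behavior the article's proof is about; with many samples and hidden layers the dynamics are what make convergence to a global minimum non-guaranteed.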
Energy efficient approach based on evolutionary algorithm for coverage contro... (ijcseit)
Coverage and connectivity are two important requirements in Wireless Sensor Networks (WSNs). In this paper, we address the problem of network coverage and connectivity and propose an energy-efficient approach based on a genetic evolutionary algorithm for maintaining coverage and connectivity where the sensor nodes can have different sensing and transmission ranges. The proposed algorithm is simulated and its efficiency is demonstrated via different experiments.
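As a purely illustrative sketch (not the paper's algorithm), a genetic algorithm can choose which sensors to keep active so every target stays covered while the active count, a proxy for energy, is minimized. The coverage instance below is hypothetical.

```python
# Tiny GA over bit-string genomes: bit i = sensor i active.
# Fitness rewards covering all targets with as few sensors as possible.
import random

random.seed(0)

covers = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {1, 3}]  # targets each sensor sees
targets = {0, 1, 2, 3}

def fitness(genome):
    covered = set().union(*[covers[i] for i, g in enumerate(genome) if g])
    if covered != targets:
        return -1                         # infeasible: a target is uncovered
    return len(genome) - sum(genome)      # fewer active sensors is better

def evolve(pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in covers] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]    # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(covers))
            child = a[:cut] + b[cut:]     # one-point crossover
            if random.random() < 0.2:     # mutation: flip one bit
                j = random.randrange(len(covers))
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
active = [i for i, g in enumerate(best) if g]
```

The paper's real fitness would additionally account for connectivity and heterogeneous sensing/transmission ranges; this sketch shows only the encode-select-crossover-mutate loop.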
Jean Nelson
Covington, WA 98042
360-886-1594
JeanNelson1005@hotmail.com
Well-rounded, seasoned technical professional with over a decade of experience in Information Technologies, with emphasis on writing and copy-editing technical communications and on layout and design using current software technology. Results-driven, proactive approach to learning new technologies in order to deliver exemplary results. Extensive experience in leading and building cohesive, collaborative teams.
Resume - Taranjeet Singh - 3.5 years - Java/J2EE/GWT (taranjs)
This is an OLD PROFILE.
For the latest one, visit: http://www.linkedin.com/in/taranjs
and connect with me using my email: taranjs at gmail dot com
B.Tech. (Electronics and Communication) from Guru Gobind Singh Indraprastha University.
● Proficiency in grasping new technical concepts quickly and utilizing them in a productive manner.
● A proactive learner with a flair for adopting emerging trends and addressing industry requirements to achieve organizational objectives and profitability norms.
● An exceptional performer with the distinction of being commended for exemplary performance in academics as well as extra-curricular activities.
● Total experience: 3 years and 5 months
Black-box modeling of nonlinear system using evolutionary neural NARX model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model using a mathematical approach; a black-box modeling approach requiring no prior knowledge is therefore necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network combining a neural network with a modified differential evolution algorithm is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric-actuator SISO system and an experimental quadruple-tank MIMO system.
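A NARX model maps past outputs and inputs to the next output: y(t) = f(y(t-1), …, y(t-ny), u(t-1), …, u(t-nu)). This sketch only builds the regressor vectors the neural network f would be trained on; the signals and lag orders are illustrative.

```python
# Build (regressor, target) pairs for fitting a NARX map f.

def narx_regressors(u, y, ny=2, nu=2):
    """Return (input_vector, target) pairs: past ny outputs + nu inputs."""
    pairs = []
    start = max(ny, nu)
    for t in range(start, len(y)):
        x = y[t - ny:t] + u[t - nu:t]   # lagged outputs then lagged inputs
        pairs.append((x, y[t]))
    return pairs

u = [0.0, 1.0, 0.0, 1.0, 1.0]   # hypothetical input sequence
y = [0.0, 0.1, 0.5, 0.3, 0.7]   # hypothetical measured output
pairs = narx_regressors(u, y)
```

In the paper, f is a neural network whose weights are found by a modified differential evolution algorithm rather than backpropagation; the regressor structure above is the part that makes the model "NARX".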
Classification of Iris Data using Kernel Radial Basis Probabilistic Neural N... (Scientific Review SR)
The Radial Basis Probabilistic Neural Network (RBPNN) has a broad generalization capability and has been successfully applied in multiple fields. In this paper, the Euclidean distance of each data point in the RBPNN is extended by calculating its kernel-induced distance instead of the conventional sum-of-squares distance. The kernel function is a generalization of the distance metric that measures the distance between two data points as if they were mapped into a high-dimensional space. In a comparison of the four constructed classification models, Kernel RBPNN, Radial Basis Function networks, RBPNN and Back-Propagation networks, results showed that classification of the Iris data with Kernel RBPNN displays outstanding performance.
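The kernel-induced distance the abstract describes follows directly from the kernel trick: with kernel K, the squared distance between the feature-space images of x and y is K(x,x) - 2K(x,y) + K(y,y). A Gaussian kernel is used below as an illustrative choice; the paper's kernel may differ.

```python
# Kernel-induced distance: distance after the implicit feature mapping,
# computed without ever constructing the high-dimensional vectors.
import math

def gaussian_kernel(x, y, sigma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def kernel_distance(x, y, kernel=gaussian_kernel):
    d2 = kernel(x, x) - 2 * kernel(x, y) + kernel(y, y)
    return math.sqrt(max(d2, 0.0))  # clip tiny negative round-off

# two hypothetical Iris measurements (sepal length, sepal width)
d = kernel_distance([5.1, 3.5], [4.9, 3.0])
```

Swapping this in for the Euclidean distance is the whole modification: the RBPNN's structure stays the same, but its notion of "near" now reflects the kernel's feature space.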
Electroencephalography Signal Classification based on Sub-Band Common Spatial P... (IOSRJVSP)
A brain-computer interface (BCI) is a communication pathway between the brain and an external device; it translates human thought into commands to control external devices. Electroencephalography (EEG) is a cost-effective and easier way to implement a BCI. This paper presents a novel method for classifying EEG during motor imagery by combining the common spatial pattern (CSP) and linear discriminant analysis (LDA). In the proposed method, the EEG signal is bandpass-filtered into multiple frequency bands, CSP features are extracted from each of these bands, and an LDA classifier is then used to classify the CSP features. Experimental results are presented on a publicly available BCI competition dataset, and the performance is compared with existing approaches. The results show that the proposed method yields superior cross-validation accuracies compared to prevailing methods.
Data Driven Choice of Threshold in Cepstrum Based Spectrum Estimates (ipij)
Cepstrum thresholding is an effective yet simple way of obtaining a smoothed non-parametric spectrum estimate of a stationary signal. The major problem of this method is the choice of the threshold value for variance reduction of the spectrum estimates. This paper proposes a new threshold-selection method based on cross-validation schemes such as Leave-One-Out, Leave-Two-Out and Leave-Half-Out. The new methods are easy to describe, simple to implement, and do not impose severe conditions on the unknown spectrum. Numerical results suggest that the new methods agree with those obtained when the spectrum is fully known.
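A hedged sketch of the cepstrum-thresholding idea itself: transform the log periodogram to the cepstral domain, zero coefficients below a threshold, and transform back to a smoothed log spectrum. The fixed threshold here is exactly what the paper replaces with a cross-validated, data-driven choice; signal and threshold values are illustrative.

```python
# Cepstrum thresholding with a naive DFT (fine for a 16-point demo).
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

def smoothed_log_spectrum(signal, threshold=0.1):
    spectrum = dft(signal)
    log_per = [math.log(abs(c) ** 2 + 1e-12) for c in spectrum]  # log periodogram
    cepstrum = idft(log_per)                                     # cepstral domain
    kept = [c if abs(c) > threshold else 0.0 for c in cepstrum]  # threshold
    return [s.real for s in dft(kept)]                           # back to log spectrum

sig = [math.sin(2 * math.pi * t / 8) for t in range(16)]
smooth = smoothed_log_spectrum(sig)
```

Zeroing small cepstral coefficients reduces the variance of the spectrum estimate at the cost of some bias, which is why the threshold choice matters.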
The International Journal of Engineering and Science (The IJES) (theijes)
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
METHOD FOR THE DETECTION OF MIXED QPSK SIGNALS BASED ON THE CALCULATION OF FO... (sipij)
In this paper we propose a method for the detection of Carrier-in-Carrier signals using QPSK modulation. The method is based on the calculation of fourth-order cumulants. In accordance with a methodology based on the Receiver Operating Characteristic (ROC) curve, a threshold value for the decision rule is established. It was found that the proposed method provides correct detection of the sum of QPSK signals over a wide range of signal-to-noise ratios and for different bandwidths of the mixed signals. The obtained results indicate the high efficiency of the proposed detection method, and its advantage over the "radiuses" method is also shown.
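For a zero-mean complex signal, the fourth-order cumulant the abstract relies on can be written C40 = E[x⁴] - 3(E[x²])². For an ideal unit-power QPSK constellation C40 equals -1, which is what makes the statistic useful for telling signal mixtures apart from Gaussian noise (whose cumulant is 0). The sketch below evaluates it on the four ideal symbols; real use would average over many noisy samples.

```python
# Sample estimate of the fourth-order cumulant C40 for zero-mean
# complex data: C40 = E[x^4] - 3*(E[x^2])^2.

def cumulant_c40(samples):
    n = len(samples)
    m4 = sum(x ** 4 for x in samples) / n
    m2 = sum(x ** 2 for x in samples) / n
    return m4 - 3 * m2 ** 2

# ideal unit-power QPSK constellation points
qpsk = [(a + 1j * b) / (2 ** 0.5) for a in (-1, 1) for b in (-1, 1)]
c40 = cumulant_c40(qpsk)
```

The detector then compares such a cumulant estimate against the ROC-derived threshold to decide whether a second QPSK signal is hidden under the first.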
Spatial correlation based clustering algorithm for random and uniform topolog... (eSAT Publishing House)
Neural Networks for High Performance Time-Delay Estimation and Acoustic Sourc... (cscpconf)
Time-delay estimation is an essential building block of many signal processing applications. This paper follows up on earlier work on acoustic source localization and time-delay estimation using pattern recognition techniques in adverse environments such as reverberant rooms or underwater. It presents unprecedented high-performance results obtained with supervised training of neural networks, which challenge the state of the art, and compares their performance to that of well-known methods such as Generalized Cross-Correlation and Adaptive Eigenvalue Decomposition.
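The baseline the paper compares against can be sketched simply: estimate the delay between two sensor signals as the lag that maximizes their cross-correlation (plain correlation here; Generalized Cross-Correlation adds spectral weighting). The signals below are a hypothetical two-sample shift.

```python
# Cross-correlation time-delay estimation: scan candidate lags and
# keep the one with the largest correlation.

def estimate_delay(x, y, max_lag):
    """Return the lag of y relative to x with the highest correlation."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(x[t] * y[t + lag]
                   for t in range(len(x))
                   if 0 <= t + lag < len(y))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

x = [0, 0, 1, 2, 1, 0, 0, 0]
y = [0, 0, 0, 0, 1, 2, 1, 0]   # x delayed by two samples
lag = estimate_delay(x, y, max_lag=3)
```

In reverberant or underwater channels the correlation peak gets smeared by echoes, which is precisely the regime where the paper's learned estimators outperform this baseline.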
Using Subspace Pursuit Algorithm to Improve Performance of the Distributed Co...Polytechnique Montreal
This paper applies a compressed algorithm to improve the spectrum sensing performance of cognitive radio technology.
At the fusion center, the recovery error in the analog to information converter (AIC) when reconstructing the
transmit signal from the received time-discrete signal causes degradation of the detection performance. Therefore, we
propose a subspace pursuit (SP) algorithm to reduce the recovery error and thereby enhance the detection performance.
In this study, we employ a wide-band, low SNR, distributed compressed sensing regime to analyze and evaluate the
proposed approach. Simulations are provided to demonstrate the performance of the proposed algorithm.
Biomedical Signals Classification With Transformer Based Model.pptxSandeep Kumar
Seizures are caused by abnormal neuronal activity and are a common chronic brain disease. It is estimated that around 60 million people have epileptic seizures worldwide. Having an epileptic seizure can have serious consequences for the patient. Surface electroencephalogram (EEG) is a non-intrusive method frequently used to detect epileptic brain action. Nevertheless, graphic inspection of the EEG is biased, time-taken, and tedious for the neurologist. This paper proposes an automatic epileptic seizure classification technique using transformer based deep learning. Fast Fourier Transform (FFT) is first used to extract features from EEG data, and features are then used as inputs to a classifier for selection and classification. The suggested technique successfully gave 100% accuracy in the tested EEG reading. The effectiveness of the proposed method could vary across different EEG databases.
Mine Blood Donors Information through Improved K-Means Clusteringijcsity
The number of accidents and health diseases which are increasing at an alarming rate are resulting in a huge increase in the demand for blood. There is a necessity for the organized analysis of the blood donor database or blood banks repositories. Clustering analysis is one of the data mining applications and K-means clustering algorithm is the fundamental algorithm for modern clustering techniques. K-means clustering algorithm is traditional approach and iterative algorithm. At every iteration, it attempts to find the distance from the centroid of each cluster to each and every data point. This paper gives the improvement to the original k-means algorithm by improving the initial centroids with distribution of data. Results and discussions show that improved K-means algorithm produces accurate clusters in less computation time to find the donors information
Mine Blood Donors Information through Improved K-Means Clustering
ssnow_manuscript_postreview
Spike Train Distance Analysis of Neuronal Responses from Prefrontal Cortex
Stephen C. Snow (a), Roger Herikstad (b), Aishwarya Parthasarathy (b), Camilo Libedinsky (b), and Shih-Cheng Yen (b,c)

(a) Department of Electrical and Computer Engineering, University of Pittsburgh, PA, USA
(b) Singapore Institute for Neurotechnology, National University of Singapore, Singapore
(c) Department of Electrical and Computer Engineering, National University of Singapore, Singapore
Abstract
The dorsolateral prefrontal cortex plays a key role in high-order processing and short-term storage of task-relevant information. We apply spike train similarity measures to data
recorded from multiple neurons in the prefrontal cortex of a monkey during a working memory
task. We compare the efficacy of these methods across a range of parameters to highlight
features of the encoding strategy carried out by the monkey during the task. Our findings help to
quantify the degree to which information is temporally encoded, while also shedding light on
how one might integrate information from populations of neurons to decode certain inputs.
Keywords: Spike train metrics, neural decoding, prefrontal cortex, working memory
1. Introduction
In order to coordinate complex actions, organisms rely on communication channels
throughout the nervous system. This involves lower and higher order processing, where lower
order serves the purpose of either directly receiving sensory input like light or sound, or
delivering motor output through the contraction or relaxation of muscle fibers. Higher order
processing would take signals from the lower order sensory processing, allow the organism to
decide what to do with this information, and then carry out that action.
Previous studies have linked the prefrontal cortex (PFC) to high-order processing, as its
functionality is vital for adequate performance in tasks where working memory is required or the
rules are dynamic [1]. The various regions of the PFC behave differently in working memory
tasks, and the dorsolateral prefrontal cortex (DLPFC), which we have chosen to analyze, has
been found to encode spatial task-related information through sustained delay activity [2].
Therefore, one should be able to extract spatial details during a task from the activity of
individual neurons in the DLPFC.
While it is generally accepted that neurons communicate through their action potentials,
or spikes, many groups have focused on studying how the brain reads these spike trains to
determine what information each neuron is conveying [3]. Certain neurons appear to exhibit rate
coding, where one can decode a stimulus based only on the firing rate of a neuron during a time
window [4]. In light of findings that demonstrate that the timing of as few as a single spike can
encode a significant portion of the information about a stimulus [5], naturally many have
developed and applied techniques that factor in this temporally encoded information [6,7,8].
One such method that quantifies the metric distance between spike trains has been
applied to neural data from several brain regions with promising success [9]. Others have
extended this method for improved functionality, namely for the purpose of analyzing
populations of simultaneously recorded neurons [10,11]. Our work applies these methods of
spike-train distance to data recorded from the DLPFC, which has not been previously seen in
the literature.
2. Methods
2.1. Experimental Setup
In the experimental setup, a macaque monkey sits in front of a screen, with its head
fixed. The screen displays a 3 x 3 grid, with the center square containing a fixation cross. Once
the monkey has established fixation on the center, as determined by an eye tracker, one of the
eight perimeter squares illuminates red for 300 ms (target). After a 1000 ms delay period, a
different perimeter square illuminates green for 300 ms (distractor). Then, after another 1000 ms
delay period the fixation cross disappears and the monkey must attempt to make a saccade
towards the location of the target square (Figure 1). If the monkey performs the task correctly, it
is given a juice reward. Otherwise, there is no juice reward. After an inter-trial interval, a new
trial with a separate target-distractor pair is performed. For each trial, neural activity recorded by implanted microelectrode arrays is amplified, digitized, and saved to a computer. The data are then high-pass filtered and sorted into spike trains using a hidden Markov model algorithm [12].
2.2. Algorithm Overview
In order to compare the spike trains across the 621 trials, we employ the metric spike
train distance algorithm developed by Victor and Purpura [6]. This method, based on the edit-distance algorithm [13], determines the distance between spike trains by finding the total cost,
which is a unitless measure of operations performed, to transform one train into another via
insertions, deletions, and shifts. Insertions and deletions are assigned a fixed cost of one, each,
whereas shift costs are determined by q|∆t|, where q is a sensitivity parameter with units of seconds⁻¹ and ∆t is the time change associated with the shift. As a consequence of these cost
values, two spikes are similar and can be transformed by a shift if |∆t| < 2/q, because in that
case q|∆t| would be less than the cost of two, associated with deleting the original spike and
inserting it at the new location (Figure 2).
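The cost structure above admits a simple dynamic-programming implementation. The sketch below is illustrative only (the name `vp_distance` is ours, not the authors' code); it computes the distance between two sorted lists of spike times:

```python
def vp_distance(t1, t2, q):
    """Victor-Purpura spike train distance via dynamic programming.

    t1, t2: sorted spike times (seconds); q: shift cost per second.
    Insertions and deletions cost 1 each; shifting a spike by dt costs q*|dt|.
    """
    n, m = len(t1), len(t2)
    # G[i][j] = distance between the first i spikes of t1 and first j of t2
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)          # delete all i spikes
    for j in range(1, m + 1):
        G[0][j] = float(j)          # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(
                G[i - 1][j] + 1,    # delete t1[i-1]
                G[i][j - 1] + 1,    # insert t2[j-1]
                G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]),  # shift
            )
    return G[n][m]
```

With q = 0 every shift is free and the recursion reduces to the difference in spike counts, matching the rate-code limit; two spikes are ever shifted (rather than deleted and reinserted) only when q|∆t| < 2, as described above.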
Optimally, given a range of stimuli, one would want to select a q value for which
distances between spike trains elicited by the same stimulus are low, while distances to other
stimuli are higher in comparison. At this optimal q value, the quantity 2/q, with units of seconds,
characterizes the time scale of the spike train producing neuron. At one extreme, with q = 0, the
timing of each spike is irrelevant and distances are determined by the difference in the number
of spikes between the two spike trains, making it a rate code. With q→ ∞, only spikes that occur
at the same instant are considered similar, so the metric becomes a coincidence detector. An
optimal q value falling between these two extremes has been used by some as a means of
describing the encoding nature of neurons within a brain region [6,10], but others have called
into question the accuracy by which this parameter describes the neural code [14]. This method
is applied for all of our single-cell evaluations. Aronov [15] provides an extension to the spike
train distance metric by allowing for spike trains from pairs of neurons to be analyzed. A new
parameter k is introduced, which represents the cost of reassigning the label of a spike during a
shift, i.e. transforming a spike from one neuron to a spike from a different neuron. Therefore, in
this metric, shifts now have a cost of q|∆t| + k when reassigning labels and q|∆t| otherwise.
When k = 0, the spike trains create a population code where the origin of each spike is ignored.
When k ≥ 2, the spike trains are interpreted as a labeled line, where the pairwise distance is
treated as the sum of the individual distances of each of the two neurons. When k falls between
those two values, the spectrum determines the relative importance of the origin of each spike. In
order to determine the optimal q and k values for each neuron/pair of neurons, one must find the
combination of these values that results in the highest degree of discrimination between stimuli.
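The two limiting cases of k can be sketched directly (a hedged illustration; helper names are ours, and the intermediate 0 < k < 2 regime requires Aronov's full dynamic program, which is omitted here):

```python
def vp_distance(t1, t2, q):
    """Single-train Victor-Purpura distance (compact DP helper)."""
    n, m = len(t1), len(t2)
    G = [[float(i + j) if i == 0 or j == 0 else 0.0
          for j in range(m + 1)] for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1,
                          G[i][j - 1] + 1,
                          G[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return G[n][m]

def population_distance(trains_a, trains_b, q):
    """k = 0 limit: labels ignored, so spikes are pooled into one
    'summed population' train per trial before comparison."""
    pool_a = sorted(t for train in trains_a for t in train)
    pool_b = sorted(t for train in trains_b for t in train)
    return vp_distance(pool_a, pool_b, q)

def labeled_line_distance(trains_a, trains_b, q):
    """k >= 2 limit: relabeling is never worthwhile, so the pairwise
    distance reduces to the sum of each neuron's individual distance."""
    return sum(vp_distance(a, b, q) for a, b in zip(trains_a, trains_b))

# Two trials from a pair of neurons: same pooled spike times, swapped labels.
a = [[0.1], [0.2]]
b = [[0.2], [0.1]]
print(population_distance(a, b, q=10.0))    # 0.0 (labels ignored)
print(labeled_line_distance(a, b, q=10.0))  # 2.0 (each spike shifted 0.1 s)
```

The example shows why k matters: with identical pooled spike times but swapped labels, the k = 0 distance is zero while the labeled-line distance is not.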
For evaluating the discriminative performance of each neuron/pair of neurons across a
range of parameters, two methods are used. The first, as used in Victor and Purpura [6] and
Aronov et al. [10], separates each spike train into one of eight groups (for eight different target
locations) and creates an 8 x 8 confusion matrix with N (number of trials) entries, where for each
spike train from a certain trial (corresponding row), we assign it to one of eight different groups
based on the group of spike trains to which its average distance is lowest (corresponding column).
Therefore, for each range of parameters (q, k, cells, time window, etc.), we can depict how
accurately this analysis technique categorized the spike trains by the degree to which the
confusion matrix is diagonal, and we can quantify this by an amount of information, in bits,
where 3 bits is maximal because log2[8 locations] = 3 bits.
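The information conveyed by such a confusion matrix can be computed as the mutual information between the true and assigned groups. The sketch below uses the naive plug-in estimator (function name is ours; finite-sample bias corrections discussed in the literature are omitted):

```python
import math

def confusion_information(confusion):
    """Plug-in estimate of transmitted information (bits) from an S x S
    confusion matrix of counts; log2(S) bits is the ceiling."""
    total = float(sum(sum(row) for row in confusion))
    col_sums = [sum(row[j] for row in confusion)
                for j in range(len(confusion[0]))]
    info = 0.0
    for row in confusion:
        p_r = sum(row) / total                    # P(true group)
        for j, count in enumerate(row):
            if count == 0:
                continue
            p_rc = count / total                  # P(true, assigned)
            p_c = col_sums[j] / total             # P(assigned group)
            info += p_rc * math.log2(p_rc / (p_r * p_c))
    return info

# Perfect classification of 8 equally sampled locations -> log2(8) = 3 bits
perfect = [[10 if i == j else 0 for j in range(8)] for i in range(8)]
print(confusion_information(perfect))  # 3.0
```

A perfectly diagonal matrix yields the maximal 3 bits, while a uniform (chance-level) matrix yields 0 bits.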
The second method opts for a qualitative rather than a quantitative approach. Drawing
from other work in visualizing spike train distances, we produce a low dimensional, graphical
representation of higher dimensional data [11]. This is achieved by applying the t-distributed
stochastic neighbor embedding (t-SNE) algorithm [16] to matrices containing spike train
distances. Using either single-unit or multi-unit spike train distance data, an N-dimensional
matrix of each trial-to-trial distance is input into the t-SNE algorithm, which returns a 2-dimensional plot, effectively preserving relative spacing between each trial data point.
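Assuming scikit-learn is available, the embedding step can be sketched as follows; the distance matrix here is a synthetic stand-in with the same structure as a trial-to-trial spike train distance matrix (same-location trials made closer than different-location trials):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_trials, n_locations = 40, 8
labels = np.repeat(np.arange(n_locations), n_trials // n_locations)

# Stand-in trial-to-trial distance matrix: shrink within-location distances.
D = rng.uniform(1.0, 2.0, size=(n_trials, n_trials))
D[labels[:, None] == labels[None, :]] *= 0.2
D = (D + D.T) / 2           # t-SNE expects a symmetric matrix
np.fill_diagonal(D, 0.0)    # with zero self-distances

# metric="precomputed" treats D directly as pairwise distances;
# init must be "random" in that mode, and perplexity must be < n_trials.
embedding = TSNE(n_components=2, metric="precomputed", init="random",
                 perplexity=5, random_state=0).fit_transform(D)
print(embedding.shape)  # (40, 2)
```

Coloring the 2-D points by target location then gives plots of the kind shown in Figure 6.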
3. Results
3.1. Optimizing Parameters
Given the two-parameter nature of the multi-unit metric space analysis approach, first we
seek to find general trends in the optimal parameters. Values of q were chosen from a range
found to produce optimal information using the single unit method for the recorded neurons. k
values were varied from 0 to 2. Using intermediate k values (0 < k < 2) requires the use of more
computationally expensive algorithms than the single unit method, which is O(N^2), where N is the number of spikes per train. The multi-unit algorithm has a complexity of O(N^(L+1)), where L is the number of labels, or neurons per train. This increased complexity for even the bivariate case
(two neurons) restricts us to only analyzing pairwise information for 10 of the 58 neurons
sampled during the target time window. These 10 neurons were shown to encode spatial
information during the target window using other analytic methods.
Figure 3 contains the redundancy index of neuron pairs [17] for the different k values (at
optimal q value) across four neurons that showed considerably higher degrees of encoding than
the other six neurons in this method (although similar results were found with more error when
all ten neurons were included). Redundancy index is a measure of the degree to which two
neurons encode information jointly. For an index of 0, joint information is completely
independent. For a value of 1, information is completely redundant. For values between 0 and 1,
information is partially redundant. And for values greater than 1, information is contradictory.
Seeing as redundancy gradually drops for higher k values, and a k value of 2 is preferable due
to computational constraints, this value was chosen for subsequent multi-unit analysis.
3.2. Information Across Time Intervals
We next looked to see how both information in the system and the effectiveness of the
method evolved over the duration of the experiment. To do this, we first analyzed all 58 cells
and found which ones encoded at least 0.1 bits of information for some 300 ms time window
(fixation, target, early delay 1, late delay 1, distractor, early delay 2, late delay 2, saccade) using
the single unit method for a wide range of q values. Next, we computed the pairwise information
values, with k = 2, for these encoding cells at each time window. Finally, we computed an
ensemble information by comparing each of these encoding cells’ spike trains at once,
essentially taking an aggregate sum of each cell’s distance matrix and calculating the information from those distances. For comparison, we included the information found using a
rate code for both best pairwise and ensemble coding over time, as well as the best single-unit
information (Figure 4). The pair of neurons that encode the most pairwise information at each
time window changes over time, as shown in Figure 5, where the four different high-information
pairs exhibit peaks at different times.
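The ensemble aggregation described above amounts to an elementwise sum of per-cell trial-to-trial distance matrices (a minimal sketch; the function name is ours):

```python
def ensemble_distances(per_cell_matrices):
    """Elementwise sum of each cell's N x N trial-to-trial distance
    matrix, giving one aggregate distance matrix for the ensemble."""
    n = len(per_cell_matrices[0])
    return [[sum(m[i][j] for m in per_cell_matrices) for j in range(n)]
            for i in range(n)]

# Two cells, two trials each: distances simply add elementwise.
cell1 = [[0.0, 1.0], [1.0, 0.0]]
cell2 = [[0.0, 0.5], [0.5, 0.0]]
print(ensemble_distances([cell1, cell2]))  # [[0.0, 1.5], [1.5, 0.0]]
```

The aggregate matrix can then be fed to the same confusion-matrix classifier used for single units.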
3.3. t-SNE Visualization
As described in another study [14], the method above of quantifying information is very
sensitive to the classifier used, so to construct a better visualization of the clustering
performance across different groups of cells, we apply the t-SNE algorithm to constructed trial-
to-trial distance matrices (Figure 6). Ideally, as the distances between more neurons are
included in the distance matrix through multi-unit analysis, the locations should separate further.
The visualization for the best single cell was not included because of uncharacteristic t-SNE
output. Additionally, we compare the Aronov method’s results to both a rate code and to a
different multivariate application of the Victor distance, which applies the t-SNE algorithm to an
N x MN distance matrix, where N is the number of trials and M is the number of neurons [11].
4. Discussion
4.1. Optimal Parameters
We have applied spike train distance metrics, which are more sensitive to individual spike times than binned methods, to neural data recorded from the DLPFC. To take full
advantage of the many cells that we were able to record simultaneously during our experiment,
we used multi-unit extensions of the single unit algorithm. For the cells that encoded target
location, greater information was obtained through pairwise or ensemble analysis, with the least
redundancy when each spike train was considered separately (i.e. the k parameter value of 2).
4.2. Information Across Time Intervals
Although for the target period there was a considerable difference in the information
encoded using the time sensitive metric over a rate code, for the subsequent time periods this
difference essentially vanished. One of the reasons for this could be because one cell, g40c01,
encoded a high amount of information (0.4 bits) during the target period and benefitted the most
from a higher temporal precision (approximately 8 ms, an order of magnitude lower than the
majority of other cells analyzed), therefore making the rate code much less effective in this time
window. However, most other cells were not this time-sensitive and for that reason, rate coding
performed as well if not better across the rest of the experiment.
4.3. t-SNE Visualization
The t-SNE visualizations do indicate that the spike train distance metric lends itself well
to our experiment, as in the late delay 1 period, the 8 locations form a circle that resembles the
shape of the 3 x 3 grid. It is unclear whether the failure to produce clear clusters is due to the
particular method or the fact that only a handful of recorded cells contributed significantly
(greater than 0.1 bits above shuffled information) to location discrimination. Previous
experiments concerning spatial working memory seem to indicate that more of our cells should
have contributed to the information through receptive fields [2].
5. Conclusions
From our results, spike train similarity measures appear to have strong potential for
expanding our understanding of the DLPFC, but that potential largely depends on the metric
used. Already, many other multi-unit extensions have been developed [18,19,20] that could be
applied in future studies. Although few conclusions can be made about the spike resolution of
spatial working memory encoding neurons in the DLPFC, the t-SNE visualizations clearly
demonstrate that target location can be decoded using spike train distance metrics. Further
research could be focused on analyzing how location encoding performance is affected by
variations in experimental setup, such as the inclusion of distractors during the trial sequence.
Acknowledgements
This work was funded by the Swanson School of Engineering and the Office of the Provost.
References
[1] E.K. Miller, J.D. Cohen, An integrative theory of prefrontal cortex function, Annual Review of
Neuroscience, 24 (2001) 167–202.
[2] G. Rainer, W.F. Asaad, E.K. Miller, Memory fields of neurons in the primate prefrontal cortex, Proceedings of the National Academy of Sciences of the United States of America, 95
(1998) 15008–15013.
[3] F. Rieke, D. Warland, R. De Ruyter Van Steveninck, W. Bialek, Spikes: Exploring the Neural
Code, Computational Neuroscience 20 (1997).
[4] E.D. Adrian, The impulses produced by sensory nerve-endings, The Journal of Physiology,
62 (1926) 33–51.
[5] W. Bialek, F. Rieke, R. De Ruyter van Steveninck, D. Warland, Reading a neural code,
Science 252 (1991) 1854–1857.
[6] J.D. Victor, K.P. Purpura, Metric-space analysis of spike trains: theory, algorithms and
application, Network 8 (1997) 127–164.
[7] M.C. van Rossum, A novel spike distance, Neural Computation, 13 (2001) 751–763.
[8] S. Panzeri, R.S. Petersen, S.R. Schultz, M. Lebedev, M.E. Diamond, The role of spike timing
in the coding of stimulus location in rat somatosensory cortex, Neuron, 29 (2001) 769–777.
[9] J.D. Victor, Spike train metrics, Current Opinion in Neurobiology, 15 (2005) 585–592.
[10] D. Aronov, D.S. Reich, F. Mechler, J.D. Victor, Neural coding of spatial phase in V1 of the
macaque monkey, Journal of Neurophysiology, 89 (2003a) 3304–3327.
[11] C.E. Vargas-Irwin, D.M. Brandman, J.B. Zimmermann, J.P. Donoghue, M.J. Black, Spike
Train SIMilarity Space (SSIMS): A Framework for Single Neuron and Ensemble Data Analysis,
Neural Computation, 27 (2015) 1–31.
[12] J.A. Herbst, S. Gammeter, D. Ferrero, R.H. Hahnloser, Spike sorting with hidden Markov
models, Journal of Neuroscience Methods, 174 (2008) 123-134.
[13] P.H. Sellers, On the Theory and Computation of Evolutionary Distances, SIAM Journal on
Applied Mathematics, 26 (1974) 787–793.
[14] D. Chicharro, T. Kreuz, R.G. Andrzejak, What can spike train distances tell us about the
neural code?, Journal of Neuroscience Methods, 199 (2011) 146–165.
[15] D. Aronov, Fast algorithm for the metric-space analysis of simultaneous responses of
multiple single neurons, Journal of Neuroscience Methods, 124 (2003b) 175–179.
[16] L.J.P. van der Maaten, G.E. Hinton, Visualizing high-dimensional data using t-SNE, Journal
of Machine Learning Research, 9 (2008) 2579–2605.
[17] D.S. Reich, F. Mechler, J.D. Victor, Independent and redundant information in nearby
cortical neurons, Science, 294 (2001) 2566–2568.
[18] D.M. Diez, F.P. Schoenberg, C.D. Woody, Algorithms for computing spike time distance
and point process prototypes with application to feline neuronal responses to acoustic stimuli,
Journal of Neuroscience Methods, 203 (2012) 186–192.
[19] C. Houghton, K. Sen, A new multineuron spike train metric, Neural Computation, 20 (2008)
1495–1511.
[20] T. Kreuz, D. Chicharro, C. Houghton, R.G. Andrzejak, F. Mormann, Monitoring spike train
synchrony, Journal of Neurophysiology, 109 (2013) 1457–1472.
Figure 1. Trial sequence. Once the monkey has fixated on the center of the screen, a target square is
illuminated, followed by a distractor square. After the second delay period, the monkey may make a
saccade, with a saccade to the target square resulting in a reward.
Figure 2. Example of spike train transformation. In the algorithm, the distance between two spike trains is determined by the cost of editing the individual spikes in the first train to match the other train through insertions (row 1), deletions (row 4), and shifts (rows 2,3).
Figure 3. Redundancy index of encoding neurons during target period. Redundancy index indicates the degree to which neurons jointly encode information (higher values are more redundant). Error bars indicate SEM.
Figure 4. Information encoded across trial duration. HPair indicates information found using the multi-unit method with two neurons at optimal parameters. HSingle is information from the single-unit method. All values are from the maximum single neurons/pairs at each respective time period. HEnsemble is information from the multi-unit method with all encoding cells. Rate indicates a q value of 0.
Figure 5. Information across time for select cells. Different cells appear to provide peak information at
different time intervals. F-fixation, T-target, ED-early delay, D-distractor, LD-late delay, S-saccade.
Figure 6. t-SNE visualization for late delay 1. Top row, from left to right: best 2 cells, best 5
cells, best 12 cells. Bottom Left: Rate code 12 cells. Bottom Right: SSIMS (N x MN distance
matrix) 12 cells. Colors on bottom middle grid show each location’s corresponding color.