Spike sorting is of prime importance in neurophysiology and has therefore received considerable attention. However, conventional methods suffer from degraded clustering results in the presence of high levels of noise contamination. This paper presents a scheme that takes advantage of automatic clustering and enhances feature extraction efficiency, especially for low-SNR spike data. The method employs linear discriminant analysis based on a fuzzy c-means (FCM) algorithm. Simulated spike data [1] were used as the test bed because of the better a priori knowledge of the spike signals they afford. Application to both high and low signal-to-noise ratio (SNR) data showed that the proposed method outperforms conventional principal component analysis (PCA) and the FCM algorithm; FCM failed to cluster spikes in low-SNR data. For two discriminative performance indices based on Fisher's discriminant criterion, the proposed approach achieved more than 1.36 times the between- to within-class variation ratio of PCA for spike data with SNRs ranging from 1.5 to 4.5 dB. In conclusion, the proposed scheme is unsupervised and can enhance the performance of fuzzy c-means clustering in spike sorting of low-SNR data.
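As a rough illustration of the discriminative index mentioned above, the ratio of between- to within-class variation (Fisher's criterion) can be sketched as follows; the function and toy data are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

def fisher_ratio(features, labels):
    """Ratio of between-class to within-class variation (Fisher-style
    criterion), computed per feature dimension and summed."""
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        cls = features[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return between / within

# Two well-separated 2-D clusters give a large ratio.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, size=(50, 2))
b = rng.normal(5.0, 0.1, size=(50, 2))
X = np.vstack([a, b])
y = np.array([0] * 50 + [1] * 50)
print(fisher_ratio(X, y))
```

A higher ratio means the feature space separates the spike classes more cleanly, which is how the abstract's "1.36 times" comparison against PCA is to be read.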
Problem Decomposition and Information Minimization for the Global, Concurrent...
This piece of research introduces a purely data-driven, directly reconfigurable, divide-and-conquer on-line
monitoring (OLM) methodology for automatically selecting the minimum number of neutron detectors
(NDs) – and corresponding neutron noise signals (NSs) – which are currently necessary, as well as
sufficient, for inspecting the entire nuclear reactor (NR) in-core area. The proposed implementation builds
upon the 3-tuple configuration, according to which three sufficiently pairwise-correlated NSs are capable
of on-line (I) verifying each NS of the 3-tuple and (II) endorsing correct functioning of each corresponding
ND, implemented herein via straightforward pairwise comparisons of fixed-length sliding time-windows
(STWs) between the three NSs of the 3-tuple. A pressurized water NR (PWR) model – developed for H2020
CORTEX – is used for deriving the optimal ND/NS configuration, where (i) the evident partitioning of the
36 NDs/NSs into six clusters of six NDs/NSs each, and (ii) the high cross-correlations (CCs) within every
3-tuple of NSs, endorse the use of a constant pair comprising the two most highly CC-ed NSs per cluster as
the first two members of the 3-tuple, with the third member being each remaining NS of the cluster, in turn,
thereby computationally streamlining OLM without compromising the identification of either deviating NSs
or malfunctioning NDs. Tests on the in-core dataset of the PWR model demonstrate the potential of the
proposed methodology in terms of suitability for, efficiency at, as well as robustness in ND/NS selection,
further establishing the “directly reconfigurable” property of the proposed approach at every point in time
while using only one-third of the original NDs/NSs.
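A minimal sketch of the kind of pairwise sliding-window comparison the 3-tuple configuration relies on might look as follows; the window length, correlation threshold, and synthetic signals are illustrative assumptions, not the CORTEX model's values:

```python
import numpy as np

def pairwise_ok(s1, s2, s3, win=100, thresh=0.8):
    """Check that every pair in the 3-tuple stays highly correlated
    within each fixed-length sliding time-window (STW)."""
    for start in range(0, len(s1) - win + 1, win):
        w = slice(start, start + win)
        for a, b in ((s1, s2), (s1, s3), (s2, s3)):
            r = np.corrcoef(a[w], b[w])[0, 1]
            if r < thresh:
                return False  # deviating signal or malfunctioning detector
    return True

# Three noisy copies of the same underlying oscillation stay pairwise-correlated.
t = np.linspace(0, 10, 1000)
base = np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
s1 = base + 0.05 * rng.normal(size=t.size)
s2 = base + 0.05 * rng.normal(size=t.size)
s3 = base + 0.05 * rng.normal(size=t.size)
print(pairwise_ok(s1, s2, s3))
```

In the methodology above, the first two members of each 3-tuple are a fixed highly-correlated pair, and the third slot cycles through the remaining signals of the cluster.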
International Journal of Engineering Research and Development
Electrical, Electronics and Computer Engineering,
Information Engineering and Technology,
Mechanical, Industrial and Manufacturing Engineering,
Automation and Mechatronics Engineering,
Material and Chemical Engineering,
Civil and Architecture Engineering,
Biotechnology and Bio Engineering,
Environmental Engineering,
Petroleum and Mining Engineering,
Marine and Agriculture Engineering,
Aerospace Engineering.
A Review on Classification Based Approaches for Steganalysis Detection
This paper presents two scenarios of image steganalysis. In the first scenario, an alternative feature set for steganalysis is derived from the rate-distortion characteristics of images. The features are based on two key observations: (i) data embedding typically increases the image entropy in order to encode the hidden messages; (ii) data embedding methods are limited to a set of small, imperceptible distortions. The proposed feature set is used as the basis of a steganalysis algorithm, and its performance is investigated using different data-hiding methods. In the second scenario, a new blind approach to image steganalysis based on the contourlet transform and a nonlinear support vector machine is presented. Properties of the contourlet transform are used to extract image features; an important aspect of this work is that it uses a minimum number of features in the transform domain while achieving better accuracy than many existing steganalysis methods. The efficiency of the proposed method is demonstrated through experimental results, and its performance is compared with the contourlet-based steganalyzer (WBS). The results show that the proposed method is very efficient in terms of detection accuracy and computational cost.
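The first key observation, that embedding tends to raise image entropy, can be illustrated with a simple grey-level histogram entropy; this is a generic sketch, not the paper's rate-distortion feature set:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the 8-bit grey-level histogram (bits/pixel)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
flat = np.full((64, 64), 128, dtype=np.uint8)           # uniform cover image
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # random "payload-like" content
print(image_entropy(flat), image_entropy(noisy))
```

A perfectly flat image has zero entropy, while random content approaches the 8 bits/pixel maximum; an embedding step pushes the statistics toward the latter.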
A Combined Approach for Feature Subset Selection and Size Reduction for High ...
Selection of relevant features from a given feature set is one of the important issues in data mining and classification. In general, a dataset may contain many features, but not all of them are important for a particular analysis or decision-making task, because features may share common information or be completely irrelevant to the processing at hand. This usually happens because of improper feature selection during dataset formation or incomplete information about the observed system. In both cases, the data contain features that merely increase the processing burden and may ultimately distort the outcome of the analysis. Methods are therefore required to detect and remove such features. In this paper we present an efficient approach that not only removes unimportant features but also reduces the overall dataset size. The proposed algorithm uses information theory to compute the information gain of each feature and a minimum spanning tree to group similar features; fuzzy c-means clustering is then used to remove similar entries from the dataset. Finally, the algorithm is tested with an SVM classifier on 35 publicly available real-world high-dimensional datasets, and the results show that it not only reduces the feature set and data length but also improves classifier performance.
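The information-gain step of the proposed algorithm can be sketched as follows for discrete features; the toy labels and features are illustrative only:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector (bits)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature, labels):
    """Entropy of the class labels minus the conditional entropy given a
    discrete feature: high gain marks a relevant feature."""
    h = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        h -= mask.mean() * entropy(labels[mask])
    return h

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
relevant = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # mirrors the class exactly
irrelevant = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # independent of the class
print(information_gain(relevant, y), information_gain(irrelevant, y))
```

Features with near-zero gain are candidates for removal, and similar-gain features can then be grouped via the minimum spanning tree mentioned above.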
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Multi fractal analysis of human brain MR image
Abstract: In computer programming, code smells may be the origin of latent problems in source code. Detecting and resolving bad smells remains time-intensive for software engineers despite existing proposals for bad-smell detection and refactoring tools. Numerous code smells have been recognized, yet the sequence in which different kinds of code smells should be detected and resolved remains an open issue, because software engineers do not know how to optimize that sequence. In this paper, a novel refactoring approach is proposed to improve the performance of programs. In the recommended approach, code smells are automatically detected and refactored. The simulation results show that a reduction in time over semi-automated refactoring is achieved when code smells are refactored using multi-step automated refactoring. Keywords: code smell, multi-step refactoring, detection, code resolution, restructuring.
New Data Association Technique for Target Tracking in Dense Clutter Environme...
Improving the data association process by increasing the probability of detecting valid data points (measurements obtained from a radar/sonar system) in the presence of noise for target tracking is discussed in this manuscript. We develop a novel gate-filtering algorithm for target tracking in dense clutter environments. The algorithm is less sensitive to false alarms (clutter) within the gate than conventional approaches such as the probabilistic data association filter (PDAF), whose data association begins to fail as the false alarm rate increases or the probability of target detection drops. The new filtered-gate selection method combines a conventional threshold-based algorithm with a geometric distance measure drawn from adaptive clutter-suppression methods. An adaptive search based on a distance threshold is then used to detect valid filtered data points for target tracking. Simulation results demonstrate the effectiveness and improved performance compared with the conventional algorithm.
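A conventional threshold-based validation gate of the kind the proposed filtered-gate method builds upon can be sketched as follows; the chi-square threshold and the measurements are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gate(measurements, predicted, innov_cov, gamma=9.21):
    """Keep only measurements whose Mahalanobis distance to the predicted
    position falls inside the validation gate.
    gamma = chi-square threshold (99% for 2 degrees of freedom ~ 9.21)."""
    inv = np.linalg.inv(innov_cov)
    kept = []
    for z in measurements:
        d = z - predicted
        if float(d @ inv @ d) <= gamma:
            kept.append(z)
    return np.array(kept)

pred = np.array([0.0, 0.0])                       # predicted target position
S = np.eye(2)                                     # innovation covariance
zs = np.array([[0.5, 0.5], [10.0, 10.0], [-1.0, 2.0]])
print(len(gate(zs, pred, S)))
```

Clutter points far from the prediction fall outside the gate and are discarded before data association; the abstract's contribution is a filtering refinement on top of exactly this kind of gate.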
APPLYING DYNAMIC MODEL FOR MULTIPLE MANOEUVRING TARGET TRACKING USING PARTICL...
In this paper, we apply a dynamic model for manoeuvring targets within the SIR particle filter algorithm to improve the tracking accuracy of multiple manoeuvring targets. In our approach, a color distribution model is used to detect changes in the target's model, and the deformation of the model is monitored: if the deformation exceeds a predetermined threshold, the model is updated. The Global Nearest Neighbor (GNN) algorithm is used for data association. We name the proposed method the Deformation Detection Particle Filter (DDPF). DDPF is compared with the basic SIR-PF algorithm on real airshow videos. The comparisons show that the basic SIR-PF algorithm cannot track manoeuvring targets when rotation or scaling occurs in the target's model, whereas DDPF updates the model in those cases and is therefore able to track manoeuvring targets more efficiently and accurately.
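The threshold-triggered model update in DDPF can be sketched with a histogram-distance test. The Bhattacharyya distance and the threshold value below are assumptions chosen for illustration, since the abstract does not specify the exact deformation measure:

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Distance between two normalized color histograms; 0 = identical."""
    return float(np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(h1 * h2)))))

def maybe_update(model_hist, current_hist, thresh=0.3):
    """Update the target model only when its deformation (histogram
    distance) exceeds the threshold; otherwise keep the old model."""
    if bhattacharyya(model_hist, current_hist) > thresh:
        return current_hist, True
    return model_hist, False

model = np.array([0.5, 0.5, 0.0, 0.0])    # reference color distribution
similar = np.array([0.45, 0.55, 0.0, 0.0])  # small appearance change
rotated = np.array([0.0, 0.0, 0.5, 0.5])    # large change, e.g. rotation/scaling
print(maybe_update(model, similar)[1], maybe_update(model, rotated)[1])
```

Small appearance changes leave the model untouched, while a rotation- or scaling-induced change triggers the update, which is the behaviour the comparison with basic SIR-PF highlights.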
Maximum Correntropy Based Dictionary Learning Framework for Physical Activity...
Due to its symbolic role in ubiquitous health monitoring,
physical activity recognition with wearable body sensors has been in the
limelight in both research and industrial communities. Physical activity
recognition is difficult due to the inherent complexity involved with different
walking styles and human body movements. Thus we present a
correntropy induced dictionary pair learning framework to achieve this
recognition. Our algorithm for this framework jointly learns a synthesis
dictionary and an analysis dictionary in order to simultaneously perform
signal representation and classification once the time-domain features
have been extracted. In particular, the dictionary pair learning algorithm
is developed based on the maximum correntropy criterion, which
is much less sensitive to outliers. In order to develop a more tractable
and practical approach, we employ a combination of alternating direction
method of multipliers and an iteratively reweighted method to approximately
minimize the objective function. We validate the effectiveness of
our proposed model by employing it on an activity recognition problem
and an intensity estimation problem, both of which include a large number
of physical activities from the recently released PAMAP2 dataset.
Experimental results indicate that classifiers built using this correntropy
induced dictionary learning based framework achieve high accuracy by
using simple features, and that this approach gives results competitive
with classical systems built upon features with prior knowledge.
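The robustness motivation behind the maximum correntropy criterion can be illustrated directly: a single gross outlier barely changes the correntropy of an error vector, while it dominates the mean squared error. The kernel width and data below are illustrative choices:

```python
import numpy as np

def correntropy(e, sigma=1.0):
    """Average Gaussian-kernel similarity of the error vector; large
    outliers barely lower it, unlike MSE which they dominate."""
    return float(np.mean(np.exp(-e ** 2 / (2 * sigma ** 2))))

clean = np.zeros(100)
dirty = clean.copy()
dirty[0] = 1000.0  # a single gross outlier
mse_dirty = float(np.mean(dirty ** 2))
print(correntropy(clean), correntropy(dirty), mse_dirty)
```

The correntropy drops by only 1/n per outlier (each kernel term is bounded by 1), which is why a dictionary pair learned under this criterion degrades gracefully on contaminated sensor data.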
Geometric Correction for Braille Document Images
Image processing is an important research area in computer vision. Clustering is an unsupervised learning technique that can also be used for image segmentation, for which many methods exist. Image segmentation plays an important role in image analysis: it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm for image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means (FCM) clustering algorithm; KFCM provides image clustering and improves accuracy significantly compared with classical FCM. The new algorithm, called the Gaussian-kernel-based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy clustering approach that aims to guarantee noise insensitivity and preservation of image detail. The objective of this work is to cluster the low-intensity inhomogeneity areas of noisy images using the clustering method and to segment those portions separately using a content-based level set approach. The purpose of this system is to produce better segmentation results for images corrupted by noise, so that it can be useful in fields such as medical image analysis, e.g. tumor detection, study of anatomical structure, and treatment planning.
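A minimal kernel fuzzy c-means iteration of the kind KFCM/GKFCM builds on might look as follows for 1-D data; the initialization, kernel width, and fuzzifier are illustrative choices, not the system's settings:

```python
import numpy as np

def kfcm(x, c=2, m=2.0, sigma=2.0, iters=50):
    """A minimal kernel fuzzy c-means on 1-D data: squared distances are
    measured through a Gaussian kernel, d^2 = 2 * (1 - K(x, v))."""
    v = np.quantile(x, np.linspace(0.0, 1.0, c))  # spread initial prototypes
    for _ in range(iters):
        K = np.exp(-((x[None, :] - v[:, None]) ** 2) / (sigma ** 2))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)   # kernel-induced distance
        u = 1.0 / np.sum((d2[:, None, :] / d2[None, :, :]) ** (1.0 / (m - 1)),
                         axis=1)                  # fuzzy memberships
        w = (u ** m) * K
        v = np.sum(w * x[None, :], axis=1) / np.sum(w, axis=1)  # prototype update
    return np.sort(v)

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 0.2, 50), rng.normal(10.0, 0.2, 50)])
centers = kfcm(x)
print(centers)
```

The kernel distance makes memberships far less sensitive to outlying intensities than the Euclidean distance of classical FCM, which is the noise-insensitivity property GKFCM aims for.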
Further results on the joint time delay and frequency estimation without eige...
The Joint Time Delay and Frequency Estimation (JTDFE) problem for complex sinusoidal signals received at two separated sensors is an attractive problem that has been considered for several engineering applications. In this paper, a high-resolution null (noise) subspace method without eigenvalue decomposition is proposed. The direct data matrix is replaced by an upper triangular matrix obtained from Rank-Revealing LU (RRLU) factorization. RRLU provides accurate information about the rank and the numerical null space, which makes it a valuable tool in numerical linear algebra. The proposed method decreases the computational complexity of JTDFE by approximately half compared with RRQR methods. It generates estimates of the unknown parameters based on the observation and/or covariance matrices, leading to a significant improvement in computational load. Computer simulations are included in this paper to demonstrate the proposed method.
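The rank-revealing idea behind RRLU can be sketched with ordinary Gaussian elimination with partial pivoting, estimating the numerical rank from the pivot magnitudes; this is a simplified stand-in for a full rank-revealing LU, not the paper's factorization:

```python
import numpy as np

def lu_pivot_diag(A):
    """Diagonal of U from Gaussian elimination with partial pivoting."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))  # choose the pivot row
        U[[k, p]] = U[[p, k]]
        piv = U[k, k]
        if piv != 0.0:
            for i in range(k + 1, n):
                U[i, k:] -= (U[i, k] / piv) * U[k, k:]
    return np.abs(np.diag(U))

def rrlu_rank(A, tol=1e-10):
    """Numerical rank estimated from the pivot magnitudes, in the spirit
    of a rank-revealing LU factorization."""
    d = lu_pivot_diag(A)
    return int(np.sum(d > tol * d.max()))

a, b = np.arange(4.0), np.ones(4)
A = np.outer(a, a) + np.outer(b, b)  # rank-2 matrix by construction
print(rrlu_rank(A), rrlu_rank(np.eye(4)))
```

Tiny trailing pivots reveal the numerical null space, which is the information the JTDFE method extracts without resorting to an eigenvalue decomposition.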
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
DETECTION OF HUMAN BLADDER CANCER CELLS USING IMAGE PROCESSING
Bladder cancer presents a spectrum of different diatheses, and a precise assessment for individualized treatment depends on the accuracy of the initial diagnosis. The performance of level set segmentation is subject to appropriate initialization and optimal configuration of controlling parameters, which normally requires substantial manual intervention. A new fuzzy level set algorithm is proposed in this paper to facilitate medical image segmentation. It is able to evolve directly from an initial segmentation obtained by spatial fuzzy clustering. Spatially induced fuzzy c-means with pixel classification and level set methods employing dynamic variational boundaries are utilized for image segmentation, and the controlling parameters of the level set evolution are also estimated from the clustering results. The fuzzy level set algorithm is further enhanced with locally regularized evolution; such improvements facilitate level set manipulation and lead to more robust segmentation. Performance evaluation of the proposed algorithm was carried out on medical images.
A Threshold Fuzzy Entropy Based Feature Selection: Comparative Study
Feature selection is one of the most common and critical tasks in database classification. It reduces computational cost by removing insignificant and unwanted features, and consequently makes the diagnosis process more accurate and comprehensible. This paper presents a measurement of feature relevance based on fuzzy entropy, tested with a Radial Basis Function (RBF) network, Bagging (Bootstrap Aggregating), Boosting, and stacking on datasets from various fields. Twenty benchmark datasets available in the UCI Machine Learning Repository and KDD were used for this work. The accuracy obtained from these classification processes shows that the proposed method is capable of producing good and accurate results with fewer features than the original datasets.
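One common fuzzy-entropy relevance measure, the De Luca–Termini entropy of a membership vector, can be sketched as follows: crisp memberships (a feature that separates classes well) give low entropy, while ambiguous memberships give maximal entropy. The membership values below are illustrative, not taken from the paper:

```python
import numpy as np

def fuzzy_entropy(mu, eps=1e-12):
    """De Luca-Termini fuzzy entropy of a membership vector (bits):
    0 for crisp memberships, maximal (1.0) when all memberships are 0.5."""
    mu = np.clip(mu, eps, 1 - eps)
    return float(-np.mean(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu)))

crisp = np.array([0.01, 0.99, 0.02, 0.98])  # feature separates the classes well
ambiguous = np.array([0.5, 0.5, 0.5, 0.5])  # feature carries no class information
print(fuzzy_entropy(crisp), fuzzy_entropy(ambiguous))
```

Ranking features by this entropy and cutting at a threshold yields the reduced subset that is then fed to the RBF, Bagging, Boosting, and stacking classifiers.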
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch...
We consider an experimental design problem in which there are two Poisson sources with two possible, known rates, and one counter. Through a switch, the counter can observe the sources individually, or the counts can be combined so that the counter observes their sum. The sensor scheduling problem is to determine the optimal proportion of the available time to allocate to individual versus joint sensing, under a total time constraint. Two different metrics are used for optimization: the mutual information between the sources and the observed counts, and the probability of detection for the associated source detection problem. Our results, which are primarily computational, indicate similar but not identical behaviour under the two cost functions.
Isabelle Guyon, President, ChaLearn at MLconf SF - 11/13/15
Network Reconstruction: The Contribution of Challenges in Machine Learning. Networks of influence are found at all levels of physical, biological, and societal systems: climate networks, gene networks, neural networks, and social networks are a few examples. These networks are not just descriptive of the "State of Nature"; they allow us to make predictions such as forecasting disruptive weather patterns, evaluating the possible effect of a drug, locating the focus of a neural seizure, and predicting the propagation of epidemics. This, in turn, allows us to devise adequate interventions or changes in policy to obtain desired outcomes: evacuate people before a region is hit by a hurricane, administer treatment, vaccinate, etc. But knowing the network structure is a prerequisite, and this structure may be very hard and costly to obtain by traditional means. For example, the medical community relies on clinical trials, which cost millions of dollars, and the neuroscience community engages in connection tracing with electron microscopy, which takes years to establish the connectivity of just 100 neurons (the brain contains billions).
This presentation will review recent progress in network reconstruction methods based solely on observational data. Great advances have recently been made using machine learning. We will analyze the results of several challenges we organized, which point us to new, simple, and practical methodologies.
Feature selection is one of the most common and critical tasks in database classification. It
reduces the computational cost by removing insignificant and unwanted features. Consequently, this
makes the diagnosis process accurate and comprehensible. This paper presents the measurement of
feature relevance based on fuzzy entropy, tested with Radial Basis Classifier (RBF) network,
Bagging(Bootstrap Aggregating), Boosting and stacking for various fields of datasets. Twenty
benchmarked datasets which are available in UCI Machine Learning Repository and KDD have been
used for this work. The accuracy obtained from these classification process shows that the proposed
method is capable of producing good and accurate results with fewer features than the original
datasets.
Sensing Method for Two-Target Detection in Time-Constrained Vector Poisson Ch...sipij
It is an experimental design problem in which there are two Poisson sources with two possible and known rates, and one counter. Through a switch, the counter can observe the sources individually or the counts can be combined so that the counter observes the sum of the two. The sensor scheduling problem is to determine an optimal proportion of the available time to be allocated toward individual and joint sensing, under a total time
constraint. Two different metrics are used for optimization: mutual information between the sources and the observed counts, and probability of detection for the associated source detection problem. Our results, which are primarily computational, indicate similar but not identical results under the two cost functions.
Isabelle Guyon, President, ChaLearn at MLconf SF - 11/13/15MLconf
Network Reconstruction: The Contribution of Challenges in Machine Learning: Networks of influence are found at all levels of physical, biological, and societal systems: climate networks, gene networks, neural networks, and social networks are a few examples. These networks are not just descriptive of the “State of Nature”, they allow us to make predictions such as forecasting disruptive weather patterns, evaluating the possible effect of a drug, locating the focus of a neural seizure, and predicting the propagation of epidemics. This, in turns, allows us to device adequate interventions or change in policies to obtain desired outcomes: evacuate people before a region is hit by a hurricane, administer treatment, vaccinate, etc. But knowing the network structure is a prerequisite, and this structure may be very hard and costly to obtain with traditional means. For example, the medical community relies on clinical trials, which cost millions of dollars; the neuroscience community engages in connection tracing with election microscopy, which take years before establishing the connectivity of 100 neurons (the brain contains billions).
This presentation will review recent progresses that have been made in network reconstruction methods based solely on observational data. Great advances have been recently made using machine learning. We will analyze the results of several challenges we organized, which point us to new simple and practical methodologies.
Linear Discriminant Analysis and Its Generalization일상 온
The brief introduction to the linear discriminant analysis and some extended methods. Much of the materials are taken from The Elements of Statistical Learning by Hastie et al. (2008).
OFFLINE SPIKE DETECTION USING TIME DEPENDENT ENTROPY ijbesjournal
Analysis of the neuronal activities is essential in studying nervous system mechanisms.True interpretation of such mechanisms relies on the detection of the neuronal activities, which appear as action potentials or spikes in recorded neural data. So far several algorithms have been developed for spike detection. In this paper such issue is addressed using entropy measures.Transient events like spikes affect the entropy content of a signal. Thus, a time-dependent entropy framework can be used for spike detection where the entropy of each windowed segment of neural data is computed based on a generalized form of entropy. Detection method is tested on different signal to noise ratios. The results show that the time-dependent entropy method in comparison with available methods enables us to detect spikes in their exact time of occurrence with relatively lower false alarm rate.
Cooperative Spectrum Sensing Technique Based on Blind Detection MethodINFOGAIN PUBLICATION
Spectrum sensing is a key task for cognitive radio. Our motivation is to increase the probability of detection of spectrum sensing in cognitive radio. The spectrum-sensing algorithms are proposed based on the statistical methods like EVD,CVD of a covariance matrix. In this Two test statistics are then extracted from the sample covariance matrix. The decision on the signal presence is made by comparing the two test statistics.The Detection probability and the associated threshold are found based on the statistical theory. In this paper, we study the collaborative sensing as a means to improve the performance of the proposed spectrum sensing technique and show their effect on cooperative cognitive radio network. Simulations results and performances evaluation are done in Matlab and the results are tabulated.
This paper introduces the Artifi cial Neural Networks (ANN) function to model probabilistic dependencies, in supervised classification tasks for discrimination between earthquakes and explosions problems. ANNs are regarded as the discriminating tools to classify the natural seismic events (earthquakes) from the artifi cial ones (Man-made explosions) based on the seismic signals recorded at regional distances. The bulk of our novel is to improve the obtained numerical results using this advance technique. The ANNs, by testing the different types of seismic features, showed the potential application of this method to discriminate the classes. During the above study, we found out that the Neural Networks have been used in a fully innovative manner in this work. Here the ARMA coefficients filters detects
the type of the source whenever a natural or artificial source changes the nature of the background noise of the seismograms. During the above study, we found out that this algorithm is sometimes capable to alarm the further natural seismological events just a little before the onset.
A Non Parametric Estimation Based Underwater Target ClassifierCSCJournals
Underwater noise sources constitute a prominent class of input signal in most underwater signal processing systems. The problem of identification of noise sources in the ocean is of great importance because of its numerous practical applications. In this paper, a methodology is presented for the detection and identification of underwater targets and noise sources based on non parametric indicators. The proposed system utilizes Cepstral coefficient analysis and the Kruskal-Wallis H statistic along with other statistical indicators like F-test statistic for the effective detection and classification of noise sources in the ocean. Simulation results for typical underwater noise data and the set of identified underwater targets are also presented in this paper.
Wavelet-based EEG processing for computer-aided seizure detection and epileps...IJERA Editor
Many Neurological disorders are very difficult to detect. One such Neurological disorder which we are going to discuss in this paper is Epilepsy. Epilepsy means sudden change in the behavior of a human being for a short period of time. This is caused due to seizures in the brain. Many researches are going onto detect epilepsy detection through analyzing EEG. One such method of epilepsy detection is proposed in this paper. This technique employs Discrete Wave Transform (DWT) method for pre-processing, Approximate Entropy (ApEn) to extract features and Artificial Neural Network (ANN) for classification. This paper presented a detailed survey of various methods that are being used for epilepsy detection and also proposes a wavelet based epilepsy detection method
Analysis and Comparison of Different Spectrum Sensing Technique for WLANijtsrd
This Paper explores basic two systems of spectrum sensing Cooperative System and Non Cooperative System. Non Cooperative System includes Energy detector, Match Filter and cyclostationary with a performance analysis of transmitter based detection. It also includes analysis of Match Filter and Cyclostationary under low and high SNR, validating the result and applied the technique for Wireless local Area Network WLAN . To identify the no. of detected signal, chi square equation has been solved and finds the threshold. It has been observed during analysis that energy rises at high SNR under AWGN and under high SNR no. of detected signal decreases gradually when the no. of sample increases. When no. of sample increases then the no. of detected signal increases. The results of the detection techniques are reliable in comparison. Energy detection provides good result under high SNR values. All of the simulation work is done in MATLAB software and finalized the best detection technique for spectrum sensing. Abrar Ahmed | Rashmi Raj "Analysis and Comparison of Different Spectrum Sensing Technique for WLAN" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-6 , October 2019, URL: https://www.ijtsrd.com/papers/ijtsrd29174.pdf Paper URL: https://www.ijtsrd.com/engineering/electronics-and-communication-engineering/29174/analysis-and-comparison-of-different-spectrum-sensing-technique-for-wlan/abrar-ahmed
Performance analysis of cooperative spectrum sensing using double dynamic thr...IAESIJAI
Increased use of wireless technologies and in turn more utilization of available spectrum is subsequently leading to the increasing demand for wireless spectrum. This research work incorporates spectrum sensing detection consisting of a double dynamic threshold followed by cooperative type spectrum sensing. The performance has been analyzed using two modulation schemes, quadrature-amplitude-modulation (QAM) and binary-phase-shift-keying (BPSK). Improved probability of detection has been witnessed using the double dynamic threshold where a comparison of average values of local decision (LD) and the observed value of energy (EO) has been considered instead of using direct values of local decisions and
energy. Further, the probability-of-detection (𝑃𝑑) is found to be better with QAM as compared to the BPSK. From the results, it has been observed that the detection of primary users is also affected by the number of samples. The simulation environment considered for this work is MATLAB and the performance of cooperative spectrum sensing for 500 and 1000 samples with -9db and -12 SNR by considering different false alarm values i. e 0.1, 0.3 and 0.5 has been analyzed. The further scope shall be to enhance the primary user detection by considering different QAM schemes and different signal to noise ratio (SNRs).
A Wearable Accelerometer System for Unobtrusive Monitoring of Parkinson’s Dis...Michael J. Montgomery
Abstract: Parkinson’s disease is a complex condition currently monitored at home with paper diaries which rely on subjective and unreliable assessment of motor function at nonstandard time intervals. We present an innovative wearable and unobtrusive monitoring system for patients which can help provide physicians with significantly improved assessment of patients’ responses to drug therapies and lead to better-targeted treatment regimens. In this paper we describe the algorithmic development of the system and an evaluation in patients for assessing the onset and duration of advanced PD motor symptoms.
Neural Networks for High Performance Time-Delay Estimation and Acoustic Sourc...cscpconf
Time-delay estimation is an essential building block of many signal processing applications.
This paper follows up on earlier work for acoustic source localization and time delay estimation
using pattern recognition techniques in the adverse environment such as reverberant rooms or
underwater; it presents unprecedented high performance results obtained with supervised
training of neural networks which challenge the state of the art and compares its performance
to that of well-known methods such as the Generalized Cross-Correlation or Adaptive
Eigenvalue Decomposition.
NEURAL NETWORKS FOR HIGH PERFORMANCE TIME-DELAY ESTIMATION AND ACOUSTIC SOURC...csandit
Time-delay estimation is an essential building block of many signal processing applications.This paper follows up on earlier work for acoustic source localization and time delay estimation
using pattern recognition techniques in the adverse environment such as reverberant rooms or underwater; it presents unprecedented high performance results obtained with supervised training of neural networks which challenge the state of the art and compares its performance to that of well-known methods such as the Generalized Cross-Correlation or Adaptive Eigenvalue Decomposition.
Blind Audio Source Separation (Bass): An Unsuperwised Approach IJEEE
Audio processing is an area where signal separation is considered as a fascinating works, potentially offering a vivid range of new scope and experience in professional and personal context. The objective of Blind Audio Source Separation is to separate audio signals from multiple independent sources in an unknown mixing environment. This paper addresses the key challenges in BASS and unsupervised approaches to counter these challenges. Comparative performance analysis of Fast-ICA algorithm and Convex Divergence ICA for Blind Source Separation is presented with the help of experimental result. Result reflects Convex Divergence ICA with α=-1 gives more accurate estimate in comparison of Fast ICA..
Investigation of Chaotic-Type Features in Hyperspectral Satellite Datacsandit
Hyperspectral images provide detailed spectral info
rmation with more than several hundred
channels. On the other hand, the high dimensionalit
y in hyperspectral images also causes to
classification problems due to the huge ratio betwe
en the number of training samples and the
features. In this paper, Lyapunov Exponents (LEs) a
re used to determine chaotic-type structure
of EO- 1 Hyperion hyperspectral image, a mixed fore
st site in Turkey. Experimental results
demonstrate that EO-1 Hyperion image has a chaotic
structure by checking distribution of
Lyapunov Exponents (LEs) and they can be used as d
iscriminative features to improve
classification accuracy for hyperspectral images.
Spectrum Sensing Detection Techniques for Overlay UsersIJMTST Journal
Spectrum allocated Agency (FCC) is currently working on the concept of white space users “borrowing” spectrum from free license holders temporarily to improve the spectrum utilization, i.e known as dynamic spectrum access (DSA). CRN systems can utilize dispersed spectrum, and thus such approach is known as dispersed spectrum cognitive radio systems. This project provides a tradeoff between a false alarm probability (Pf) and the signal to noise ratio (SNR) value of any spectrum detector to have a certain performance. Moreover, the performance of the cyclostationary detector (CD) and the matched filter detector (MF) is better than the energy detector(ED) especially at low signal to noise ratio values. Unfortunately, the cyclostationary spectrum sensing method, performance is not satisfying when the wireless fading channels are employed. In this project we provide the best trade off for spectrum usage for over lay users.
Expert system design for elastic scattering neutrons optical model using bpnnijcsa
In present paper, a proposed expert system is designed to obtain a trained formulae for the optical model
parameters used in elastic scattering neutrons of light nuclei for (7Li), at energy range between [(1) to
(20)] MeV. A simple algorithm has used to design this expert system, while a multi-layer backwardpropagation
neural network (BPNN) is applied for training and testing the data used in this model. This
group of formulae may get a simple expert system occurring from governing formulae model, and predicts
the critical parameters usually resulted from the complicated computer coding methods. This expert system
may use in nuclear reactions yields in both fission and fusion nature who gives more closely results to the
real model.
Similar to A linear-Discriminant-Analysis-Based Approach to Enhance the Performance of Fuzzy C-means Clustering in Spike Sorting With low-SNR Data (20)
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
The Art Pastor's Guide to Sabbath | Steve ThomasonSteve Thomason
What is the purpose of the Sabbath Law in the Torah. It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
The Indian economy is classified into different sectors to simplify the analysis and understanding of economic activities. For Class 10, it's essential to grasp the sectors of the Indian economy, understand their characteristics, and recognize their importance. This guide will provide detailed notes on the Sectors of the Indian Economy Class 10, using specific long-tail keywords to enhance comprehension.
For more information, visit-www.vavaclasses.com
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
A linear-Discriminant-Analysis-Based Approach to Enhance the Performance of Fuzzy C-means Clustering in Spike Sorting With low-SNR Data
Chien-Wen Cho, Wen-Hung Chao, You-Yin Chen
International Journals of Biometric and Bioinformatics, Volume (1) : Issue (1)
Chien-Wen Cho 1 (gustafcho@gmail.com)
Wen-Hung Chao 1,2 (wenhong@mail.ypu.edu.tw)
You-Yin Chen 2,* (irradiance@so-net.net.tw)
1 Department of Electrical and Control Engineering, National Chiao Tung University, No. 1001, Ta-Hsueh Rd., Hsinchu City, Taiwan 300, R.O.C.
2 Department of Biomedical Engineering, Yuanpei University, No. 306, Yuanpei St., Hsinchu City, Taiwan 300, R.O.C.
* Corresponding author: Tel: +886-3-571-2121 ext 54427; Fax: +886-3-612-5059.
Abstract
Spike sorting is of prime importance in neurophysiology and hence has received
considerable attention. However, conventional methods suffer from the degradation
of clustering results in the presence of high levels of noise contamination. This paper
presents a scheme for taking advantage of automatic clustering and enhancing the
feature extraction efficiency, especially for low-SNR spike data. The method employs
linear discriminant analysis based on a fuzzy c-means (FCM) algorithm. Simulated
spike data [1] were used as the test bed due to better a priori knowledge of the spike
signals. Application to both high and low signal-to-noise ratio (SNR) data showed that
the proposed method outperforms conventional principal-component analysis (PCA)
and the FCM algorithm; FCM alone failed to cluster spikes in low-SNR data. For two
discriminative performance indices based on Fisher's discriminant criterion, the
proposed approach achieved over 1.36 times the between- to within-class variation
ratio of PCA for spike data with SNR ranging from 1.5 to 4.5 dB. In conclusion,
the proposed scheme is unsupervised and can enhance the performance of fuzzy
c-means clustering in spike sorting with low-SNR data.
Keywords: Spike sorting; spike classification; fuzzy c-means; principal-component analysis; linear discriminant analysis; low-SNR.
1. INTRODUCTION
The recording of neural signals is of prime importance to monitoring information transmission by
multiple neurons. The recorded waveform usually consists of action potentials (i.e., spikes) from several
neurons that are in close proximity to the electrode site [2]. A number of reports in systems neuroscience
have assumed that the brain encodes information in the firing rate of neurons. Consequently, detecting
the spiking activity in electrophysiological recordings of the brain is seen to
be essential to the decoding of neural activity.
The analysis of neuronal recordings consists of two main general steps: (1) detecting and confirming
waveform candidates that are possible action potentials, and (2) distinguishing different spikes and
generating a series of spike trains according to the temporal sequence of action potentials. The unique
and reproducible shape of spikes produced by each neuron allows the spiking activity of different
neurons to be distinguished [2-3].
During the past 3 decades, various classification methods ranging from simple amplitude discrimination
to a neural-network classifier have been applied to spike sorting [4-7]. Conventional methods such as
principal-component analysis (PCA), fuzzy c-means (FCM), and the use of simple ad-hoc features such
as the peak-to-peak amplitude and spike duration can be useful for spike recordings when the
signal-to-noise ratio (SNR) is sufficiently high [4, 8]. However, these methods become inadequate for
discriminating spikes in the presence of high background noise.
Many efforts, including supervised and unsupervised approaches, have been dedicated to improving
spike sorting in the presence of high noise. The use of a supervised classifier produces acceptable
results even under a very high background noise [9-10]. However, an automated method for spike
sorting is necessary, at least in the initial analysis of experimental data that sets a basis for further
analysis using some form of supervised classification algorithm for spike sorting. Kim and Kim [11-12]
demonstrated an unsupervised approach comprising a spike detector,
negentropy-maximization-based projection pursuit feature extractor, and an unsupervised classifier
using a mixed Gaussian model. One of the key concepts is that feature extraction and dimensionality
reduction can be combined together using a linear transform expressed as y = W^T x, where x and
y are the observed data and feature vectors, respectively, and W is a linear projection matrix such
that y becomes discriminative so as to aid separation of the clusters. Many types of optimization criteria
can be used to determine an appropriate W, such as maximizing the variance, non-Gaussianity,
negentropy, or the ratio of between- and within-class variations [13-14]. The ratio of between- and
within-class variations (Fisher's linear discriminant criterion) appears to be an especially valid index
since it allows simultaneous balancing of the maximization and minimization of the between- and
within-class variations. Based on Fisher's linear discriminant criterion, linear discriminant analysis (LDA)
then produces a linear projection matrix, and greatly enhances the classification ability. Inspired by this
idea, we have designed an unsupervised spike sorting system that is capable of detecting and
classifying spikes even under a low-SNR condition.
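The Fisher-criterion projection described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: function names, the toy data, and the use of a pseudo-inverse to solve the generalized eigenproblem are our own choices.

```python
import numpy as np

def lda_projection(X, labels, n_components=1):
    """Fisher-criterion LDA: find W maximizing the between-/within-class
    scatter ratio, so that y = W^T x gives discriminative features."""
    classes = np.unique(labels)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Directions maximizing the Fisher ratio solve Sw^{-1} Sb w = lambda w
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:n_components]].real

# Toy example: two 3-D clusters standing in for spike-waveform classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)), rng.normal(5.0, 1.0, (50, 3))])
labels = np.array([0] * 50 + [1] * 50)
W = lda_projection(X, labels)
y = X @ W    # projected (discriminative) features
```

In the proposed scheme, the labels fed to this step come from FCM clustering rather than from supervised knowledge.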
Our method combines action potential detectors [15], LDA-based feature extraction, and FCM
clustering. This combination is unsupervised because the spike features are automatically clustered
by the FCM algorithm. The proposed scheme can also resolve the low-SNR problem thanks to the high
discriminative ability provided by LDA.
2. MATERIALS AND METHODS
2.1 The overall spike sorting system
The proposed system, as illustrated in Fig. 1, is an automated neural spike sorting system that does
not require interactive human input, and shows high performance under a low-SNR condition. The
system can be divided into three main stages. In the first stage, all possible action potentials are obtained
using a detector. In the second stage, the projection matrix is trained for feature extraction. This is
because although LDA can offer more effective discriminating features than PCA or when analyzing
the original signal domains, it has the drawback of needing supervised knowledge about the targets
of action potentials. Therefore, FCM clustering is used to analyze the obtained action potentials
beforehand to form classification targets for LDA training. In the third stage, an LDA projection matrix
projects all the detected action potential candidates to a new domain, the canonical space, which is
named the LDA space in this paper for convenience.
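The second stage, where FCM supplies provisional targets for LDA training, can be sketched with a minimal fuzzy c-means update. This toy NumPy version is ours, with a simplified deterministic initialization, not the paper's implementation.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=50):
    """Minimal fuzzy c-means. Returns the membership matrix U (n_samples, c)."""
    # Deterministic spread initialization (a simplification; random or
    # k-means++-style initialization is more typical in practice).
    centers = X[np.linspace(0, len(X) - 1, c).astype(int)]
    for _ in range(n_iter):
        # Distances of every sample to every cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # Center update: fuzzy-weighted means of the samples
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
    return U

# Toy data standing in for detected spike features (two well-separated groups)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(5.0, 0.3, (20, 2))])
U = fcm(X, c=2)
hard_labels = U.argmax(axis=1)  # provisional targets that would train the LDA stage
```

Hardening the fuzzy memberships with argmax yields the class targets that the LDA stage needs, which is what makes the overall scheme unsupervised.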
3. Chien-Wen Cho, Wen-Hung Chao, You-Yin Chen
International Journals of Biometric and Bioinformatics, Volume (1) : Issue (1) 3
FIGURE 1: Overall structure of the proposed spike sorting system.
2.2 Construction of simulated signals
Simulated signals were constructed from the data records provided by Quiroga et al. [1], which contain
594 different average spike shapes. Background noise was generated by randomly selecting spikes
from the database and superimposing them at random times and amplitudes. This method was
employed to mimic the background noise of actual recordings, which is generated by the activity of
distant neurons. Next, a train of three distinct spike shapes from the database, each with a peak value
of 1, was superimposed on the noise signal at random times. In Eq. (1), we represent the noise level
using the SNR:

SNR (dB) = 20 log_10 (peak value of the action potential with minimum amplitude / root-mean-square value of a pure noise segment)    (1)
By simulation, the interspike intervals of the three distinct spikes followed a Poisson distribution with
a mean firing rate of 20 Hz. Note that constructing noise from spikes also ensures that the noise and
spikes exhibit similar power spectra.
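The construction above can be sketched as follows. The actual spike templates and sampling rate come from the Quiroga et al. database [1] and are not reproduced in the text, so the three template waveforms and the 24 kHz sampling rate below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 24000                      # sampling rate in Hz (assumed, not from the text)
n = int(fs * 2.0)               # two seconds of simulated signal
signal = np.zeros(n)

# Hypothetical stand-ins for three average spike shapes from the database,
# each normalized to a peak value of 1.
t = np.linspace(0.0, 1.0, 64)
templates = [
    np.exp(-((t - 0.3) / 0.08) ** 2) - 0.4 * np.exp(-((t - 0.55) / 0.12) ** 2),
    np.sin(2 * np.pi * t) * np.exp(-3 * t),
    np.exp(-((t - 0.4) / 0.05) ** 2) - 0.7 * np.exp(-((t - 0.6) / 0.10) ** 2),
]
templates = [w / np.abs(w).max() for w in templates]

# Background noise: spikes drawn from the database, superimposed at random
# times and amplitudes, mimicking the activity of distant neurons.
for _ in range(3000):
    w = templates[rng.integers(len(templates))] * rng.normal(0.0, 0.3)
    i = rng.integers(0, n - len(t))
    signal[i:i + len(t)] += w

# Foreground: each of the three units fires with Poisson statistics
# (exponential interspike intervals, 20 Hz mean rate).
for w in templates:
    time = rng.exponential(1 / 20.0)
    while int(time * fs) < n - len(t):
        i = int(time * fs)
        signal[i:i + len(t)] += w
        time += rng.exponential(1 / 20.0)
```

Because the noise is itself built from spike shapes, the simulated background shares the spectral character of the foreground units, as noted above.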
2.3 Spike detection
Spike detection, as the first step of processing the obtained action potentials, can be categorized into
three main groups, based on (1) the peak-to-peak threshold, (2) template matching, and (3) statistical
strategies [15-17]. To avoid the artificial operations needed in the first and second method categories,
we adopted the method proposed by Donoho and Johnstone [15]. The threshold Th is selected as

Th = 4 σ_n    (2)

where σ_n is an estimate of the standard deviation of the background noise:

σ_n = median(|x|) / 0.6745    (3)

and x is the signal comprising spikes and noise. Note that taking the standard deviation of the signal
directly could lead to very high threshold values, especially in cases with high firing rates and large spike
amplitudes. In contrast, by using the estimation based on the median, the interference of the spikes
is diminished (under the reasonable assumption that spikes amount to a small fraction of all samples).
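A minimal sketch of this detector, following Eqs. (2)-(3); the refractory gap used to merge consecutive supra-threshold samples into a single event is an assumption, not specified in the text:

```python
import numpy as np

def detection_threshold(x):
    """Th = 4 * sigma_n, with sigma_n = median(|x|) / 0.6745 (Eqs. 2-3)."""
    sigma_n = np.median(np.abs(x)) / 0.6745
    return 4.0 * sigma_n

def detect_spikes(x, min_gap=30):
    """Return indices where |x| crosses the threshold, merging
    supra-threshold samples closer than min_gap samples into one event."""
    th = detection_threshold(x)
    above = np.flatnonzero(np.abs(x) > th)
    events = []
    last = -min_gap - 1
    for i in above:
        if i - last > min_gap:
            events.append(int(i))
        last = int(i)
    return np.asarray(events), th
```

The median-based estimate keeps the threshold stable even when spikes occupy a noticeable fraction of the samples, which is the point made above.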
2.4 Feature extraction
As reported in the review by Lewicki [3], early studies on spike sorting simply detected spikes using the
height of an action potential. The width and peak-to-peak amplitude were also used to characterize the
shape features of spikes when computing resources were very limited. However, choosing features with
this intuitive approach often results in poor cluster separation. This prompted the use of PCA
to find an ordered set of orthogonal basis vectors that capture the directions of largest variation in the
data [4, 17]. LDA is another commonly used approach for discrimination [13, 18, 19], and has been
experimentally reported to be more efficient than PCA except for very small training sets [18, 20-21]. LDA
aims to find an optimal transformation by maximizing the between-class distance while simultaneously
minimizing the within-class distance, thus achieving maximum discrimination:
max_W Trace{ (W^T S_w W)^{-1} (W^T S_b W) }    (4)
where

S_w = (1/N_T) Σ_{i=1}^{L} Σ_{j=1}^{N_i} (x_ij − m_i)(x_ij − m_i)^T    (5)

S_b = (1/N_T) Σ_{i=1}^{L} N_i (m_i − m_x)(m_i − m_x)^T    (6)
S_w and S_b are the within-class and between-class scatter matrices, respectively; x_ij is the j-th vector
of the i-th class; m_i and m_x are the i-th class center and the overall mean; and L, N_i, and N_T
are the number of classes, the number of vectors in the i-th class, and the total number of data
vectors [22]. By applying eigendecomposition to the scatter matrices, the optimal transformation
W is readily evaluated by computing the eigenvectors of S_w^{-1} S_b. Although LDA has the intrinsic
limitation of requiring one of the scatter matrices in the objective function to be nonsingular, this problem
can be overcome by using the PCA+LDA algorithm [18].
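This training step can be sketched as follows, assuming NumPy and integer class targets (for example, those produced by FCM). The small ridge added to S_w is a stand-in for handling near-singularity, the same problem the PCA+LDA algorithm [18] addresses:

```python
import numpy as np

def lda_projection(X, labels, n_components=2):
    """Fisher LDA: eigenvectors of S_w^{-1} S_b (cf. Eqs. 4-6).
    X is (N, d); labels are integer class targets (e.g. from FCM)."""
    d = X.shape[1]
    m_x = X.mean(axis=0)
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        m_i = Xc.mean(axis=0)
        S_w += (Xc - m_i).T @ (Xc - m_i)          # within-class scatter
        diff = (m_i - m_x)[:, None]
        S_b += len(Xc) * (diff @ diff.T)          # between-class scatter
    S_w /= len(X)
    S_b /= len(X)
    # A small ridge keeps S_w invertible (assumed workaround for the
    # singularity problem noted in the text).
    evals, evecs = np.linalg.eig(np.linalg.solve(S_w + 1e-8 * np.eye(d), S_b))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_components]].real    # project with X @ W
```

The leading eigenvectors maximize the between- to within-class scatter ratio, so projecting with X @ W yields the most discriminative low-dimensional features.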
2.5 Choosing the number of classes and fuzzy c-means clustering
It is important to choose an appropriate number of classes. Bayesian approaches [23] can be used to
estimate the probability of each model under different assumed numbers of classes given the observed
data. Fuzzy approaches have also been used to estimate a suitable number of clusters, as studied by
Xie and Beni [24]. In this study we determined the number of clusters for FCM by performing FCM with
an increasing number of clusters, beginning with two. We investigated the matrix of Mahalanobis
distances between each pair of group means; the Mahalanobis distance is a normalization technique
that does not require a new threshold value to be specified for each experiment. Furthermore, according
to multivariate analysis, we could also apply a sequence of tests based on p-values (with the threshold
set to 0), which can be computed after LDA. This estimation method allows the number of clusters to
be chosen without human intervention.
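One way to compute such a distance matrix is sketched below. The text does not specify which covariance estimate normalizes the distance, so the pooled within-cluster covariance is an assumption here:

```python
import numpy as np

def mahalanobis_between_means(X, labels):
    """Squared Mahalanobis distance between every pair of cluster means,
    normalized by the pooled within-cluster covariance (an assumption;
    the text does not specify the covariance estimate)."""
    classes = np.unique(labels)
    d = X.shape[1]
    S = np.zeros((d, d))
    means = []
    for c in classes:
        Xc = X[labels == c]
        m = Xc.mean(axis=0)
        means.append(m)
        S += (Xc - m).T @ (Xc - m)
    S /= (len(X) - len(classes))                  # pooled covariance
    S_inv = np.linalg.pinv(S)
    k = len(classes)
    D = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            diff = means[i] - means[j]
            D[i, j] = float(diff @ S_inv @ diff)
    return D
</n```

Two redundant clusters whose means nearly coincide produce a near-zero entry in D, which is exactly the signature used in the cluster-count selection described above.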
After selecting a reasonable number of clusters, a clustering algorithm is used to separate the
multidimensional data into groups. Simple approaches such as ISODATA and k-means clustering [25]
can be used; however, we adopted FCM to calculate the fuzzy centers [26], which involves finding a
locally optimal fuzzy clustering of the data by locally minimizing the generalized least-square error
function

J_m(U, v) = Σ_k Σ_i (u_ik)^m ||y_k − v_i||^2    (7)
where u_ik is the membership strength of pattern y_k in cluster i, v_i is the center of the i-th cluster,
y_k is the k-th training pattern, and m is a weighting exponent that sets the fuzziness. The norm in
Eq. (7) is the Euclidean norm

d_ik = ||y_k − v_i|| = [ Σ_n (y_k(n) − v_i(n))^2 ]^{1/2}    (8)
In our method, Eqs. (7) and (8) are used to find the cluster centers that partition the set of training
patterns so as to minimize the least-squared error function J_m(U, v). The clustering process starts
with a random set of centers. The centers and membership strengths are then updated, and the
root-mean-square difference between the current partition U^(n) and the previous one U^(n−1) is
calculated. The process terminates when this difference falls below a specified threshold (0.001% in
the experiments reported here).
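The update loop above can be sketched as a minimal NumPy implementation, with the standard m = 2 fuzziness and the RMS stopping rule from the text:

```python
import numpy as np

def fcm(Y, c, m=2.0, tol=1e-5, max_iter=300, seed=0):
    """Fuzzy c-means: alternate membership/center updates until the RMS
    change of the partition matrix U falls below tol (0.001% in the text)."""
    rng = np.random.default_rng(seed)
    v = Y[rng.choice(len(Y), size=c, replace=False)]   # random initial centers
    U = np.full((c, len(Y)), 1.0 / c)
    for _ in range(max_iter):
        # Squared Euclidean distances d_ik^2 between centers and patterns.
        d2 = ((Y[None, :, :] - v[:, None, :]) ** 2).sum(axis=-1) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U_new = d2 ** (-1.0 / (m - 1.0))
        U_new /= U_new.sum(axis=0, keepdims=True)
        # Center update: weighted means with weights u_ik^m.
        v = (U_new ** m @ Y) / (U_new ** m).sum(axis=1, keepdims=True)
        done = np.sqrt(((U_new - U) ** 2).mean()) < tol
        U = U_new
        if done:
            break
    return U, v
```

Each column of U sums to one, so the memberships can be read directly as soft cluster assignments; hardening them (argmax over clusters) yields the class targets used for LDA training.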
2.6 Performance indices
We used two performance indices based on the scatter matrices to quantitatively compare the efficacy
of the proposed method with those of other approaches based on linear transformations [12]. Each
index is a function of the transform matrix W. The first index is used for Fisher's discriminant function
and is defined as follows:
J_1(W) = det(S̃_b) / det(S̃_w) = det(W^T S_b W) / det(W^T S_w W)    (9)

where S_b and S̃_b represent the between-class matrices before and after the linear projection by W,
respectively, and S_w and S̃_w are the corresponding within-class matrices. The second index, the
generalized Fisher linear discriminant function, which can be interpreted in a similar context, is defined
as

J_2(W) = Trace{ (W^T S_w W)^{-1} (W^T S_b W) }    (10)
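Both indices can be evaluated directly from these definitions. A sketch assuming NumPy, where labels are the cluster assignments and W is any linear projection matrix:

```python
import numpy as np

def scatter_matrices(X, labels):
    """Between- and within-class scatter matrices S_b, S_w (cf. Eqs. 5-6)."""
    d = X.shape[1]
    m_x = X.mean(axis=0)
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        m_i = Xc.mean(axis=0)
        S_w += (Xc - m_i).T @ (Xc - m_i)
        diff = (m_i - m_x)[:, None]
        S_b += len(Xc) * (diff @ diff.T)
    return S_b / len(X), S_w / len(X)

def J1(W, X, labels):
    """Eq. (9): ratio of determinants of the projected scatter matrices."""
    S_b, S_w = scatter_matrices(X, labels)
    return np.linalg.det(W.T @ S_b @ W) / np.linalg.det(W.T @ S_w @ W)

def J2(W, X, labels):
    """Eq. (10): generalized Fisher discriminant (trace form)."""
    S_b, S_w = scatter_matrices(X, labels)
    return float(np.trace(np.linalg.inv(W.T @ S_w @ W) @ (W.T @ S_b @ W)))
```

Larger values of either index indicate greater between-class relative to within-class variation in the projected space, which is how the PCA and LDA projections are compared in Section 3.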
3. RESULTS
3.1 Spike sources and determining the number of clusters
The overall system was tested on three-unit clustering. The simulated signals are shown in Fig. 2A.
We extracted 300 spikes using the thresholding method described in Section 2.3, aligned them with
their highest amplitudes at the center of the waveforms (Fig. 2B), and preprocessed them using PCA
to reduce their dimensionality. In the experiments, the total number of clusters for FCM grouping was
initially set to 2 and was increased gradually. For demonstration purposes, we show and compare only
the clustering results for total cluster numbers from 2 to 4.
FIGURE 2: (A) Simulated signals, with the threshold shown by the horizontal line. (B) Extracted spikes.
The categorization results are illustrated in Figs. 3-5. For grouping with the number of clusters set
to two, the first cluster had relatively consistent waveform members (red lines in Fig. 3), whereas the
second cluster (blue lines in Fig. 3) could be further divided into more classes. The corresponding
average waveforms of these two clusters are shown in the inset of Fig. 3.
FIGURE 3: Grouping results after FCM with the desired number of clusters set to two. The two clusters obtained
are indicated by the red and blue lines. The inset shows the corresponding average waveforms.

Fig. 4 illustrates the results with the total number of clusters set to three, where the waveforms of the
first, second, and third clusters are indicated by the red, blue, and green lines, respectively. It is evident
that this produces better clustering results than those in Fig. 3. To provide an overview of these three
clusters of signals, the inset shows the associated three average waveforms.
FIGURE 4: Grouping results after FCM with the desired number of clusters set to three. The three clusters obtained
are indicated by the red, blue, and green lines. The inset shows the corresponding average waveforms.
FIGURE 5: Grouping results after FCM with the desired number of clusters set to four. Only three meaningful
clusters were produced (indicated by the red, blue, and green lines). The inset shows the corresponding average
waveforms, which clearly indicate that the fourth cluster (black lines) was almost identical to the third cluster.
Finally, Fig. 5 shows the results of having four clusters in FCM clustering. The figure shows that the
third and fourth clusters had almost the same types of waveforms (green and black lines, respectively),
as also confirmed by the corresponding average waveforms plotted in the inset. This indicates that there
were essentially only three units of signals. Table 1 shows the matrix of Mahalanobis distances between
each pair of group means with the number of clusters set to five, in order to validate the estimated
number of clusters. Note that redundant clusters are indicated by a very small Mahalanobis distance
(clusters 4 and 5 are close to cluster 1, with Mahalanobis distances of 3.24 × 10^-10 and
4.12 × 10^-6, respectively).
            Cluster 1        Cluster 2      Cluster 3      Cluster 4        Cluster 5
Cluster 1   -                1.26 × 10^1    4.06 × 10^0    3.24 × 10^-10    4.12 × 10^-6
Cluster 2   1.26 × 10^1      -              1.76 × 10^1    1.26 × 10^1      1.26 × 10^1
Cluster 3   4.06 × 10^0      1.76 × 10^1    -              4.06 × 10^0      4.06 × 10^0
Cluster 4   3.24 × 10^-10    1.26 × 10^1    4.06 × 10^0    -                4.20 × 10^-6
Cluster 5   4.12 × 10^-6     1.26 × 10^1    4.06 × 10^0    4.20 × 10^-6     -

TABLE 1: The matrix of Mahalanobis distances between each pair of group means. Redundant clusters are
indicated by a very small Mahalanobis distance.
3.2 Comparison of clustering abilities without noise contamination
Having determined a reasonable number of clusters by comparing the clustering results, we visualized
the spike data in different planes (for the best illustration, we show only the two most important
coefficients in each plane). Fig. 6A is a scatter plot of the spikes drawn using two of the coefficients
(the 35th and 36th coefficients in this experiment), chosen so that the features had the largest
variations and spread the spike curves. Fig. 6B shows the distribution of the spikes projected onto the
PCA+LDA plane (denoted as LDA for brevity), and Fig. 6D shows the associated two components, or
bases. It is evident that LDA exhibited better discriminative ability than PCA (the scatter plot in Fig. 6C
and the associated principal components in Fig. 6E, respectively). Note that a comparison of Fig. 6A
and Fig. 6B indicates that PCA also separated the data better than the original domain.
3.3 Comparison of clustering ability under a low-SNR condition
To investigate the classification ability for very-low-SNR spike data enhanced by the LDA-based FCM
algorithm, we tested spike data contaminated with random noise to produce SNRs of 1.5, 2.0, 2.5, 3.0,
3.5, 4.0, and 4.5 dB. For brevity, we show only the scatter plot of the spikes before LDA at
SNR = 1.5 dB (Fig. 7A). As illustrated in Fig. 7B, the spikes could only be clustered into two (and
not three) groups due to the presence of high-level noise. Fig. 7C is the scatter plot of these two
groups of spikes. The clustering results of LDA and PCA are shown in Figs. 7D and 7E, respectively.
It is clear that LDA exhibited the best clustering performance. To quantitatively compare the
discriminative abilities of PCA and LDA, Eqs. (9) and (10) were used as two indices to evaluate the
grouping performance. Fig. 8 shows these two indices as functions of the SNR, indicating that
LDA is superior to PCA over a broad range of SNRs.
FIGURE 6: Scatter plots of the spike vectors in different spaces. (A-C) Scatter plots in the original, LDA, and
PCA spaces, respectively. (D and E) Associated LDA and PCA components, respectively.
FIGURE 7: FCM alone failed to generate the third class when SNR = 1.5 dB. (A) Spikes before clustering.
(B) First and second clusters after clustering in the time domain, with the inset showing the two corresponding
average waveforms. (C) Scatter plot of the clustered result in the original space. (D and E) Clustering results
using LDA and PCA, respectively.
4. CONCLUSION & FUTURE WORK
We implemented an automatic neural spike sorting system that does not require interactive human input.
The first step of the proposed scheme involves extracting spikes using a detector. The desired number
of clusters is then iteratively changed with the obtained spikes clustered using FCM, and the distances
between cluster centers are quantified using the Mahalanobis distance. Assigning too many clusters
results in FCM producing clusters with almost identical centers, as shown by Fig. 5 and Table 1. As
reported by Zouridakis and Tam [8], FCM is suitable for clustering spike data with a low noise level,
as verified in Fig. 4. However, they did not investigate the grouping performance in the presence
of very high noise levels. The experiments performed in this study revealed that FCM might fail to
separate spikes into a sufficient number of clusters due to noise contamination (see Figs. 7B and 7C).
Besides FCM, several types of linear transformation have also been used for feature extraction in
spike sorting [27-29]. Among them, PCA is arguably the best known and most widely used; it was
applied to feature extraction by Richmond and Optican [30] for spikes generated by the primate inferior
temporal cortex. However, PCA has limited utility when the signals of interest are sparsely distributed,
such as when the difference between firing patterns is based on one or a few spikes occurring within
a narrow time window. This issue arises because PCA emphasizes global features in the signals [31]:
PCA selects the eigenvectors accounting for the largest variance of the data, but these directions do
not necessarily provide the best separation of the spike classes.
In this study, we utilized FCM as the clustering strategy for implementing an unsupervised spike sorting
system. We further improved the discriminating ability by incorporating LDA [13, 18-19] for feature
extraction, a frequently used method for classification and dimension reduction. LDA and its variants
have been widely used in many applications, particularly in face recognition. LDA aims to find an
optimal transformation by minimizing the within-class distance and maximizing the between-class
distance simultaneously, thus maximizing the discrimination ability. However, such an approach cannot
be implemented in an unsupervised way, and in practice it is very difficult to perform spike sorting by
directly applying supervised approaches during the course of an experiment. We therefore incorporated
FCM to avoid the problem resulting from the lack of a priori knowledge of the spike targets.
The clustering performances of FCM alone, PCA, and LDA were compared by visualizing scatter plots
of the spikes. As shown in Figs. 6 and 7, the proposed scheme outperformed FCM and PCA. This is
because FCM exhibits low noise tolerance, and PCA focuses only on maximizing the variation rather
than the grouping performance. A further comparison of PCA and LDA under a low-SNR condition is
provided in Fig. 8. LDA exhibited better (greater) values for both indices J_1 and J_2. For instance,
at SNR = 1.5 dB, J_1 for LDA and PCA was 50.63 and 23.86, respectively, and J_2 for LDA and PCA
was 28.41 and 20.83, respectively; that is, LDA achieved 2.12 times the ratio of between- and
within-class variation of PCA for J_1, and 1.36 times for J_2. At SNR = 4.5 dB, J_1 for LDA and PCA
was 381.62 and 165.67, respectively, and J_2 for LDA and PCA was 42.27 and 28.25, respectively;
that is, LDA was 1.50 times the ratio of between- and within-class variation of PCA for J_1, and 1.48
times for J_2. These results reveal that LDA improves the grouping performance for low-SNR spike
data.
The performance of the proposed unsupervised spike sorting system could be further improved by
combining more powerful feature extraction approaches, such as wavelet and Gabor transformations
[1, 32-33]. Wavelet transformation has been proven excellent for noise reduction and signal
reconstruction, and Gabor transformation makes use of both time- and frequency-domain information.
Using more feature coefficients to represent spikes makes the efficiency of the dimension-reduction
technique increasingly important. Incorporating such feature extraction methods into the proposed
scheme should be investigated in future work.
FIGURE 8: Performance comparison of LDA and PCA using the two indices J_1 and J_2 as functions of the SNR.
5. REFERENCES
1. R. Q. Quiroga, Z. Nadasdy and Y. Ben-Shaul. “Unsupervised spike detection and sorting with
wavelets and superparamagnetic clustering”. Neural Comp, 16(8):1661-1687, 2004
2. F. Rieke et al. “Spikes: exploring the neural code”, Cambridge, MA: The MIT Press (1996)
3. M. S. Lewicki. “A review of methods for spike sorting: The detection and classification of neural
action potentials”. Network: Computation Neural Syst, 9(4):R53-R78, 1998
4. B. C. Wheeler and W. J. Heetderks. “A comparison of techniques for classification of multiple neural
signals”. IEEE Trans Biomed Eng, 29(12):752-759, 1982
5. K. Mirfakhraei and K. Horch. “Classification of action potentials in multi-unit intrafascicular
recordings using neural network pattern-recognition techniques”. IEEE Trans Biomed Eng,
41(1):89-91, 1994
6. S. N. Gozani and J. P. Miller. “Optimal discrimination and classification of neuronal action potential
waveforms from multiunit, multichannel recordings using software-based linear filters”. IEEE Trans
Biomed Eng, 41(4):358-372, 1994
7. X. Yang and S. A. Shamma. “A totally automated system for the detection and classification of
neural spikes”. IEEE Trans Biomed Eng, 35(10):806-816, 1988
8. G. Zouridakis and D. C. Tam. “Identification of reliable spike templates in multi-unit extracellular
recordings using fuzzy clustering”. Comp Meth Prog Biomed, 61(2):91-98, 2000
9. R. Chandra and L. M. Optican. “Detection, classification, and superposition resolution of action
potentials in multiunit single channel recordings by an on-line real-time neural network”. IEEE Trans
Biomed Eng, 44(5):403-412, 1997
10. K. H. Kim and S. J. Kim. “Neural spike sorting under nearly 0-dB signal-to-noise ratio using nonlinear
energy operator and artificial neural-network classifier”. IEEE Trans Biomed Eng, 47(10):1406-1411,
2000
11. K. H. Kim and S. J. Kim. “Method for unsupervised classification of multiunit neural signal recording
under low signal-to-noise ratio”. IEEE Trans Biomed Eng, 50(4):421-431, 2003
12. K. H. Kim. “Improved algorithm for fully-automated neural spike sorting based on projection pursuit
and Gaussian mixture model”. Inter J Control Autom Syst, 4(6):705-713, 2006
13. C. Liu and H. Wechsler. “Enhanced Fisher linear discriminant model for face recognition”. In
Proceedings of Int Conf Pattern Recognition, 1368-1372, 1998
14. A. Hyvarinen et al. “Independent component analysis”, New York: Wiley (2001)
15. D. Donoho and I. M. Johnstone. “Ideal spatial adaptation by wavelet shrinkage”. Biometrika,
81(3):425-455, 1994
16. R. Tasker, Z. Israel and K. Burchiel. “Microelectrode recording in movement disorder surgery”, New
York: Thieme Medical Publishers (2004)
17. G. L. Gerstein and W. A. Clark. “Simultaneous studies of firing patterns in several neurons”.
Science,143(3612):1325-1327, 1964
18. A. M. Martinez and A. C. Kak. “PCA versus LDA”. IEEE Trans Pattern Analysis Machine Intell,
23(2):228-233, 2001
19. J. Ye, Q. Li. “A two-stage linear discriminant analysis via QR-decomposition”. IEEE Trans Pattern
Analysis Machine Intell, 27(6):929-941, 2005
20. D. L. Swets and J. Weng. “Using discriminant eigenfeatures for image retrieval”. IEEE Trans Pattern
Anal Machine Intell, 18(8):831-836, 1996
21. P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman. “Eigenfaces vs. fisherfaces: recognition
using class specific linear projection”. IEEE Trans Pattern Anal Machine Intell, 19(7):711-720, 1997
22. H. R. Barker. “Multivariate Analysis of Variance (MANOVA): A Practical Guide to Its Use in Scientific
Decision-Making”, AL: University of Alabama Press (1984)
23. M. S. Lewicki. “Bayesian modeling and classification of neural signals”. Neural Comput,
6(5):1005-1030, 1994
24. X. L. Xie and G. A. Beni. “A validity measure for fuzzy clustering”. IEEE Trans Pattern Anal Machine
Intell, 13(8):841-847, 1991
25. R. O. Duda and P. E. Hart. “Pattern Classification and Scene Analysis”, New York: John Wiley &
Sons (1973)
26. J. C. Bezdek, R. Ehrlich and W. Full. “FCM: The fuzzy c-means clustering algorithm”. Comput
Geosci, 10(2):191-203, 1984
27. J. H. Friedman. “Exploratory projection pursuit”. J Ameri Statist Associat, 82(1):249-266, 1987
28. R. A. Fisher. “The use of multiple measurements in taxonomic problems”. Ann eugenics,
7(2):179-188, 1936
29. Y. Meyer. “Wavelet analysis book report”. Bull Am Math Soc (New Series), 28(2):350-360, 1993
30. B. J. Richmond and L. M. Optican. “Temporal encoding of two-dimensional patterns by single units
in primate inferior temporal cortex. II. Quantification of response waveform”. J Neurophysiol,
57(1):147-161, 1987
31. P. S. Penev and J. J. Atick. “Local feature analysis: a general statistical theory for object
representation”. Network: Comput Neural Syst, 7(3):477-500, 1996
32. S. Qian and D. Chen. “Discrete Gabor transform”. IEEE Trans Signal Process, 41(7): 2429-2438,
1993
33. J. C. Letelier and P. P. Weber. “Spike sorting based on discrete wavelet transform coefficients”.
J Neurosci Methods, 101(2):93-106, 2000