IOSR Journal of VLSI and Signal Processing (IOSR-JVSP)
Volume 7, Issue 2, Ver. I (Mar. - Apr. 2017), PP 26-33
e-ISSN: 2319 – 4200, p-ISSN No. : 2319 – 4197
www.iosrjournals.org
DOI: 10.9790/4200-0702012633 www.iosrjournals.org 26 | Page
Performance Analysis of the Sigmoid and Fibonacci Activation
Functions in NGA Architecture for a Generalized Independent
Component Analysis
James Paul CHIBOLE¹,*, Heywood Ouma ABSALOMS¹,², Edward Ng'ang'a NDUNGU¹,³
¹Department of Telecommunication and Information Engineering, School of Electrical, Electronics and Information Engineering, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya.
²Department of Electrical Engineering, Faculty of Engineering, University of Nairobi, Nairobi, Kenya.
³Department of Telecommunication and Information Engineering, School of Electrical, Electronics and Information Engineering, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya.
Abstract: Activation functions transform mixed inputs into their corresponding outputs. They are commonly used as transfer functions in engineering and research, and artificial neural networks (ANNs) are the preferred setting for most studies and comparisons of activation functions. The Sigmoid Activation Function is the most common; its popularity arises from the fact that it is easy to differentiate, it is bounded within the unit interval, and it has mathematical properties that work well with approximation theory. Far less common is the Fibonacci Activation Function, which has similar and perhaps better features than the Sigmoid. Algorithms have a broad range of applications, making it plausible that different problems call for different activation functions. The aim of this paper is to give a detailed account of the role of activation functions and then analyse the performance of two of them – the Sigmoid and the Fibonacci – in a non-ANN setup using basic artificially generated signals. Results show that the Fibonacci activation function performs better with the set of signals applied in the natural gradient algorithm.
Keywords: Fibonacci Activation Function, Sigmoid Activation Function, Independent Component Analysis, Natural Gradient Algorithm.
I. Introduction
The use of adaptive algorithms to solve various problems is predominant because they can adapt their behaviour to reflect the changing characteristics of the modelled system. Karlik and Olgac observe that there has been enormous research into methods aimed at improving the performance of ANNs through optimised training methods, network structures and learning parameters, with little work dedicated to activation functions [1]. Adaptive algorithms that do not use an ANN in their implementation have also received little attention. Among activation functions, the sigmoid is seen as the preferred choice because it attains approximation power much better than other established functions studied in the approximation theory of polynomials and splines [2]. In some advanced studies, the use of the hyperbolic tangent sigmoid in the first and second layers and a linear function in the output layer of an ANN was found most effective within a decision algorithm for transmission system fault detection [3]. Despite the likely usefulness of the Fibonacci Activation Function, there is no real application of it in the literature other than in [4], and only on an experimental basis. The Fibonacci activation function has also been picked up in [5] and used in the separation of two speech signals with promising results. A significant observation is that most studies on activation functions use neural networks. Therefore, a comparison of the Fibonacci activation function (FAF) to the widely preferred alternative, the Sigmoid activation function (SAF), is necessary. A general understanding of activation functions and why they are critical in an algorithm is not well expounded in the literature. What is the performance of FAF when compared to SAF for separating artificially generated signals in terms of quality and stability?
II. Role Of Activation Functions
Input signals are often non-linear in nature. However, during the mixing process such inputs are combined linearly; the mixture presented to the algorithm is simply a linear transformation of the sources.
𝒙 = 𝑨𝒔 (1)
where 𝒙 is the vector of mixed signals, 𝑨 is the mixing matrix and 𝒔 is the vector of input signals. To recover the original signals 𝒔, a reverse transformation through a de-mixing matrix 𝑾 is necessary but often not sufficient.
𝒚 = 𝑾𝒙 (2)
The linear transformation in Equation 2 is necessary, but its output must still be passed through an activation function even if the inputs are linear. Because most inputs are non-linear, 𝒚 can effectively resemble 𝒔 only if the transformation passes through a non-linear activation function; otherwise it is not possible to retain the non-linearity of the input signals. Non-linearity here means that the output cannot be reproduced by combining the inputs linearly.
While the input takes values in the range (-∞, ∞), the output controlled by the activation function is bounded within the interval [0, 1] or [-1, 1]. Without an activation function, it is not possible to map the wide range of inputs to relatively small output values.
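This squashing behaviour is easy to see numerically. The sketch below (plain Python, with illustrative input values) shows the sigmoid mapping arbitrarily large inputs into the unit interval:

```python
import math

def sigmoid(y):
    """Squash an unbounded input into the unit interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-y))

# Inputs of any magnitude map into (0, 1):
print(sigmoid(-50))   # very close to 0
print(sigmoid(0))     # exactly 0.5
print(sigmoid(50))    # very close to 1
```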
One essential characteristic of an activation function is that it must be differentiable within its bounded interval. Differentiation gives the direction in which to adjust the weights. When the derivative value is large, the corresponding weights receive equally large adjustments; by the rules of calculus, a large derivative value indicates that the minimum is still far away. In each iteration, the weights of the algorithm are adjusted in the direction of steepest descent on the surface of the cost function defined by the total error. The error is computed by subtracting the expected values from the observed values, and each weight within the weight matrices is then adjusted according to the calculated error gradient. In a more general sense, the differentiation is done along the activation function curve, with optimisation techniques such as the natural gradient used to find the minimum of the objective function.
The first derivative locates a point on the curve where the tangent line has a slope of zero. A slope of zero suggests the location of a minimum, which can be local or global. The first derivative also tells the algorithm something significant: whether it is headed in the correct direction towards the minimum of the function, because along that path the slope decreases gradually. For a function being minimised, its derivative must therefore be calculated, and the value must be decreasing if the algorithm is following the correct direction. It is for this reason that the activation function must be differentiable within its range, revealing its critical role in algorithms. The better the choice of activation function, the better the algorithm.
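A minimal sketch of this idea, assuming a single hypothetical sigmoid unit with scalar input `x`, weight `w`, target `target` and step size `mu` (none of these values come from the paper), shows the activation derivative steering the weight update:

```python
import math

def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

def d_sigmoid(y):
    # Derivative of the sigmoid: large far from the optimum, small near it.
    s = sigmoid(y)
    return s * (1.0 - s)

# Gradient-descent weight updates for a single sigmoid unit with
# squared error (target - output)^2; all values are illustrative.
x, w, target, mu = 1.5, 0.2, 0.8, 0.5
for k in range(200):
    y = w * x
    out = sigmoid(y)
    grad = -2.0 * (target - out) * d_sigmoid(y) * x   # chain rule
    w -= mu * grad                                    # step toward the minimum

print(abs(sigmoid(w * x) - target))  # error shrinks toward 0
```

The step taken at each iteration is proportional to the derivative of the activation function, which is exactly why the function must be differentiable over its range.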
2.1. Comparison of Activation Functions
This paper compares the commonly used Sigmoid activation function to the Fibonacci, which is less explored in the literature. [4] showed that the Fibonacci activation function outperforms another (unnamed) function given by the equation:
𝜑(𝑦) = (2/3)𝑦³ + (1/3)𝑦⁵ (3)
In fact, the SAF is so common that there is an ANN named after it, the Sigmoid Neurone. Sigmoid neurones are modified versions of the perceptron in which small changes in the biases and weights produce correspondingly small variations in the output [6]. Equation 4 shows the FAF
𝜑(𝑦) = ((√5 − 1)/2)𝑦³ + ((3 − √5)/2)𝑦⁵ (4)
while the SAF has the equation
𝜑(𝑦) = 1/(1 + 𝑒⁻ʸ) (5)
and their MATLAB-generated graphs are shown in Figure 1 below.
Figure 1: Sigmoid and Fibonacci Activation Functions
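The paper plots the two functions in MATLAB; an equivalent Python sketch of Equations 4 and 5 (sampling over [-1, 1] rather than plotting) is:

```python
import math

SQRT5 = math.sqrt(5.0)

def saf(y):
    """Sigmoid activation function, Equation 5."""
    return 1.0 / (1.0 + math.exp(-y))

def faf(y):
    """Fibonacci activation function, Equation 4 (golden-ratio coefficients)."""
    return (SQRT5 - 1.0) / 2.0 * y**3 + (3.0 - SQRT5) / 2.0 * y**5

# Sample both over [-1, 1], the interval plotted in Figure 1:
ys = [i / 10.0 for i in range(-10, 11)]
table = [(y, saf(y), faf(y)) for y in ys]
print(table[-1])  # at y = 1 the FAF coefficients sum to 1
```

Note that the FAF coefficients (√5 − 1)/2 and (3 − √5)/2 sum to 1, so the FAF maps [-1, 1] onto [-1, 1], whereas the SAF is bounded within (0, 1).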
An activation function is a cumulative distribution function (cdf); the reason for its use is explained below. Assume there is a random variable 𝑠 with probability density 𝑃ₛ(𝑠). The cumulative distribution function is given as:
𝐹(𝑠) = 𝑃(𝑆 ≤ 𝑠) (6)
Suppose this is the Gaussian density; then its cdf is as shown in Figure 2.
Figure 2: Density function (a) and its cdf (b)
The height of the cdf at 𝑠 (Figure 2(b)) equals the area under the Gaussian density (Figure 2(a)) up to the point 𝑠; as 𝑠 increases, the cdf approaches 1. Equation 7 is another way of writing the cdf of Equation 6.
𝐹(𝑠) = ∫₋∞ˢ 𝑃ₛ(𝑡) 𝑑𝑡 (7)
Suppose there is a random variable 𝑠 and the aim is to model its distribution. Two options exist: the first is to specify the density 𝑃ₛ(𝑠), the second is to specify the cdf 𝐹(𝑠). Equation 7 relates the two options. Most literature prefers the cdf option because of the ease of calculation and because it is continuous over the specified space; continuous here means it is differentiable over its range. In signal processing and other applications, the cdf is commonly referred to as the activation function. It is therefore easy to recover the density 𝑃ₛ(𝑠) by taking the cdf and computing its derivative
𝑃ₛ(𝑠) = 𝐹′(𝑠) (8)
There is always a step in such algorithms at which a distribution for 𝑠 is assumed, by specifying either the density 𝑃ₛ or the cdf. In practice, this assumption amounts to choosing an activation function that can model the output of the signals to be separated. The cdf, or activation function, has to be some function increasing from 0 to 1, or over some other specified range.
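The relationship between Equations 7 and 8 can be checked numerically for the standard Gaussian. The sketch below uses trapezoidal integration and a central difference, both illustrative choices rather than the paper's method:

```python
import math

def gauss_pdf(t):
    """Standard Gaussian density."""
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

def gauss_cdf(s, lo=-8.0, n=4000):
    # F(s) = integral of the density from -inf to s (Equation 7),
    # approximated with the trapezoidal rule starting from `lo`.
    h = (s - lo) / n
    total = 0.5 * (gauss_pdf(lo) + gauss_pdf(s))
    for i in range(1, n):
        total += gauss_pdf(lo + i * h)
    return total * h

# Recover the density from the cdf by differentiation (Equation 8):
eps = 1e-3
recovered = (gauss_cdf(eps) - gauss_cdf(-eps)) / (2.0 * eps)
print(gauss_cdf(0.0), recovered)  # ~0.5 and ~0.3989
```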
III. Generalized Independent Component Analysis
Let us assume that the data comes from 𝑛 original sources, such as speakers:
𝑠 ∈ ℝⁿ (9)
where 𝑠ⱼ⁽ⁱ⁾ is the signal from speaker 𝑗 at time 𝑖. We observe
𝑥⁽ⁱ⁾ = 𝐴𝑠⁽ⁱ⁾ (10)
where 𝑥 ∈ ℝⁿ. This means that there are 𝑛 microphones, each of which records some linear combination of what the speakers say:
𝑥ⱼ⁽ⁱ⁾ = Σₖ 𝐴ⱼₖ 𝑠ₖ⁽ⁱ⁾ (11)
Equation 11 can be interpreted to mean that the 𝑗th microphone records a linear combination of what all the speakers are saying at time 𝑖. The goal is to find the matrix 𝑊 = 𝐴⁻¹ so that
𝑠⁽ⁱ⁾ = 𝑊𝑥⁽ⁱ⁾ (12)
making it possible to recover the original sources as the linear combination 𝑊𝑥⁽ⁱ⁾ of the observations.
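The mixing and demixing of Equations 10-12 can be sketched with two hypothetical uniform sources and an assumed 2×2 mixing matrix. When 𝐴 is known, 𝑊 = 𝐴⁻¹ recovers the sources exactly; ICA must estimate 𝑊 when 𝐴 is unknown:

```python
import random

random.seed(0)

# Two hypothetical uniform sources on [-1, 1] (Equation 9):
n = 1000
s = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(2)]

# An assumed 2x2 mixing matrix A (Equation 10): x = A s
A = [[1.0, 0.5],
     [0.3, 1.0]]
x = [[A[j][0] * s[0][i] + A[j][1] * s[1][i] for i in range(n)] for j in range(2)]

# With A known, W = A^{-1} recovers the sources exactly (Equation 12);
# ICA has to estimate W blindly from x alone.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
W = [[ A[1][1] / det, -A[0][1] / det],
     [-A[1][0] / det,  A[0][0] / det]]
y = [[W[j][0] * x[0][i] + W[j][1] * x[1][i] for i in range(n)] for j in range(2)]

print(max(abs(y[0][i] - s[0][i]) for i in range(n)))  # ~0 (round-off only)
```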
ICA is applicable here because 𝑠ⱼ⁽ⁱ⁾ ∼ Uniform[−1, 1]: each speaker outputs uniform white noise.
Figure 3: Original source signals 𝑠1and 𝑠2
Figure 3 shows part of the original sources 𝑠₁ and 𝑠₂, which is what the two speakers will be outputting, because each speaker emits independent Uniform[−1, 1] random variables. The microphones, however, record
Figure 4: Microphone observed signals
From Figure 4, it is easy to tell the axes of this parallelogram, and it is also easy to note which linear transformation is responsible for transforming Figure 4 into Figure 3.
There are ambiguities in ICA. One is that it is not possible to recover the original ordering of the sources. In particular, when ICA is run on data generated from speaker-1 and speaker-2, there is a possibility of ending up with the speakers in reversed order. That corresponds to taking Figure 4 and flipping it along the 45° axis: the reflection leaves the picture within the box either way. No algorithm can tell which source was speaker-1 and which was speaker-2; the ordering is ambiguous. The other source of ambiguity is the sign of the sources: it is not possible to tell whether positive 𝑠₁⁽ⁱ⁾ or negative 𝑠₁⁽ⁱ⁾ has been recovered. In Figure 4, this corresponds to reflecting the figure along the vertical or the horizontal axis, which again leaves the box unchanged. In this example, then, there is only a guarantee of recovering positive 𝑠⁽ⁱ⁾ or negative 𝑠⁽ⁱ⁾. These are, therefore, the only two ambiguities in this case – the permutation of the speakers and the sign of the speakers. It turns out that the reason these are the only sources of ambiguity is that 𝑠ⱼ⁽ⁱ⁾ is non-Gaussian, an essential characteristic for ICA to work.
Figure 5: Gaussian Signals
Suppose instead that the original sources were Gaussian (Figure 5), so that 𝑠⁽ⁱ⁾ ∼ 𝒩(0, 𝐼). The Gaussian is a spherically symmetric distribution; Figure 5 shows the contours of a Gaussian distribution with identity covariance. If the data is Gaussian, then it is impossible to use ICA.
Mathematically, let 𝑠 ∈ ℝⁿ with probability density function 𝑃ₛ(𝑠). Then 𝑥 = 𝐴𝑠 = 𝑊⁻¹𝑠, where 𝑠 = 𝑊𝑥. With this information at hand, the next step is to find the probability density function of 𝑥, that is, 𝑃ₓ(𝑥). It is tempting to substitute the above equations and get
𝑃ₓ(𝑥) = 𝑃ₛ(𝑊𝑥) (13)
Equation 13 holds for discrete distributions but not for continuous density functions. For a continuous density function, the right formula is
𝑃ₓ(𝑥) = 𝑃ₛ(𝑊𝑥)|𝑊| (14)
where the additional |𝑊| in Equation 14 is the determinant of the matrix 𝑊. (Shortly this determinant appears in the NGA in an applied sense, written det 𝑊.) As an example, suppose
𝑃ₛ(𝑠) = 𝐼{0 ≤ 𝑠 ≤ 1} (15)
so that the density of 𝑠 is the uniform distribution over [0, 1]. Figure 6 illustrates this distribution.
Figure 6: Density distribution of 𝑠 over [0, 1] interval
If 𝑠 is uniform over [0, 1], then 𝑥 = 2𝑠 is uniform over [0, 2]; here 𝐴 = 2 and 𝑊 = 1/2. The density of 𝑥 is shown in Figure 7.
Figure 7: Density distribution of 𝑥 over [0, 2] interval
In this case 𝑃ₓ(𝑥) = 𝐼{0 ≤ 𝑥 ≤ 2} · (1/2), where 𝐼 denotes the indicator function.
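A Monte Carlo check of this density scaling (the sample size and bin width are arbitrary illustrative choices):

```python
import random

random.seed(1)

# s ~ Uniform[0, 1], x = 2s: by Equation 14 the density of x is
# P_x(x) = P_s(x/2) * |W| = 1 * (1/2) on [0, 2].
samples = [2.0 * random.random() for _ in range(100000)]

# Empirical density over [0, 2] in four bins; each should be ~0.5:
bins = [0, 0, 0, 0]
for x in samples:
    bins[min(int(x / 0.5), 3)] += 1
density = [b / (len(samples) * 0.5) for b in bins]
print(density)  # each entry close to 0.5
```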
IV. Natural Gradient Model Architecture
The natural gradient algorithm (NGA) is an improvement on a shortcoming of the stochastic gradient algorithm (SGA), which demands that the initial condition be specified. In both the SGA and the modified NGA, the aim is to adjust the coefficients of the demixing matrix. Adjusting 𝑊(𝑘) means that the joint probability density function (jPDF) of the separated signal 𝑦(𝑘) is made as close as possible to the assumed distribution 𝑝̂ᵧ(𝑦) [7] in each iteration of the algorithm. This closeness has been successfully measured by Cardoso using the Kullback-Leibler divergence [8].
𝐾𝐿(𝑝ᵧ ∥ 𝑝̂ᵧ) = ∫ 𝑝ᵧ(𝑦) log(𝑝ᵧ(𝑦)/𝑝̂ᵧ(𝑦)) 𝑑𝑦 (16)
where 𝑝ᵧ(𝑦) is the actual distribution and 𝑝̂ᵧ(𝑦) is the assumed distribution of the estimated signal vector. In the ideal situation 𝑝ᵧ = 𝑝̂ᵧ, giving a 𝐾𝐿 of zero. In fact, Equation 16 can correctly be referred to as the objective function [5].
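A discrete-distribution sketch of Equation 16 (the example distributions below are made up for illustration):

```python
import math

def kl(p, q):
    """Discrete analogue of Equation 16: KL(p || q) = sum p log(p/q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.25, 0.25, 0.25, 0.25]   # actual distribution
q = [0.40, 0.30, 0.20, 0.10]   # assumed distribution

print(kl(p, p))  # 0.0: the ideal case, actual equals assumed
print(kl(p, q))  # positive: mismatch between actual and assumed
```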
The tricky part is choosing 𝑝̂ᵧ(𝑦), which makes Equation 16 unreliable in many applications. An instantaneous cost function with the same expected value as 𝐾𝐿(𝑝ᵧ ∥ 𝑝̂ᵧ) is given by
𝐽(𝑊) = −log(|det 𝑊| ∏ᵢ₌₁ᵐ 𝑝ₛ(𝑦ᵢ(𝑘))) (17)
where det 𝑊 denotes the determinant of 𝑊. The cost function (Equation 17) can be developed further to yield the stochastic gradient descent update
𝑊(𝑘 + 1) = 𝑊(𝑘) + 𝜇(𝑘)[𝑊⁻ᵀ(𝑘) − 𝜑(𝑦(𝑘))𝑥ᵀ(𝑘)] (18)
where 𝜑(𝑦) = [𝜑(𝑦₁), …, 𝜑(𝑦ₘ)]ᵀ with 𝜑(𝑦) = −𝑑 log 𝑝ₛ(𝑦)/𝑑𝑦, which algorithmically is the activation function, and 𝜇(𝑘) is the step size used by the algorithm to move from one iteration to the next.
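A toy sketch of the update in Equation 18, using the FAF of Equation 4 as 𝜑; the dimensions, step size, iteration count and random seed are illustrative assumptions (the paper's experiment uses four sources, two are used here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(y):
    # Fibonacci activation, Equation 4, applied element-wise.
    c3 = (np.sqrt(5) - 1) / 2
    c5 = (3 - np.sqrt(5)) / 2
    return c3 * y**3 + c5 * y**5

# Repeated SGA updates (Equation 18) on a toy 2-source mixture:
n, mu = 2, 0.001
A = rng.uniform(-1, 1, (n, n))
W = 0.1 * np.eye(n)

for k in range(5000):
    s = rng.uniform(-1, 1, n)          # sub-Gaussian sample at time k
    x = A @ s                          # observed mixture
    y = W @ x                          # current separation estimate
    W = W + mu * (np.linalg.inv(W.T) - np.outer(phi(y), x))

# W @ A should approach a scaled permutation matrix if the run converges.
print(W @ A)
```

Note the 𝑊⁻ᵀ term: it is this matrix inversion at every step that the natural gradient modification below removes.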
According to [7], Equation 18 is able to perform blind signal separation (BSS) for any simple choice of 𝜑(𝑦). The stochastic gradient descent of Equation 18 is, however, limited in application because of its slow convergence. A modification of the stochastic gradient descent that removes the effect of an ill-conditioned mixing matrix 𝐴 is the natural gradient algorithm by Amari [9], or the relative gradient by Cardoso [8]. In the NGA, the initial condition is not a concern because the algorithm works on the overall system matrix 𝐶(𝑘). The full derivation of the NGA is found in [7], with its final equation given as:
𝐶(𝑘 + 1) = 𝐶(𝑘) + 𝜇(𝑘)[𝐼 − 𝜑(𝐶(𝑘)𝑠(𝑘))𝑠ᵀ(𝑘)𝐶ᵀ(𝑘)]𝐶(𝑘) (19)
From Equation 19 it is evident that the mixing matrix 𝐴 plays no role in the separation process, as it is only used in the initial condition 𝐶(0) = 𝑊(0)𝐴. Therefore, even if the mixing matrix is poorly conditioned, it will not affect the quality of the separated signals. The NGA works even better if the inputs 𝑠(𝑘) adhere to ICA's independence requirement. Despite the modifications, both the SGA and the NGA require a good choice of the activation function 𝜑(·), as shown in Equation 19.
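Equation 19 can be sketched in the same way; note that 𝐴 enters only through 𝐶(0) = 𝑊(0)𝐴, and that no matrix inversion is needed. The dimensions, step size, iteration count and random seed are again illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(y):
    # Fibonacci activation, Equation 4, applied element-wise.
    c3 = (np.sqrt(5) - 1) / 2
    c5 = (3 - np.sqrt(5)) / 2
    return c3 * y**3 + c5 * y**5

# NGA iteration on the overall system matrix C(k) (Equation 19):
n, mu = 2, 0.001
A = rng.uniform(-1, 1, (n, n))
C = 0.1 * np.eye(n) @ A                 # C(0) = W(0) A with W(0) = 0.1 I

for k in range(20000):
    s = rng.uniform(-1, 1, n)
    y = C @ s                           # note phi(C s) s^T C^T = phi(y) y^T
    C = C + mu * (np.eye(n) - np.outer(phi(y), y)) @ C

# At convergence C should approach a scaled permutation of the identity.
print(C)
```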
V. Experiment
Four sub-Gaussian signals were used as inputs to the algorithm:
𝑠₁(𝑡) = sawtooth(𝑡)
𝑠₂(𝑡) = square(𝑡)
𝑠₃(𝑡) = sin(𝑡)
𝑠₄(𝑡) = cos(𝑡)
(20)
The four inputs have a duration of 100 seconds with a sample time of 0.01 s. The MATLAB waveforms of these signals are shown below.
Figure 8: From top to bottom – sawtooth, square, sine and cosine signals
The inputs are then mixed using a 4-by-4 mixing matrix generated randomly in the range [-1, +1]. The output of this mixture is shown below.
Figure 9: Mixed input signals
The NGA is then applied to the mixture of Figure 9, with the Fibonacci and the Sigmoid activation functions used in turn. The learning rate is set at 𝝁 = 0.001 and the initial demixing matrix at 𝑾₀ = 0.1𝑰.
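The signal generation and mixing described above can be sketched in Python (the paper uses MATLAB; the sawtooth period of 2π and the random seed here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# The four sub-Gaussian inputs of Equation 20: 100 s at a 0.01 s sample time.
t = np.arange(0, 100, 0.01)
saw = 2 * (t / (2 * np.pi) - np.floor(t / (2 * np.pi) + 0.5))  # sawtooth in [-1, 1]
sources = np.vstack([
    saw,
    np.sign(np.sin(t)),   # square wave
    np.sin(t),
    np.cos(t),
])

# Random 4x4 mixing matrix in [-1, +1], as in the experiment:
A = rng.uniform(-1, 1, (4, 4))
x = A @ sources           # the mixed signals of Figure 9

print(sources.shape, x.shape)  # (4, 10000) each
```

The mixture `x` would then be passed to the NGA iteration of Equation 19 with 𝜇 = 0.001 and 𝑊(0) = 0.1𝐼.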
VI. Performance Comparison
The outputs from the two activation functions are shown below
Figure 10: Separated Signals using FAF
Figure 11: Separated Signals using SAF
It is evident from the two outputs in Figures 10 and 11 that the FAF realizes much better results than the SAF. Further tests on the convergence and stability of the algorithm using the FAF and the SAF were done, with results shown in Figure 12.
Figure 12: Convergence rate of FAF and SAF
At 10000 iterations it is clear that the algorithm converges much faster when the FAF is employed than when the SAF is used. Figure 12 also indicates that the stability of the algorithm is much higher with the FAF than with the SAF, as shown by the fewer iterations required to reach convergence.
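One standard way to quantify such separation quality, shown here as an illustrative assumption rather than the paper's own metric, is the Amari performance index of the overall system matrix 𝐶 = 𝑊𝐴, which is 0 for perfect separation up to scaling and permutation:

```python
import numpy as np

def performance_index(C):
    """Amari performance index of the overall system matrix C = W A.

    0 means perfect separation up to scaling and permutation; larger
    values mean more residual cross-talk between the recovered sources.
    """
    P = np.abs(C)
    n = P.shape[0]
    rows = np.sum(P / P.max(axis=1, keepdims=True), axis=1) - 1
    cols = np.sum(P / P.max(axis=0, keepdims=True), axis=0) - 1
    return (rows.sum() + cols.sum()) / (2 * n * (n - 1))

print(performance_index(np.eye(4)))        # 0.0: perfect separation
print(performance_index(np.ones((4, 4))))  # 1.0: no separation at all
```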
Finally, a test on dynamic error was also done.
Figure 13: Dynamic error
Comparing each input to its corresponding output signal reveals that the FAF gives a smaller dynamic error than the SAF.
VII. Conclusion
The findings reveal that the performance of algorithms is significantly affected by the choice of activation function. With more focus on activation function research, there are possibilities of algorithms performing better on problems that have long been identified with specific activation functions. For example, despite the overwhelming use of the Sigmoid in research and engineering applications, this study has shown that the Fibonacci is an equally promising activation function. Specifically, the findings show that the FAF, when used in the NGA, makes the algorithm more stable and converge much faster than the SAF does. The FAF also realizes a smaller error than the SAF.
References
[1]. B. Karlik and V. A. Olgac, "Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks," International Journal of Artificial Intelligence and Expert Systems (IJAE), vol. 1, no. 4, pp. 111-122, 2011.
[2]. B. DasGupta and G. Schnitger, "The Power of Approximating: A Comparison of Activation Functions," Advances in Neural Information Processing Systems, vol. 5, pp. 615-622, 1993.
[3]. N. Suttisinthong, B. Seewirote and A. Ngaopitakkul, "Selection of Proper Activation Functions in Back-propagation Neural Network Algorithm for Single-Circuit Transmission Line," in Proceedings of the International MultiConference of Engineers and Computer Scientists, IMECS, Hong Kong, 2014.
[4]. L. Lei, W. Yu and W. Xing-Hui, "Natural gradient algorithm based on a class of activation functions and its applications in BSS," in Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 2006.
[5]. J. P. Chibole, "Blind separation of two human speech signals using natural gradient algorithm by employing the assumptions of independent component analysis," in Proceedings of the 2014 International Annual Conference on Sustainable Research and Innovation, Nairobi, 2014.
[6]. M. Nielsen, Neural Networks and Deep Learning, New York: Determination Press, 2015.
[7]. D. C. Scott, "Blind Signal Separation and Blind Deconvolution," New York: CRC Press LLC, 2002.
[8]. J. F. Cardoso, "Blind signal separation: statistical principles," Proceedings of the IEEE, Cambridge, MA, 1998.
[9]. S. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, pp. 251-276, 1998.
 
Adaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsAdaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsIJCSEA Journal
 
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYING
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYINGLINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYING
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYINGAIRCC Publishing Corporation
 
Linear Feedback Shift Register Genetically Adjusted for Sequence Copying
Linear Feedback Shift Register Genetically Adjusted for Sequence CopyingLinear Feedback Shift Register Genetically Adjusted for Sequence Copying
Linear Feedback Shift Register Genetically Adjusted for Sequence CopyingAIRCC Publishing Corporation
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGRevanth Kumar
 
A partial linearization method for the traffic assignment problem.pdf
A partial linearization method for the traffic assignment problem.pdfA partial linearization method for the traffic assignment problem.pdf
A partial linearization method for the traffic assignment problem.pdfAnn Wera
 
Optimization of Fuzzy Logic controller for Luo Converter using Genetic Algor...
Optimization of Fuzzy Logic controller for Luo Converter using  Genetic Algor...Optimization of Fuzzy Logic controller for Luo Converter using  Genetic Algor...
Optimization of Fuzzy Logic controller for Luo Converter using Genetic Algor...IRJET Journal
 
Fpga based artificial neural network
Fpga based artificial neural networkFpga based artificial neural network
Fpga based artificial neural networkHoopeer Hoopeer
 

Similar to Performance Analysis of the Sigmoid and Fibonacci Activation Functions in NGA Architecture for a Generalized Independent Component Analysis (20)

Performance Analysis of Various Activation Functions in Generalized MLP Archi...
Performance Analysis of Various Activation Functions in Generalized MLP Archi...Performance Analysis of Various Activation Functions in Generalized MLP Archi...
Performance Analysis of Various Activation Functions in Generalized MLP Archi...
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approach
 
Design of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approachDesign of airfoil using backpropagation training with mixed approach
Design of airfoil using backpropagation training with mixed approach
 
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTORARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR
 
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
 
Local Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle OptimizationLocal Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
Local Binary Fitting Segmentation by Cooperative Quantum Particle Optimization
 
Web spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithmsWeb spam classification using supervised artificial neural network algorithms
Web spam classification using supervised artificial neural network algorithms
 
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network AlgorithmsWeb Spam Classification Using Supervised Artificial Neural Network Algorithms
Web Spam Classification Using Supervised Artificial Neural Network Algorithms
 
On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...On The Application of Hyperbolic Activation Function in Computing the Acceler...
On The Application of Hyperbolic Activation Function in Computing the Acceler...
 
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
Solution of Inverse Kinematics for SCARA Manipulator Using Adaptive Neuro-Fuz...
 
NNAF_DRK.pdf
NNAF_DRK.pdfNNAF_DRK.pdf
NNAF_DRK.pdf
 
Adaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errorsAdaptive modified backpropagation algorithm based on differential errors
Adaptive modified backpropagation algorithm based on differential errors
 
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYING
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYINGLINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYING
LINEAR FEEDBACK SHIFT REGISTER GENETICALLY ADJUSTED FOR SEQUENCE COPYING
 
Linear Feedback Shift Register Genetically Adjusted for Sequence Copying
Linear Feedback Shift Register Genetically Adjusted for Sequence CopyingLinear Feedback Shift Register Genetically Adjusted for Sequence Copying
Linear Feedback Shift Register Genetically Adjusted for Sequence Copying
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNING
 
A partial linearization method for the traffic assignment problem.pdf
A partial linearization method for the traffic assignment problem.pdfA partial linearization method for the traffic assignment problem.pdf
A partial linearization method for the traffic assignment problem.pdf
 
Optimization of Fuzzy Logic controller for Luo Converter using Genetic Algor...
Optimization of Fuzzy Logic controller for Luo Converter using  Genetic Algor...Optimization of Fuzzy Logic controller for Luo Converter using  Genetic Algor...
Optimization of Fuzzy Logic controller for Luo Converter using Genetic Algor...
 
Fpga based artificial neural network
Fpga based artificial neural networkFpga based artificial neural network
Fpga based artificial neural network
 
AI Lesson 32
AI Lesson 32AI Lesson 32
AI Lesson 32
 
Lesson 32
Lesson 32Lesson 32
Lesson 32
 

More from IOSRJVSP

PCIe BUS: A State-of-the-Art-Review
PCIe BUS: A State-of-the-Art-ReviewPCIe BUS: A State-of-the-Art-Review
PCIe BUS: A State-of-the-Art-ReviewIOSRJVSP
 
Implementation of Area & Power Optimized VLSI Circuits Using Logic Techniques
Implementation of Area & Power Optimized VLSI Circuits Using Logic TechniquesImplementation of Area & Power Optimized VLSI Circuits Using Logic Techniques
Implementation of Area & Power Optimized VLSI Circuits Using Logic TechniquesIOSRJVSP
 
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...IOSRJVSP
 
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...IOSRJVSP
 
Analyzing the Impact of Sleep Transistor on SRAM
Analyzing the Impact of Sleep Transistor on SRAMAnalyzing the Impact of Sleep Transistor on SRAM
Analyzing the Impact of Sleep Transistor on SRAMIOSRJVSP
 
Robust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory InterfaceRobust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory InterfaceIOSRJVSP
 
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...Color Particle Filter Tracking using Frame Segmentation based on JND Color an...
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...IOSRJVSP
 
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...IOSRJVSP
 
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...IOSRJVSP
 
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...IOSRJVSP
 
P-Wave Related Disease Detection Using DWT
P-Wave Related Disease Detection Using DWTP-Wave Related Disease Detection Using DWT
P-Wave Related Disease Detection Using DWTIOSRJVSP
 
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...Design and Synthesis of Multiplexer based Universal Shift Register using Reve...
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...IOSRJVSP
 
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...IOSRJVSP
 
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...IOSRJVSP
 
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design System
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design SystemIC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design System
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design SystemIOSRJVSP
 
Design & Implementation of Subthreshold Memory Cell design based on the prima...
Design & Implementation of Subthreshold Memory Cell design based on the prima...Design & Implementation of Subthreshold Memory Cell design based on the prima...
Design & Implementation of Subthreshold Memory Cell design based on the prima...IOSRJVSP
 
Retinal Macular Edema Detection Using Optical Coherence Tomography Images
Retinal Macular Edema Detection Using Optical Coherence Tomography ImagesRetinal Macular Edema Detection Using Optical Coherence Tomography Images
Retinal Macular Edema Detection Using Optical Coherence Tomography ImagesIOSRJVSP
 
Speech Enhancement Using Spectral Flatness Measure Based Spectral Subtraction
Speech Enhancement Using Spectral Flatness Measure Based Spectral SubtractionSpeech Enhancement Using Spectral Flatness Measure Based Spectral Subtraction
Speech Enhancement Using Spectral Flatness Measure Based Spectral SubtractionIOSRJVSP
 
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...IOSRJVSP
 
Real Time System Identification of Speech Signal Using Tms320c6713
Real Time System Identification of Speech Signal Using Tms320c6713Real Time System Identification of Speech Signal Using Tms320c6713
Real Time System Identification of Speech Signal Using Tms320c6713IOSRJVSP
 

More from IOSRJVSP (20)

PCIe BUS: A State-of-the-Art-Review
PCIe BUS: A State-of-the-Art-ReviewPCIe BUS: A State-of-the-Art-Review
PCIe BUS: A State-of-the-Art-Review
 
Implementation of Area & Power Optimized VLSI Circuits Using Logic Techniques
Implementation of Area & Power Optimized VLSI Circuits Using Logic TechniquesImplementation of Area & Power Optimized VLSI Circuits Using Logic Techniques
Implementation of Area & Power Optimized VLSI Circuits Using Logic Techniques
 
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...
Design of Low Voltage D-Flip Flop Using MOS Current Mode Logic (MCML) For Hig...
 
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...
Measuring the Effects of Rational 7th and 8th Order Distortion Model in the R...
 
Analyzing the Impact of Sleep Transistor on SRAM
Analyzing the Impact of Sleep Transistor on SRAMAnalyzing the Impact of Sleep Transistor on SRAM
Analyzing the Impact of Sleep Transistor on SRAM
 
Robust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory InterfaceRobust Fault Tolerance in Content Addressable Memory Interface
Robust Fault Tolerance in Content Addressable Memory Interface
 
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...Color Particle Filter Tracking using Frame Segmentation based on JND Color an...
Color Particle Filter Tracking using Frame Segmentation based on JND Color an...
 
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...
Design and FPGA Implementation of AMBA APB Bridge with Clock Skew Minimizatio...
 
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...
Simultaneous Data Path and Clock Path Engineering Change Order for Efficient ...
 
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...
Design And Implementation Of Arithmetic Logic Unit Using Modified Quasi Stati...
 
P-Wave Related Disease Detection Using DWT
P-Wave Related Disease Detection Using DWTP-Wave Related Disease Detection Using DWT
P-Wave Related Disease Detection Using DWT
 
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...Design and Synthesis of Multiplexer based Universal Shift Register using Reve...
Design and Synthesis of Multiplexer based Universal Shift Register using Reve...
 
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...
Design and Implementation of a Low Noise Amplifier for Ultra Wideband Applica...
 
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...
ElectroencephalographySignalClassification based on Sub-Band Common Spatial P...
 
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design System
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design SystemIC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design System
IC Layout Design of 4-bit Magnitude Comparator using Electric VLSI Design System
 
Design & Implementation of Subthreshold Memory Cell design based on the prima...
Design & Implementation of Subthreshold Memory Cell design based on the prima...Design & Implementation of Subthreshold Memory Cell design based on the prima...
Design & Implementation of Subthreshold Memory Cell design based on the prima...
 
Retinal Macular Edema Detection Using Optical Coherence Tomography Images
Retinal Macular Edema Detection Using Optical Coherence Tomography ImagesRetinal Macular Edema Detection Using Optical Coherence Tomography Images
Retinal Macular Edema Detection Using Optical Coherence Tomography Images
 
Speech Enhancement Using Spectral Flatness Measure Based Spectral Subtraction
Speech Enhancement Using Spectral Flatness Measure Based Spectral SubtractionSpeech Enhancement Using Spectral Flatness Measure Based Spectral Subtraction
Speech Enhancement Using Spectral Flatness Measure Based Spectral Subtraction
 
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...
Adaptive Channel Equalization using Multilayer Perceptron Neural Networks wit...
 
Real Time System Identification of Speech Signal Using Tms320c6713
Real Time System Identification of Speech Signal Using Tms320c6713Real Time System Identification of Speech Signal Using Tms320c6713
Real Time System Identification of Speech Signal Using Tms320c6713
 

Recently uploaded

VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756dollysharma2066
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapRishantSharmaFr
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...tanu pandey
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . pptDineshKumar4165
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Bookingdharasingh5698
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 

Recently uploaded (20)

(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
(INDIRA) Call Girl Bhosari Call Now 8617697112 Bhosari Escorts 24x7
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort ServiceCall Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
Call Girls in Ramesh Nagar Delhi 💯 Call Us 🔝9953056974 🔝 Escort Service
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
FULL ENJOY Call Girls In Mahipalpur Delhi Contact Us 8377877756
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...Bhosari ( Call Girls ) Pune  6297143586  Hot Model With Sexy Bhabi Ready For ...
Bhosari ( Call Girls ) Pune 6297143586 Hot Model With Sexy Bhabi Ready For ...
 
Thermal Engineering Unit - I & II . ppt
Thermal Engineering  Unit - I & II . pptThermal Engineering  Unit - I & II . ppt
Thermal Engineering Unit - I & II . ppt
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Wakad Call Me 7737669865 Budget Friendly No Advance Booking
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 BookingVIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
VIP Call Girls Palanpur 7001035870 Whatsapp Number, 24/07 Booking
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 

Performance Analysis of the Sigmoid and Fibonacci Activation Functions in NGA Architecture for a Generalized Independent Component Analysis

IOSR Journal of VLSI and Signal Processing (IOSR-JVSP)
Volume 7, Issue 2, Ver. I (Mar. - Apr. 2017), PP 26-33
e-ISSN: 2319-4200, p-ISSN: 2319-4197
www.iosrjournals.org
DOI: 10.9790/4200-0702012633

Performance Analysis of the Sigmoid and Fibonacci Activation Functions in NGA Architecture for a Generalized Independent Component Analysis

James Paul CHIBOLE1,*, Heywood Ouma ABSALOMS1,2, Edward Ng'ang'a NDUNGU1,3

1 Department of Telecommunication and Information Engineering, School of Electrical, Electronics and Information Engineering, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya.
2 Department of Electrical Engineering, Faculty of Engineering, University of Nairobi, Nairobi, Kenya.
3 Department of Telecommunication and Information Engineering, School of Electrical, Electronics and Information Engineering, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya.

Abstract: Activation functions transform mixed inputs into their corresponding outputs. In engineering and research, activation functions are commonly used as transfer functions. Artificial neural networks (ANN) are the preferred setting for most studies and comparisons of activation functions. The Sigmoid Activation Function is the most common; its popularity arises from the fact that it is easy to differentiate, it is bounded within the unit interval, and it has mathematical properties that work well with approximation theory. Less common is the Fibonacci Activation Function, which has similar and perhaps better features than the Sigmoid. Algorithms have a broad range of applications, making it plausible to suspect that different problems call for different activation functions.
The aim of this paper is to give a detailed account of the role of activation functions and then analyse the performance of two of them – the Sigmoid and the Fibonacci – in a non-ANN setup using the most basic artificially generated signals. Results show that the Fibonacci activation function performs better with the set of signals applied in the natural gradient algorithm.

Keywords: Fibonacci Activation Function, Sigmoid Activation Function, Independent Component Analysis, Natural Gradient Algorithm.

I. Introduction

The use of adaptive algorithms to solve various problems is predominant because they can adapt their behaviour to reflect the changing characteristics of the modelled system. Karlik and Olgac observe that there has been enormous research into methods aimed at improving the performance of ANN through optimised training methods, network structure and learning parameters, with little work dedicated to activation functions [1]. Adaptive algorithms that do not use an ANN in their implementation have also received little attention. Among activation functions, the sigmoid is seen as the preferred choice because it attains approximation power much better than other established functions studied in the approximation theory of polynomials and splines [2]. In some advanced studies, the use of the hyperbolic tangent sigmoid in the first and second layers and a linear function in the output layer of an ANN was found most effective within a decision algorithm for transmission system fault detection [3]. Despite the likely usefulness of the Fibonacci Activation Function, there is no real application of it in the literature other than in [4], and only on an experimental basis. The Fibonacci activation function has also been picked up in [5] and used in the separation of two speech signals with promising results. A significant observation is that most studies on activation functions use neural networks.
Therefore, a comparison of the Fibonacci activation function (FAF) to the widely preferred alternative, the Sigmoid activation function (SAF), is necessary. A general understanding of activation functions and why they are critical in an algorithm is not well expounded in the literature. What is the performance of the FAF compared to the SAF for separating artificially generated signals, in terms of quality and stability?

II. Role Of Activation Functions
Input signals are often non-linear in nature. However, during the mixing process, such inputs are combined linearly; in fact, the input to the algorithm is simply a linear transformation of the sources:

x = As    (1)
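Equations 1 and 2 can be illustrated with a small numeric sketch (the 2-by-2 matrices and sample values below are hypothetical, chosen only for illustration): when the de-mixing matrix W is exactly the inverse of A, the linear transformation alone recovers the sources.

```python
import numpy as np

# Two hypothetical source signals s (one row each) at five time points.
s = np.array([[0.0, 1.0, -1.0, 0.5, -0.5],
              [1.0, -1.0, 1.0, -1.0, 1.0]])

# A hypothetical invertible mixing matrix A; Equation 1: x = A s.
A = np.array([[0.8, 0.3],
              [0.2, 0.9]])
x = A @ s

# With the exact inverse as de-mixing matrix W, Equation 2: y = W x
# recovers the sources perfectly -- the ideal, usually unattainable case.
W = np.linalg.inv(A)
y = W @ x
print(np.allclose(y, s))  # True
```

In practice A is unknown, so W must be estimated from x alone; that is the problem the rest of the paper addresses.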
where x is the vector of mixed signals, A is the mixing matrix and s is the vector of source signals. To recover the original signals s, a reverse transformation through a de-mixing matrix W is necessary but often not sufficient:

y = Wx    (2)

The linear transformation in Equation 2 is necessary, but its output must still be passed through an activation function even if the inputs are linear. Because most inputs are non-linear, y can effectively resemble s only if the transformation passes through a non-linear activation function; otherwise it is not possible to retain the non-linearity of the input signals. Non-linearity here means that the output cannot be reproduced by combining the inputs linearly. While the input takes values in the range [-∞, ∞], the output controlled by the activation function is bounded within an interval such as [0, 1] or [-1, 1]. Without an activation function, it is not possible to map the wide range of inputs to relatively small output parameters.

One essential characteristic of an activation function is that it must be differentiable within its bounded interval. Differentiation gives the direction in which to adjust the weights: when the derivative value is large, the corresponding weights receive equally large adjustments, since by the rules of calculus a large derivative value indicates that the minimum is still far away. For each iteration, the weights of the algorithm are adjusted in the direction of steepest descent on the surface of the cost function defined by the total error. The error is computed by subtracting the expected values from the observed values; each weight within the matrices of weights is then adjusted according to the calculated error gradient.
In a more general sense, the derivative is taken along the activation function curve, and optimization techniques such as the natural gradient are used to find the minimum of the objective function. At a minimum, the line tangent to the curve has a slope of zero; such a point can be a local or a global minimum of the function. The first derivative also conveys something significant: it tells the algorithm whether it is headed in the direction that leads to the minimum, because the derivative value, that is, the slope, decreases gradually as the minimum is approached. In other words, for a function being minimized, its derivative must be calculated at each step, and the value must be decreasing if the algorithm is following the correct direction. It is for this reason that the activation function must be differentiable within its range, and this reveals its critical role in algorithms. The better the choice of the activation function, the better the algorithm.

2.1. Comparison of Activation Functions
This paper compares the commonly used activation function, the Sigmoid, to one less explored in the literature, the Fibonacci. [4] showed that the Fibonacci activation function outperforms another (unnamed) function having the equation:

φ(y) = (2/3)y³ + (1/3)y⁵    (3)

In fact, the SAF is so common that there is an ANN named after it, the Sigmoid Neurons. Sigmoid neurons are a modified version of perceptrons in which small changes in the bias and weights reflect as small variations in the output [6]. Equation 4 shows the FAF

φ(y) = ((√5 − 1)/2)y³ + ((3 − √5)/2)y⁵    (4)

while the SAF has the equation

φ(y) = 1/(1 + e⁻ʸ)    (5)

and their MATLAB-generated graphs are shown in Figure 1 below.

Figure 1: Sigmoid and Fibonacci Activation Functions
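Equations 3-5 translate directly into code. The sketch below (the function names faf and saf are this sketch's, not the paper's) also checks two properties visible in Figure 1: both functions are monotonically increasing, and the FAF's two coefficients sum to one, so it maps [-1, 1] onto [-1, 1].

```python
import numpy as np

def faf(y):
    # Fibonacci activation function, Equation 4.
    return (np.sqrt(5) - 1) / 2 * y**3 + (3 - np.sqrt(5)) / 2 * y**5

def saf(y):
    # Sigmoid activation function, Equation 5.
    return 1.0 / (1.0 + np.exp(-y))

ys = np.linspace(-1.0, 1.0, 201)
# Both functions are non-decreasing transfer curves.
print(np.all(np.diff(faf(ys)) >= 0), np.all(np.diff(saf(ys)) > 0))
# The FAF coefficients (sqrt(5)-1)/2 and (3-sqrt(5))/2 sum to 1,
# so faf(1) = 1 and faf(-1) = -1.
print(faf(1.0))  # ≈ 1
print(saf(0.0))  # 0.5
```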
An activation function is a cumulative distribution function (cdf), for the reason explained below. Assume there is a random variable s with probability density p_s(s). The cumulative distribution function (cdf) is given as:

F(s) = P(S ≤ s)    (6)

Suppose this is the Gaussian density; then

Figure 2: Density function (a) and its cdf (b)

The height of the function F(s) (Figure 2(b)) is equal to the area under the Gaussian density (Figure 2(a)) up to the point s. Further along the axis (Figure 2(b)), the cdf levels off as the whole area of the density (Figure 2(a)) is accumulated. Equation 7 is another way of writing the cdf of Equation 6:

F(s) = ∫₋∞ˢ p_s(t) dt    (7)

Suppose there is a random variable s and the aim is to model its distribution. Two options exist: the first is to specify the density p_s(s), and the second is to specify the cdf F(s); Equation 7 relates the two. Most literature prefers the cdf option because of the ease of calculation and because the cdf is continuous over the specified space — continuous here meaning differentiable over its range. In signal processing and other applications, the cdf is commonly referred to as the activation function. It is therefore easy to recover the density p_s(s) by taking the cdf and computing its derivative:

p_s(s) = F′(s)    (8)

There is always a step in these algorithms at which the distribution of the random variable s is assumed, by specifying either the density p_s or the cdf. In a real sense, this assumption amounts to choosing an activation function that can model the output of the signals to be separated. The cdf, or activation function, has to be some function increasing from 0 to 1 or over some other specified range.

III.
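The cdf-density relation of Equation 8 can be checked numerically with the Sigmoid itself: the SAF is the cdf of the logistic density, so differentiating it recovers that density. The grid spacing and tolerance below are arbitrary choices for this sketch.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def logistic_density(s):
    # Analytic derivative of the sigmoid cdf (Equation 8):
    # F'(s) = sigmoid(s) * (1 - sigmoid(s)).
    return sigmoid(s) * (1.0 - sigmoid(s))

grid = np.linspace(-6.0, 6.0, 1201)
h = grid[1] - grid[0]

# A finite-difference derivative of the cdf matches the density closely.
numeric = np.gradient(sigmoid(grid), h)
print(np.max(np.abs(numeric - logistic_density(grid))))  # well below 1e-3

# And the cdf indeed increases from (near) 0 to (near) 1.
print(sigmoid(grid[0]), sigmoid(grid[-1]))
```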
Generalized Independent Component Analysis
Assume the data comes from n original sources, such as speakers:

s ∈ ℝⁿ    (9)

where s_j(i) is the signal from speaker j at time i. We observe

x(i) = As(i)    (10)

where x ∈ ℝⁿ. This means that there are n microphones, each of which records some linear combination of what the speakers say:

x_j(i) = Σ_k A_jk s_k(i)    (11)

Equation 11 can be interpreted to mean that the j-th microphone records a linear combination of what all the speakers are saying at time i. The goal is to find the matrix W = A⁻¹ so that

s(i) = Wx(i)    (12)

It is then possible to recover the original sources as the linear combination Wx(i). In this illustration of ICA, s_j(i) ∼ Uniform[−1, 1]: each speaker outputs uniform white noise.
Figure 3: Original source signals s₁ and s₂

Figure 3 shows part of the original sources s₁ and s₂, which is what the two speakers output, because each speaker emits uniform [−1, 1] independent random variables. The microphones, however, record

Figure 4: Microphone observed signals

From Figure 4, it is easy to tell the axes of this parallelogram, and also to note which linear transformation is responsible for transforming Figure 4 back to Figure 3. There are ambiguities in ICA. One of them is that it is not possible to recover the original ordering of the sources. In particular, when the data generated from speaker-1 and speaker-2 is run through ICA, there is a possibility of ending up with the order of the speakers reversed. That corresponds to taking Figure 4 and flipping it along the 45° axis: the reflection either way still lies within the box. There is no way an algorithm can tell which was speaker-1 and which was speaker-2; the ordering is ambiguous. The other source of ambiguity is the sign of the sources: it is not possible to tell whether positive s₁(i) or negative s₁(i) was recovered. In Figure 4, this corresponds to reflecting the figure along the vertical or the horizontal axis, which still yields the box. So it turns out that in this example the algorithm is only guaranteed to recover either positive s(i) or negative s(i). These two — permutation of the speakers and the sign of the sources — are therefore the only ambiguities in this case. It turns out that the reason these are the only ambiguities is that s_j(i) is non-Gaussian, an essential requirement for ICA to work.

Figure 5: Gaussian Signals
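The two ambiguities just described can be confirmed numerically: if W separates the mixtures, then so does PDW for any permutation matrix P and diagonal sign matrix D. The 2-by-2 matrices below are hypothetical values chosen only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.uniform(-1.0, 1.0, (2, 1000))   # two uniform [-1, 1] sources
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])              # a hypothetical mixing matrix
x = A @ s

W = np.linalg.inv(A)                    # an exact separating matrix
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # permutation: swap the speakers
D = np.diag([1.0, -1.0])                # sign flip of the second source

# P D W is an equally valid separating matrix: the recovered signals are
# the same sources, merely reordered and with one sign flipped.
y = (P @ D @ W) @ x
print(np.allclose(y[0], -s[1]), np.allclose(y[1], s[0]))  # True True
```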
Suppose the original sources were Gaussian (Figure 5), with s(i) ∼ 𝒩(0, I). The Gaussian is a spherically symmetric distribution; Figure 5 shows the contours of a Gaussian distribution with identity covariance. If the data is Gaussian, then it is impossible to use ICA.

Mathematically, let s ∈ ℝⁿ with probability density function p_s(s). Then x = As = W⁻¹s, where s = Wx. With this information at hand, the next step is to find the probability density function of x, that is, p_x(x). It is tempting to substitute the above equations and get

p_x(x) = p_s(Wx)    (13)

Equation 13 would be correct for discrete probability mass functions, but it does not hold for continuous density functions. For a continuous density function, the right formula is

p_x(x) = p_s(Wx)|W|    (14)

where the additional |W| in Equation 14 is the determinant of the matrix W. (Shortly this determinant will appear in the NGA in an applied sense, written det W.) As an example to illustrate this, suppose

p_s(s) = 1{0 ≤ s ≤ 1}    (15)

showing that the density of s is the uniform distribution over [0, 1]. Figure 6 illustrates this distribution.

Figure 6: Density distribution of s over the [0, 1] interval

If s is uniform over [0, 1] and x = 2s, then x is uniformly distributed over [0, 2]; here A = 2 and W = 1/2. For x, the density would be

Figure 7: Density distribution of x over the [0, 2] interval

In this case p_x(x) = 1{0 ≤ x ≤ 2} · (1/2), where 1{·} denotes the indicator function.

IV. Natural Gradient Model Architecture
The natural gradient algorithm (NGA) is an improvement on a shortcoming of the stochastic gradient algorithm (SGA), which demands the specifying of the initial condition. In both the SGA and the modified NGA, the aim is to adjust the coefficients of the demixing matrix.
Adjusting W(k) means that the joint probability density function (jPDF) of the separated signal y(k) is made as close as possible to the assumed distribution p̂_y(y) [7] in each iteration of the algorithm. This measure was successfully applied by Cardoso using the Kullback-Leibler divergence [8]:

KL(p_y ∥ p̂_y) = ∫ p_y(y) log( p_y(y) / p̂_y(y) ) dy    (16)

where p_y(y) is the actual distribution and p̂_y(y) is the assumed distribution of the estimated signal vector. In an ideal situation p_y = p̂_y, giving a KL divergence of zero. In fact, Equation 16 can correctly be referred to as the objective function [5]. The tricky part is in choosing p̂_y(y), which makes Equation 16 unreliable in many applications. An instantaneous cost function, which has the same expected value as KL(p_y ∥ p̂_y), is given by

J(W) = −log( |det W| ∏ᵢ₌₁ᵐ p_s(y_i(k)) )    (17)

where det W indicates the determinant of W. The cost function (Equation 17) can be developed further to yield the stochastic gradient descent update

W(k+1) = W(k) + μ(k)[ W⁻ᵀ(k) − φ(y(k)) xᵀ(k) ]    (18)

where φ(y) = [φ(y₁), …, φ(y_m)]ᵀ with φ(y) = −d log p_s(y)/dy, derived from the cdf or, algorithmically, the activation function, and μ(k) is the step size used by the algorithm to move from one iteration to the next. According to [7], Equation 18 is able to perform Blind Signal Separation (BSS) for any simple choice of φ(y). The stochastic gradient descent of Equation 18 is limited in application because of its slow
convergence. A modification of the stochastic gradient descent that removes the effect of an ill-conditioned mixing matrix A is the natural gradient algorithm by Amari [9], or the relative gradient by Cardoso [8]. In the NGA, the initial condition is not taken into consideration because the algorithm targets the overall system matrix C(k). The full derivation of the NGA is found in [7], with its final equation given as:

C(k+1) = C(k) + μ(k)[ I − φ(C(k)s(k)) sᵀ(k) Cᵀ(k) ] C(k)    (19)

From Equation 19 it is evident that the mixing matrix A does not play any role in the separation process, as it is only used in the initial condition C(0) = W(0)A. Therefore, even if the mixing matrix is poorly chosen, it will not affect the quality of the separated signals. The NGA works even better if the inputs s(k) adhere to ICA's independence requirement. Despite the modifications, both the SGA and the NGA require a good choice of the activation function φ(·), as shown in Equation 19.

V. Experiment
Four sub-Gaussian signals were used as inputs to the algorithm:

s₁(t) = sawtooth(t)
s₂(t) = square(t)
s₃(t) = sin(t)
s₄(t) = cos(t)    (20)

The four inputs have a duration of 100 seconds with a sample time of 0.01. The MATLAB waveforms of these signals are shown below.

Figure 8: From top to bottom – sawtooth, square, sine and cosine signals

The inputs are then mixed using a 4-by-4 mixing matrix generated randomly in the range [−1, +1]. The output of this mixture is shown below.

Figure 9: Mixed input signals

The NGA algorithm is now applied to the mixture of Figure 9, with the Fibonacci and then the Sigmoid activation function in turn. The learning rate is set at μ = 0.001 while the initial demixing matrix is set at W(0) = 0.1I.
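The experiment can be sketched as follows. This is an illustrative re-implementation, not the authors' MATLAB code: the update is the standard serial form of the natural gradient, W ← W + μ(I − φ(y)yᵀ)W with y = Wx, the sawtooth and square generators only approximate MATLAB's sawtooth and square, and the random mixing matrix differs from the authors'.

```python
import numpy as np

# Four sub-Gaussian sources (Equation 20), 100 s at a 0.01 s sample time.
t = np.arange(0.0, 100.0, 0.01)
s = np.vstack([
    2.0 * (t / (2 * np.pi) - np.floor(t / (2 * np.pi) + 0.5)),  # sawtooth
    np.sign(np.sin(t)),                                         # square
    np.sin(t),
    np.cos(t),
])

rng = np.random.default_rng(1)
A = rng.uniform(-1.0, 1.0, (4, 4))   # random 4-by-4 mixing matrix in [-1, +1]
x = A @ s

def faf(y):
    # Fibonacci activation function (Equation 4).
    return (np.sqrt(5) - 1) / 2 * y**3 + (3 - np.sqrt(5)) / 2 * y**5

mu = 0.001                            # learning rate
W = 0.1 * np.eye(4)                   # initial de-mixing matrix W(0) = 0.1 I
I4 = np.eye(4)
for k in range(x.shape[1]):           # one natural-gradient step per sample
    y = W @ x[:, k:k + 1]
    W = W + mu * (I4 - faf(y) @ y.T) @ W

y_all = W @ x                         # separated signals after one pass
print(y_all.shape, np.all(np.isfinite(W)))
```

Swapping faf for the sigmoid of Equation 5 gives the SAF variant of the experiment.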
VI. Performance Comparison
The outputs from the two activation functions are shown below.

Figure 10: Separated Signals using FAF

Figure 11: Separated Signals using SAF

It is evident from the two outputs of Figures 10 and 11 that the FAF achieves much better results than the SAF. Further tests on the convergence and stability of the algorithm using the FAF and the SAF were done, with results shown in Figure 12.

Figure 12: Convergence rate of FAF and SAF

At 10000 iterations it is clear that the algorithm converges much faster when the FAF is employed than when the SAF is. It also indicates that the stability of the algorithm is much higher when the FAF is used as opposed to the SAF, as shown by the fewer iterations required to reach convergence. Finally, a test on dynamic error was also done.
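The paper does not state the numeric measure behind the convergence curves of Figure 12. One common choice for quantifying how close the global matrix C(k) = W(k)A is to a perfect separation is the Amari performance index, sketched below; the function name and example matrices are this sketch's, not the paper's.

```python
import numpy as np

def amari_index(C):
    # Amari performance index of a global matrix C = W A. It is zero
    # exactly when C is a scaled permutation matrix, i.e. a perfect
    # separation up to the ordering and sign ambiguities of Section III.
    C = np.abs(np.asarray(C, dtype=float))
    n = C.shape[0]
    row = (C / C.max(axis=1, keepdims=True)).sum(axis=1) - 1.0
    col = (C / C.max(axis=0, keepdims=True)).sum(axis=0) - 1.0
    return (row.sum() + col.sum()) / (2.0 * n * (n - 1))

# A scaled, sign-flipped permutation scores zero: the ICA ambiguities do
# not count as errors. A matrix with heavy cross-talk scores much higher.
perfect = np.array([[0.0, -2.0],
                    [0.5, 0.0]])
poor = np.array([[1.0, 0.9],
                 [0.8, 1.0]])
print(amari_index(perfect))  # 0.0 (perfect separation)
print(amari_index(poor))     # large (poor separation)
```

Tracking this index over the iterations k yields convergence curves of the kind shown in Figure 12.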
Figure 13: Dynamic error

The dynamic error, in which the inputs are compared to their corresponding output signals, reveals that the FAF gives a smaller error than the SAF.

VII. Conclusion
The findings reveal that the performance of algorithms is significantly affected by the choice of the activation function. With more research focused on activation functions, there are possibilities of algorithms performing better on problems that have long been identified with specific activation functions. For example, despite the overwhelming use of the Sigmoid in research and engineering applications, this study has shown that the Fibonacci is an equally promising activation function. Specifically, the findings show that the FAF, when used in the NGA, makes the algorithm more stable and converge much faster than when the SAF is used. The FAF also achieves a smaller error than the SAF.

References
[1]. B. Karlik and V. A. Olgac, "Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks," International Journal of Artificial Intelligence and Expert Systems (IJAE), vol. 1, no. 4, pp. 111-122, 2011.
[2]. B. DasGupta and G. Schnitger, "The Power of Approximating: A Comparison of Activation Functions," Advances in Neural Information Processing Systems, vol. 5, pp. 615-622, 1993.
[3]. N. Suttisinthong, B. Seewirote and A. Ngaopitakkul, "Selection of Proper Activation Functions in Back-propagation Neural Network Algorithm for Single-Circuit Transmission Line," in Proceedings of the International MultiConference of Engineers and Computer Scientists, IMECS, Hong Kong, 2014.
[4]. L. Lei, W. Yu and W. Xing-Hui, "Natural gradient algorithm based on a class of activation functions and its applications in BSS," in Proceedings of the Fifth International Conference on Machine Learning and Cybernetics, Dalian, 2006.
[5]. J. P.
Chibole, "Blind separation of two human speech signals using natural gradient algorithm by employing the assumptions of independent component analysis," in Proceedings of the 2014 International Annual Conference on Sustainable Research and Innovation, Nairobi, 2014.
[6]. M. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015.
[7]. S. C. Douglas, "Blind Signal Separation and Blind Deconvolution," in Handbook of Neural Network Signal Processing, New York, CRC Press LLC, 2002.
[8]. J. F. Cardoso, "Blind signal separation: statistical principles," Proceedings of the IEEE, vol. 86, no. 10, pp. 2009-2025, 1998.
[9]. S. Amari, "Natural gradient works efficiently in learning," Neural Computation, vol. 10, pp. 251-276, 1998.