International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014
DOI: 10.5121/ijcsa.2014.4201
Feed Forward Neural Network for Sine Function with Symmetric Table Addition Method Using LabVIEW and MATLAB Code
Fadhil A. Ali
Department of Electrical and Computer Engineering, Oklahoma State University
202 Engineering South, Stillwater, OK 74078, USA. Tel: 405-714-1084
Abstract
This work proposes a feed-forward neural network with the symmetric table addition method to design the neuron-synapse algorithm for sine function approximation, based on the Taylor series expansion. MATLAB code and LabVIEW are used to build the neural network, which has been designed and trained on a database set to improve its performance; it achieves good global convergence with a small MSE and 97.22% accuracy.
Keywords
Neural networks, Symmetric Table Addition Method, LabVIEW, MATLAB scripts
1. Introduction
1.1 Artificial Neural Network
An artificial neuron (AN) is a model of a biological neuron. Each AN receives signals from the environment or from other ANs and gathers these signals by applying an activation function to their sum. Input signals are excited through positive or negative numerical weights associated with each connection to the AN; the firing of the AN and the strength of the exciting signal are controlled via a function referred to as the activation function. The AN collects all incoming signals and computes a net input signal as a function of the respective weights. The net input is passed to the activation function, which calculates the output signal of the AN. A layered network of such neurons is an artificial neural network (ANN), which may consist of input, hidden, and output layers
[1], as shown in Figure (1).
Figure (1) Neural Network Architecture
The ANN is designed to work similarly to neural tissue in the brain, where many independent neurons (processing units) are interconnected into one large data-processing network. In the ANN,
the “neurons” are represented as mathematical functions, which pass data between themselves in a defined and organized manner. The way the connections between the “neurons” are organized defines the features of the ANN and its capability to perform certain tasks [2].
One of the simplest ANNs is the feed-forward network, which is the type used in this investigation; other types of ANN are therefore not considered here. Such a network consists of one input layer, one or more hidden layers of nodes, and one output layer. The data are transferred in the form of “signals”, that is, values passed between the nodes. Each connection between nodes has a variable associated with it: a weight. Each neuron collects all the input values multiplied by the associated weights and processes the sum with an activation function:
S_{x,i} = f\left( \sum_{j} S_{x-1,j} \, W_{x,j} \right) ………….. (1)
where f is the activation function, S_{x,i} is the output signal of the i-th neuron in the x-th layer, S_{x-1,j} is the output of the j-th neuron in the (x-1)-th layer, and W_{x,j} are the weights connected to neuron S_{x,i}. The ‘response’ is the activation-function output of a given neuron, which is simply named the “neuron function”.
The most common activation functions are:
1. Sigmoid function:
f(x) = \frac{1}{1 + e^{-x/\sigma}} ………………….. (2)
2. Gaussian function:
f(x) = \exp\left( -x^2/\sigma^2 \right) …………………. (3)
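To make Eqs. (1)-(3) concrete, the following minimal MATLAB sketch (our illustration, not code from the paper; the layer sizes, the random weights, and the shape parameter sigma are assumed values) computes one layer's forward pass with either activation:

    sigma    = 1.0;                                 % assumed shape parameter
    sigmoid  = @(x) 1 ./ (1 + exp(-x ./ sigma));    % Eq. (2)
    gaussian = @(x) exp(-x.^2 ./ sigma^2);          % Eq. (3)

    S_prev = [0.2; -0.5; 0.9];      % outputs S_{x-1,j} of three neurons in layer x-1
    W      = randn(4, 3);           % weights W_{x,j} into four neurons of layer x
    S_x    = sigmoid(W * S_prev);   % Eq. (1): weighted sums, then the activation

Replacing sigmoid with gaussian in the last line gives the Gaussian-activated layer instead.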
Any neural network is trained a number of times, which results in stronger neuron weights, but the training should not exceed specific limits: overtraining can cause the network to memorize instead of learn [3].
1.2 Symmetric Table Addition Method
Symmetric Table Addition Methods (STAMs) use two or more parallel table lookups followed by multi-operand addition to approximate elementary functions. These methods take advantage of symmetry and leading sign bits in the table entries to drastically reduce the table size. The method in [5] produces the tables of the symmetric table addition method for approximating a function f(x). The STAM takes the inputs n0, n1, n2, and n3, which correspond to the number of bits in x0, x1, x2, and x3, where x = x0 + x1 + x2 + x3. It also takes as input the number of guard digits, ng, and a variable f, which indicates the function to be implemented. The procedure computes the coefficients produced by this approximation method and reports the maximum and average errors of the approximation.
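As an illustration of how such tables can be generated, the following MATLAB sketch builds a two-table symmetric approximation of sin(x) on [0, 1) using three partitions x0, x1, x2 (the tool in [5] also handles a fourth partition and guard digits; the partition widths and the sample input here are assumptions for illustration, not the paper's parameters):

    n0 = 4; n1 = 4; n2 = 4;                    % assumed partition widths (bits)
    d1 = 2^-(n0+n1);  d2 = 2^-(n0+n1+n2);      % bit weights of the x1 and x2 partitions
    f  = @sin;  fp = @cos;                     % function and its derivative
    delta1 = (2^n1 - 1)/2 * d1;                % midpoint of the x1 range
    delta2 = (2^n2 - 1)/2 * d2;                % midpoint of the x2 range

    a0 = zeros(2^n0, 2^n1);                    % table indexed by (x0, x1)
    a1 = zeros(2^n0, 2^n2);                    % table indexed by (x0, x2)
    for i0 = 0:2^n0-1
        x0 = i0 * 2^-n0;
        for i1 = 0:2^n1-1
            a0(i0+1, i1+1) = f(x0 + i1*d1 + delta2);    % f at the segment midpoint
        end
        for i2 = 0:2^n2-1
            a1(i0+1, i2+1) = fp(x0 + delta1 + delta2) * (i2*d2 - delta2);  % Taylor correction
        end
    end
    % a1 is antisymmetric about delta2; this is the symmetry a hardware
    % implementation exploits to halve the stored table.

    x  = 0.40625;                              % example input
    i0 = floor(x * 2^n0);
    i1 = floor(mod(x, 2^-n0) / d1);
    i2 = floor(mod(x, d1) / d2);
    approx = a0(i0+1, i1+1) + a1(i0+1, i2+1);  % two lookups + one addition
    fprintf('sin(%g) ~ %.6f (exact %.6f)\n', x, approx, sin(x));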
According to Nihal Koc-Sahan et al. [6], in multi-layer perceptron networks the inputs to the sigmoid function and its derivative correspond to sums of weighted values, and x typically has the form
x = b_{m-1} b_{m-2} \dots b_0 . b_{-1} b_{-2} \dots b_{-p} ………….(4)
where x is an n-bit fixed-point number.
STAMs then divide the input operand x into m+1 bit partitions, x0, x1, …, xm. The approximation is based on a two-term Taylor series expansion.
1.3 Decision Tree
The decision tree (DT) is an algorithmic tool that uses a tree-like graph or model of decisions and their possible consequences, including chance-event outcomes. DTs are commonly used in operations research and decision analysis to help identify the strategy most likely to reach a goal. Here, the goal is to create a model that predicts the value of a target variable based on several input variables.
A tree can be “learned” by splitting the source set into subsets based on attribute-value tests. This process is repeated on each derived subset in a recursive manner, known as recursive partitioning. The recursion is complete when the subset at a node has the same value of the target variable for all its members, or when splitting no longer adds value to the predictions.
In data mining, many tree algorithms can be described as combinations of mathematical and computational techniques that aid the description, categorization, and generalization of a given set of data. The data arrive in records of the form:
(x, Y) = (x_1, x_2, x_3, \dots, x_k, Y) ……................…. (5)
Y is the dependent variable: the target variable that must be understood and classified. The vector x is composed of the input variables (x_1, x_2, x_3, \dots, x_k), which are used for this calculation [7, 8].
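As a concrete illustration of one recursive-partitioning step (a generic sketch, not an algorithm from the cited papers), the following MATLAB function scans candidate thresholds for each input variable and keeps the attribute-value test that most reduces the squared error of Y; applying it recursively to each resulting subset, until the subsets are pure or no split helps, grows the tree:

    function [bestFeat, bestThr] = best_split(X, Y)
    % X is k-by-N (one record per column, as in Eq. (5)); Y is 1-by-N.
    bestFeat = 0; bestThr = 0; bestErr = inf;
    for k = 1:size(X, 1)                       % each input variable x_k
        for thr = unique(X(k, :))              % each candidate attribute-value test
            L = X(k, :) <= thr;  R = ~L;
            if ~any(L) || ~any(R), continue; end
            err = sum((Y(L) - mean(Y(L))).^2) ...   % squared error of the left subset
                + sum((Y(R) - mean(Y(R))).^2);      % ... plus that of the right subset
            if err < bestErr
                bestErr = err; bestFeat = k; bestThr = thr;
            end
        end
    end
    end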
2. Proposed Method
A traditional neural network with a sigmoid activation function has the advantage that it can approximate any continuous function, but it can fail by ending up as an excessively large network. The goal of this paper is to design a neuron synapse for a multi-layer network. The proposed method uses the sine function as a building block and combines such blocks, by adapting the STAM parameters, to build the final function. This achieves a good fitting performance on the training data of the present network, and the function is modeled in a tree (DT) formulation, since many papers have proposed and used tree forms of neural networks [9, 10, and 11].
In computer algebra and symbolic computation, a function can be represented in tree form [12]. The tree form of the sine function and its operations lets our neural network build its nodes from the trained data. Each node represents a sine function applied to the sum of the STAM parameters received from its child nodes, and each terminal node represents an input variable.
Using the Taylor series expansion of the sine function, E(x):
E(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots, \quad x_0 = 0 …….....…...........………. (6)
where x is the input argument and w_1, w_2, \dots are parameters (weights) in our network. Let us construct the tree
Y = E(w_2 + E(w_1)) + E(w_2) .......………. (7)
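A minimal MATLAB sketch of Eqs. (6)-(7), with E truncated after the x^11 term as in this work; the weight values w1 and w2 below are assumptions for illustration, not trained values:

    E  = @(x) x - x.^3/factorial(3) + x.^5/factorial(5) - x.^7/factorial(7) ...
            + x.^9/factorial(9) - x.^11/factorial(11);   % truncated Eq. (6)
    w1 = 0.3;  w2 = 0.7;                                 % illustrative weights
    Y  = E(w2 + E(w1)) + E(w2);                          % the tree of Eq. (7)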
The training data set is built incrementally to form the new data set. The algorithm uses the following steps to obtain a forward network [13] (a minimal MATLAB sketch of this loop is shown after the list):
1- Start from a blank network. Initialize all weights to zero.
2- Test the effect of adding a layer on the network performance, as follows:
a- Set up the current network configuration.
b- Add a layer to the current network by selecting the layer and terminating node types.
c- Initialize the node weights for the existing network with STAM values.
d- Apply the optimization algorithm to the new network scheme and find the resulting network performance.
e- Keep the newly added layer.
f- Repeat step 2 to obtain all possible sine-function Taylor series terms.
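The following MATLAB sketch, assuming the Neural Network Toolbox, shows the shape of this constructive loop with the parameter values of Table (1); the layer-growth range is illustrative, and the STAM initialization of step 2c is simplified away (weights are left at the toolbox defaults):

    x = linspace(-pi, pi, 200);
    X = [x; x.^3; x.^5; x.^7; x.^9; x.^11];      % the six odd-power inputs
    T = sin(x);                                  % target function

    best = inf;
    for nLayers = 1:3                            % step 2: test the effect of adding a layer
        net = feedforwardnet(repmat(6, 1, nLayers), 'traingdx');  % tansig hidden layers
        net.trainParam.goal   = 1e-10;           % performance goal (Table (1))
        net.trainParam.epochs = 50;
        net.trainParam.lr     = 0.01;            % learning rate
        net.trainParam.mc     = 0.9;             % momentum factor
        net = train(net, X, T);                  % step 2d: optimize the new scheme
        e = perform(net, T, net(X));             % resulting MSE
        if e < best                              % step 2e: keep the added layer
            best = e;  bestNet = net;
        end
    end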
The database is selected as randomized values over the test interval, in order to make sure that the network learns instead of memorizes. Insufficient training may lead to incorrect results; thus, training must continue until 100% training accuracy is achieved. Figure (2) shows the flowchart for training and testing the database.
Figure (2) Flowchart of the training and testing data
The Taylor series expansion of the sine function is given in terms of odd powers of x; in this work, six input nodes are used, with x, x^3, x^5, x^7, x^9, and x^11. All powers of x therefore appear with alternating zero weights (sin: 0, 1, 0, -1/6, …). The built-up neural network routes each input to one corresponding node and multiplies it by its weight, feeding all possible outputs to the output nodes, so it is capable of representing the 6-term Taylor expansion of a sine function.
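A short MATLAB sketch of this fixed-weight realization (our illustration): the six odd powers of x are the inputs, and the weights are the corresponding nonzero Taylor coefficients of sine:

    w = [1, -1/factorial(3), 1/factorial(5), -1/factorial(7), ...
         1/factorial(9), -1/factorial(11)];      % sine Taylor coefficients
    x = 0.5;                                     % example input
    inputs = [x; x^3; x^5; x^7; x^9; x^11];      % the six input nodes
    approx = w * inputs;                         % 6-term Taylor approximation of sin(x)
    err = abs(approx - sin(x));                  % truncation error, of order x^13/13!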
Inputs              Hidden layer            Outputs
x^1  --------------- H0 --------------------///
x^3  --------------- H1 --------------------///
x^5  --------------- H2 --------------------///
x^7  --------------- H3 --------------------///
x^9  --------------- H4 --------------------///
x^11 --------------- H5 --------------------///
Figure (3) Explicit representation of the presented neural network
3. Symmetric Table Addition Method (STAM) Neuron
In the conventional neural network shown in Figure (3), each single variable weight w_i is replaced by the STAM block diagram shown in Figure (4) [6]; the multi-operand adder is removed so that a STAM neuron is obtained, as shown in Figure (5).
Figure (3) a conventional neural network
Figure (4) STAM block diagram
Figure (5) STAM‘s Neuron
4. LabVIEW Design
In this section, the system architecture is described as a LabVIEW block diagram. It has four processing stages, as seen in Figure (6). The system is initialized when data are collected from datasheets, followed by neural network configuration, drawing the network, resizing the inputs, and finally initiating the nodes. LabVIEW is used as the design platform for the system illustrated in the flowchart steps.
Figure (6) NN LabVIEW System’s flowchart
The processing of the collected data includes the inputs and operands of the STAM output. This work focuses on applying the STAM algorithm to design the neuron synapses of the neural network for a sine function, which is the third stage of the designed system.
A database was created to support the Neural Network System (NN system), comprising the data used for experimental purposes. The database is pre-trained and tested using a MATLAB script function [14], which is integrated with LabVIEW for real-time testing.
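A hypothetical body for such a MATLAB script node in the LabVIEW integration (the variable names x_in and y_out and the file trained_sine_net.mat are our assumptions; the paper does not list its script):

    % x_in arrives from the LabVIEW block diagram; y_out is wired back out.
    load('trained_sine_net.mat', 'bestNet');     % network trained offline in MATLAB
    terms = [x_in; x_in.^3; x_in.^5; x_in.^7; x_in.^9; x_in.^11];
    y_out = bestNet(terms);                      % predicted sin(x_in)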
5. Results & Discussion
This section details the parameter settings for the neural network architecture system. In addition, the resulting training, testing, and real-time data are discussed in some detail.
5.1 Parameters Setting
After many attempts, it was found that the following parameters yield around 100% training accuracy. The error measure used to assess the network performance is the Mean Square Error (MSE) [13]:
\mathrm{MSE} = \frac{1}{J} \sum_{j=1}^{J} \left( y(j) - d(j) \right)^2 ……...............................................................(8)
where y(j) and d(j) are the network output and the desired output for sample j, respectively, and J is the size of the data set. The error measure and the number of weights in the resulting network are given to compare network complexities; a minimal numerical check of Eq. (8) is shown below. Table (1) then shows the main parameters used, together with the training results.
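A direct numerical check of Eq. (8) in MATLAB (the vectors below are illustrative, not the paper's data):

    y = [0.48 0.53 0.61];            % network outputs y(j)
    d = sin([0.50 0.55 0.65]);       % desired outputs d(j)
    J = numel(d);
    MSE = sum((y - d).^2) / J;       % Eq. (8); equivalently mean((y - d).^2)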
Table (1) Details of Training and Testing for ANN

Feed-forward neural network algorithm
  Defined Parameters
    Transfer function:                tan-sig
    Number of hidden neurons:         6
    Performance goal:                 1e-10
    Momentum factor:                  0.9
    Learning rate:                    0.01
    Total number of terms in the Taylor series (sine function): 200
  Training and Testing Parameters
    Training epochs:                  50
    Training time:                    0:00:01
    Training accuracy:                100%
    Testing accuracy:                 97.22%
The data training has been conducted, and tan-sig is chosen as the transfer function for the layers because it converges. The number of hidden neurons has been set to 6. The learning rate and the performance goal are set to low values so that the network learns completely. A momentum factor of 0.9 is added to the network, which allows training to recover even if it falls into a local minimum.
5.2 Training Results
In Figure (7), 100% training accuracy is met within 50 iterations. The figure shows the neural network toolbox, illustrating the transfer function, the iterations, and the performance goal achieved during training. 100 terms out of the 112 inputs, i.e., 89% of the data, were successfully trained, as seen
in the performance and regression plots in Figure (8) and Figure (9), respectively, with 100% training accuracy. The regression plot shows that the data fit the line when training is complete. The training state shown in Figure (10) likewise indicates how close the achieved accuracy percentage is.
Figure (7) NN system toolbox
Figure (8) NN system performance
Figure (9) NN system regression plots
Figure (10) NN system training state plots
5.3 Testing Results
All 112 terms in the database were used for testing. The data were randomized before testing so that the system gives an accurate performance for the sine function (the target). The testing accuracy achieved is 97.22% for the 112 inputs of the system. Reflecting this high rate, Figure (11) shows the sine function as both the exact values and those predicted by the present NN system, while Figure (12) illustrates the fitting curves of the resulting sine function.
Figure (11) sine function plots
Figure (12) Fitting of sine function plot
5.4 Real-Time Testing
Real- time testing has been done for the complete system. The trained network was integrated in
LABVIEW. A GUI panel was designed in LABVIEW indicating the neural network graph in
database as set in Table (1). Random numbers is generated if targeted function data was found not
to be in database. This can be seen from figures (13, and 14) respectively, the frontal system was
designed in LABVIEW for a real- time application with a high accuracy.
Figure (13) configuration frontal of NN system
Figure (14) NN system training frontal
6. Conclusion
This paper proposes a method to create a symmetric table addition method (STAM) neuron. It represents the general behavior of a sine function and indicates the neuron synapses in a neural network design for a real-time application designed in LabVIEW, integrating the neural network from MATLAB code embedded in the system. A minimized MSE, giving the presented neural network accuracy of 97.22%, has been achieved.
7. Acknowledgement
Many thanks to Oklahoma State University and the School of Electrical and Computer Engineering for hosting me as a visiting scholar. My deep appreciation goes to Dr. James E. Stine for his valuable notes. Also, many thanks to SRF/IIE for all their support.
References
[1] Andries P. Engelbrecht, Computational Intelligence: An Introduction, 2nd ed., John Wiley & Sons Ltd, 2007.
[2] D. T. Pham and X. Liu, Neural Networks for Identification, Prediction and Control, Springer-Verlag London Ltd., UK, 1995.
[3] N. F. Naim, A. I. M. Zakaria, and N. A. Wahab, "Classification of Thumbprint using Artificial Neural Network (ANN)", IEEE International Conference on System Engineering and Technology (ICSET), Shah Alam, 2011, pp. 231-234.
[4] A. Reda and B. Aoued, "Artificial Neural Network Based Face Recognition", First International Symposium on Control, Communications and Signal Processing, Algeria, 2004, pp. 439-442.
[5] J. E. Stine and M. J. Schulte, "The Symmetric Table Addition Method for Accurate Function Approximation", Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 21, 1999, pp. 167-177.
[6] Nihal Koc-Sahan, Jason A. Schlessman, and Michael J. Schulte, "Symmetric table addition methods for neural network approximations", Proc. SPIE 4474, Advanced Signal Processing Algorithms, Architectures, and Implementations XI, 2001, pp. 126-133.
[7] G. P. J. Schmitz, C. Aldrich, and F. S. Gouws, "ANN-DT: An Algorithm for Extraction of Decision Trees from Artificial Neural Networks", IEEE Transactions on Neural Networks, vol. 10, no. 6, 1999, pp. 1392-1401.
[8] I. K. Sethi, "Layered Neural Net Design Through Decision Trees", IEEE International Symposium on Circuits and Systems, vol. 2, 1990, pp. 1082-1085.
[9] S. Behnke and N. B. Karayiannis, "Competitive neural trees for pattern classification", IEEE Transactions on Neural Networks, vol. 9, no. 6, 1998, pp. 1352-1369.
[10] A. Sankar and R. J. Mammone, "Growing and pruning neural tree networks", IEEE Transactions on Computers, vol. 42, no. 3, 1993, pp. 291-299.
[11] B. T. Zhang, "A Bayesian evolutionary approach to the design and learning of heterogeneous neural trees", Integrated Computer-Aided Engineering, vol. 9, 2002, pp. 73-86.
[12] J. S. Cohen, Computer Algebra and Symbolic Computation: Elementary Algorithms, Natick: A. K. Peters, Ltd., 2002.
[13] George S. Eskander and Amir F. Atiya, Neural Networks, vol. 22, 2009, pp. 395-404.
[14] Forrester W. Isen, DSP for MATLAB and LabVIEW: Fundamentals of Discrete Signal Processing, Morgan & Claypool Publishers, 2009.
More Related Content

What's hot

Observations
ObservationsObservations
Observationsbutest
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Randa Elanwar
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGRevanth Kumar
 
Survey on Artificial Neural Network Learning Technique Algorithms
Survey on Artificial Neural Network Learning Technique AlgorithmsSurvey on Artificial Neural Network Learning Technique Algorithms
Survey on Artificial Neural Network Learning Technique AlgorithmsIRJET Journal
 
08 neural networks
08 neural networks08 neural networks
08 neural networksankit_ppt
 
Deep learning algorithms
Deep learning algorithmsDeep learning algorithms
Deep learning algorithmsRevanth Kumar
 
Basic Learning Algorithms of ANN
Basic Learning Algorithms of ANNBasic Learning Algorithms of ANN
Basic Learning Algorithms of ANNwaseem khan
 
Learning Methods in a Neural Network
Learning Methods in a Neural NetworkLearning Methods in a Neural Network
Learning Methods in a Neural NetworkSaransh Choudhary
 
Deep neural networks & computational graphs
Deep neural networks & computational graphsDeep neural networks & computational graphs
Deep neural networks & computational graphsRevanth Kumar
 
Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...TELKOMNIKA JOURNAL
 
Artificial Neural Networks
Artificial Neural NetworksArtificial Neural Networks
Artificial Neural NetworksArslan Zulfiqar
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Ahmed Gad
 
Neural network image recognition
Neural network image recognitionNeural network image recognition
Neural network image recognitionOleksii Sekundant
 
An introduction to machine learning for particle physics
An introduction to machine learning for particle physicsAn introduction to machine learning for particle physics
An introduction to machine learning for particle physicsAndrew Lowe
 

What's hot (19)

Observations
ObservationsObservations
Observations
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
 
Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9Introduction to Neural networks (under graduate course) Lecture 7 of 9
Introduction to Neural networks (under graduate course) Lecture 7 of 9
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNING
 
Ann
Ann Ann
Ann
 
Survey on Artificial Neural Network Learning Technique Algorithms
Survey on Artificial Neural Network Learning Technique AlgorithmsSurvey on Artificial Neural Network Learning Technique Algorithms
Survey on Artificial Neural Network Learning Technique Algorithms
 
Perceptron in ANN
Perceptron in ANNPerceptron in ANN
Perceptron in ANN
 
08 neural networks
08 neural networks08 neural networks
08 neural networks
 
Deep learning algorithms
Deep learning algorithmsDeep learning algorithms
Deep learning algorithms
 
Basic Learning Algorithms of ANN
Basic Learning Algorithms of ANNBasic Learning Algorithms of ANN
Basic Learning Algorithms of ANN
 
Learning Methods in a Neural Network
Learning Methods in a Neural NetworkLearning Methods in a Neural Network
Learning Methods in a Neural Network
 
Deep neural networks & computational graphs
Deep neural networks & computational graphsDeep neural networks & computational graphs
Deep neural networks & computational graphs
 
Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...
 
Artificial Neural Networks
Artificial Neural NetworksArtificial Neural Networks
Artificial Neural Networks
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
 
Neural network image recognition
Neural network image recognitionNeural network image recognition
Neural network image recognition
 
An introduction to machine learning for particle physics
An introduction to machine learning for particle physicsAn introduction to machine learning for particle physics
An introduction to machine learning for particle physics
 
04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks04 Multi-layer Feedforward Networks
04 Multi-layer Feedforward Networks
 

Viewers also liked

House Price Estimation as a Function Fitting Problem with using ANN Approach
House Price Estimation as a Function Fitting Problem with using ANN ApproachHouse Price Estimation as a Function Fitting Problem with using ANN Approach
House Price Estimation as a Function Fitting Problem with using ANN ApproachYusuf Uzun
 
Prototyping is an attitude
Prototyping is an attitudePrototyping is an attitude
Prototyping is an attitudeWith Company
 
10 Insightful Quotes On Designing A Better Customer Experience
10 Insightful Quotes On Designing A Better Customer Experience10 Insightful Quotes On Designing A Better Customer Experience
10 Insightful Quotes On Designing A Better Customer ExperienceYuan Wang
 
Learn BEM: CSS Naming Convention
Learn BEM: CSS Naming ConventionLearn BEM: CSS Naming Convention
Learn BEM: CSS Naming ConventionIn a Rocket
 
How to Build a Dynamic Social Media Plan
How to Build a Dynamic Social Media PlanHow to Build a Dynamic Social Media Plan
How to Build a Dynamic Social Media PlanPost Planner
 
SEO: Getting Personal
SEO: Getting PersonalSEO: Getting Personal
SEO: Getting PersonalKirsty Hulse
 
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika AldabaLightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldabaux singapore
 

Viewers also liked (8)

House Price Estimation as a Function Fitting Problem with using ANN Approach
House Price Estimation as a Function Fitting Problem with using ANN ApproachHouse Price Estimation as a Function Fitting Problem with using ANN Approach
House Price Estimation as a Function Fitting Problem with using ANN Approach
 
Prototyping is an attitude
Prototyping is an attitudePrototyping is an attitude
Prototyping is an attitude
 
10 Insightful Quotes On Designing A Better Customer Experience
10 Insightful Quotes On Designing A Better Customer Experience10 Insightful Quotes On Designing A Better Customer Experience
10 Insightful Quotes On Designing A Better Customer Experience
 
Learn BEM: CSS Naming Convention
Learn BEM: CSS Naming ConventionLearn BEM: CSS Naming Convention
Learn BEM: CSS Naming Convention
 
How to Build a Dynamic Social Media Plan
How to Build a Dynamic Social Media PlanHow to Build a Dynamic Social Media Plan
How to Build a Dynamic Social Media Plan
 
SEO: Getting Personal
SEO: Getting PersonalSEO: Getting Personal
SEO: Getting Personal
 
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika AldabaLightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba
Lightning Talk #9: How UX and Data Storytelling Can Shape Policy by Mika Aldaba
 
Succession “Losers”: What Happens to Executives Passed Over for the CEO Job?
Succession “Losers”: What Happens to Executives Passed Over for the CEO Job? Succession “Losers”: What Happens to Executives Passed Over for the CEO Job?
Succession “Losers”: What Happens to Executives Passed Over for the CEO Job?
 

Similar to Feed forward neural network for sine

X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...
X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...
X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...cscpconf
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Cemal Ardil
 
NeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximateProgramsNeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximateProgramsMohid Nabil
 
Towards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprogramsTowards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprogramsParidha Saxena
 
X trepan an extended trepan for
X trepan an extended trepan forX trepan an extended trepan for
X trepan an extended trepan forijaia
 
Efficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesEfficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesIRJET Journal
 
Hybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationHybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationIJCSEA Journal
 
Black-box modeling of nonlinear system using evolutionary neural NARX model
Black-box modeling of nonlinear system using evolutionary neural NARX modelBlack-box modeling of nonlinear system using evolutionary neural NARX model
Black-box modeling of nonlinear system using evolutionary neural NARX modelIJECEIAES
 
Electricity Demand Forecasting Using Fuzzy-Neural Network
Electricity Demand Forecasting Using Fuzzy-Neural NetworkElectricity Demand Forecasting Using Fuzzy-Neural Network
Electricity Demand Forecasting Using Fuzzy-Neural NetworkNaren Chandra Kattla
 
Neural basics
Neural basicsNeural basics
Neural basicscoursesub
 
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...IJERA Editor
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabijcses
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural networkNagarajan
 
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINERANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINERIJCSEA Journal
 
HW2-1_05.doc
HW2-1_05.docHW2-1_05.doc
HW2-1_05.docbutest
 
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...IAEME Publication
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 

Similar to Feed forward neural network for sine (20)

X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...
X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...
X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D...
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
 
N ns 1
N ns 1N ns 1
N ns 1
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
NeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximateProgramsNeuralProcessingofGeneralPurposeApproximatePrograms
NeuralProcessingofGeneralPurposeApproximatePrograms
 
Towards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprogramsTowards neuralprocessingofgeneralpurposeapproximateprograms
Towards neuralprocessingofgeneralpurposeapproximateprograms
 
X trepan an extended trepan for
X trepan an extended trepan forX trepan an extended trepan for
X trepan an extended trepan for
 
Efficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of TrussesEfficiency of Neural Networks Study in the Design of Trusses
Efficiency of Neural Networks Study in the Design of Trusses
 
Hybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for ClassificationHybrid PSO-SA algorithm for training a Neural Network for Classification
Hybrid PSO-SA algorithm for training a Neural Network for Classification
 
Black-box modeling of nonlinear system using evolutionary neural NARX model
Black-box modeling of nonlinear system using evolutionary neural NARX modelBlack-box modeling of nonlinear system using evolutionary neural NARX model
Black-box modeling of nonlinear system using evolutionary neural NARX model
 
Electricity Demand Forecasting Using Fuzzy-Neural Network
Electricity Demand Forecasting Using Fuzzy-Neural NetworkElectricity Demand Forecasting Using Fuzzy-Neural Network
Electricity Demand Forecasting Using Fuzzy-Neural Network
 
Neural basics
Neural basicsNeural basics
Neural basics
 
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ...
 
Neural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlabNeural network based numerical digits recognization using nnt in matlab
Neural network based numerical digits recognization using nnt in matlab
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
 
Introduction Of Artificial neural network
Introduction Of Artificial neural networkIntroduction Of Artificial neural network
Introduction Of Artificial neural network
 
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINERANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER
ANALYSIS AND COMPARISON STUDY OF DATA MINING ALGORITHMS USING RAPIDMINER
 
HW2-1_05.doc
HW2-1_05.docHW2-1_05.doc
HW2-1_05.doc
 
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG...
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 

Recently uploaded

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesSinan KOZAK
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Alan Dix
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticscarlostorres15106
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 

Recently uploaded (20)

08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...Swan(sea) Song – personal research during my six years at Swansea ... and bey...
Swan(sea) Song – personal research during my six years at Swansea ... and bey...
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
Transcript: #StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 

Feed forward neural network for sine

  • 1. International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014 DOI:10.5121/ijcsa.2014.4201 01 Feed Forward Neural Network For Sine Function With Symmetric Table Addition Method Using Labview And Matlab Code Fadhil A. Ali Department of Electrical and Computer Engineering- Oklahoma State University 202 Engineering South Stillwater, OK 74078 USA- Tel:405-714-1084 Abstract This work is proposed the feed forward neural network with symmetric table addition method to design the neuron synapses algorithm of the sine function approximations, and according to the Taylor series expansion. Matlab code and LabVIEW are used to build and create the neural network, which has been designed and trained database set to improve its performance, and gets the best a global convergence with small value of MSE errors and 97.22% accuracy. Keywords Neural networks, Symmetric Table Addition Method, LabVIEW , Matlab scripts 1.Introduction 1.1 Artificial Neural Network An Artificial Neuron (ANN) is a model of biological neuron. Many of ANN receives signals from the environment or other ANNs, and its gathering these signals by using activation function to the signals sum. Input signals are excited through positive or negative numerical weights that associated with each connection to the ANNs, thus firing of the ANN and the strength of the exciting signal are controlled via a function referred to as activation function. The ANNs are collecting all incoming signals and computing a net input signals as a function of the respective weights. The net input renders to the activation function by calculated the output signal of the ANNs, which is called the layered network. ANN may consist of input, hidden and output layers [1], as shown in Figure (1). Figure (1) Neural Network Architecture The ANN is designed to work similar to neural tissue in the brain, where many independent neurons (processing units) are interconnected into one large data-processing network. In the ANN
  • 2. International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014 2 the “neurons” are represented as mathematical functions, which pass the data between themselves in some defined and organized manner. The way of organization of the connections between the “neurons” defines the features of the ANN, and its capability to perform certain tasks [2]. One of the simplest ANN’s is a feed-forward network, which it has worked in this investigation, therefore other types of ANN will not be considered here. Such a network consists of: one input layer, hidden layers (i.e. nodes)-(possibly more than one) and one output layer. The data are transferred in a form of “signals”, it means values which are passed between the nodes. Each connection among the nodes has a special variable associated with it – this is a weight. Each neuron collects all the input values multiplied by the associated weights and processes them with activation function: S , = ∑ S , . W , ………….. (1) where: f – Activation function, Sx,i – output signal of ith neuron in xth layer, Sx-1,i – output of ith neuron in x-1th layer, Wx,i – weights connected to neuron Sx,i. The ‘response’ is an activation function output of a given neuron, that is simply named the “neuron function”. The most activation functions are: 1.Sigmoid function : ( ) = ( σ ) ………………….. (2) 2.Gaussian function : ( ) = exp (− σ )…………………. (3) Definitely, any neural network is training a number of times that will give the results in a stronger weight of neurons but could not exceed the specific limits. This can be causes the network to memorize instead of learning [3]. 1.2 Symmetric Table Additional Method Symmetric Table Addition Methods (STAMs) use two or more parallel table lookups followed by multi operand addition to approximate the elementary functions. These methods stake advantage of symmetry and leading sign bits in the table entries to drastically reduce the size of the table. In [5] the method produces tables for the symmetric table addition method for approximating a function f(x). The inputs n0, n1, n2, and n3, which corresponds to the number of bits in x0, x1, and x2 are taken by STAM , where x = x0 + x1 + x2 + x3. It also takes as input the number of guard digits," ng ", and a variable " f" , which indicates the function to be implement. This procedure computes the coefficients produced by this approximation method, and reports on the maximum and average error of the approximations. According to Nihal Koc-Sahan et al[6], the multi-layer perceptron networks, the inputs to the sigmoid function and its derivate correspond to sums of weighted values. And x typically has the form; x=bm-1bm-2….b0b-1b-2…b-p ………….(4) Where x is n-bit fixed point number.
  • 3. International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014 3 Then STAMs divide the input operand, x, into m+1 bit partitions, x0,x1,…xm. This approximation is based on two term Taylor series expansion. 1.3 Decision Tree The decision tree (DT) is an algorithm tool that uses a tree-like graph or model for taken decisions and their possible consequences, including chance event outcomes. The DTs are commonly used in operations research and decision analysis, to help identification strategy of the most likely to reach a goal. This goal is to create a model that predicts the value of a target variable based on several input variables. The tree also can be "learned" by splitting the source set into sub-sets based on an attribute value tests. This process is repeated on each derived sub-set in a recursive manner , which is recursive partitioning. The recursion process is completed when the sub-set at a node for all that has the same value of the target variable, or when splitting no longer adds value to the predictions. In data mining, many trees algorithms can be described as the combination of mathematical and computational techniques to aid the description, categorization and generalization of a given set of data. Data refers in the specific records of the form: (x, Y)=(x 1, x 2, x 3… x k, Y) ……................…. (5) Y is the dependent variable, which is the target variable that must be understand and classify. Also, a vector x is composed of the input variables ( x1, x2, x3...xk ), those are used for such task of calculations [7, and 8]. 2.Proposed method In traditional neural network with sigmoid activation function has the advantage, it can approximate any continuous function but gets the fail in ending up as large network as possible. The goal in this paper is to design a neuron synapse of multi-layer network. The proposed method uses sine function as building blocks, and combine together by adapting STAMs parameters to build a final function. This has achieve a fitting performance for the training data of the present network, and it has taking a tree formulation (DT) which the function was modeled. Since that many literatures have proposed and used tree form of neural networks [9, 10, and 11]. In computer algebra and symbolic computations a function can be represented as tree representation [12]. Its tree form of sine function and operation gets our neural network of trained data to build the node. The node represents a sine function applied to the sum of STAM’s parameters, and each terminal node represents input variables and gets its child nodes. By using Taylor series expansion of sine function E ( ); E( )= − ! + ! − ! + ⋯, = 0 …….....…...........………. (6) Where is the input argument and are parameters (weights) in our network. Let us construct the tree Y=E( 2+ E ( 1)) + E ( 2) .......………. (7)
  • 4. International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014 4 The training data set has taken an incremental way to perform new data set. The algorithm is used the following steps for getting a forward network [13]; 1- Start from a blank network. Initialize all weights to zero. 2- Testing the effect of adding a layer to the network performance, as follows: a- Set up the current network configuration. b- Add a layer to the current network by selecting the layer and terminating node types. c- The nodes weights for the existing network are initialized by STAM’s values, d- The optimization algorithm is applied to the new network scheme and find out the resulting network performance accordingly. e- Keeping the new added layer. f- Repeat step 2 to get all possible sine function Taylor series terms. The database is selected as randomized values during the test interval, in order to make sure that the network learns instead of memorize. Insufficient training may lead to incorrect results. Thus, training must be done until 100% training accuracy is achieved. Figure (2) shows the flowchart for training and testing the database. . Figure (2) Flowchart of the training and testing data On Taylor series, there are expansions for sin function, which are given in terms of odd powers of x, in this work 6 input nodes have been taken with x,x3 ,x5 ,x7 ,x9 , and x11 . Therefore, all the powers of x with alternating zero weights; (Sin: 0, 1, 0, -1/6...,). Therefore, the built up neural network has got each input to one corresponding node, and multiplied it by its weight to fed entire possible outputs to the output nodes, it would be capable of the 6-terms Taylor's expansion of a sine function.
  • 5. International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014 5 Inputs Hidden layer Outputs x1 --------------- H0 --------------------/// x3 --------------- H1 --------------------/// x5 --------------- H2 --------------------/// x7 --------------- H3 --------------------/// x9 --------------- H4 --------------------/// x11 --------------- H5 --------------------/// Figure (3) explicit representation of the presented neural network 3 .Symmetric table additional method (STAM) - Neuron The conventional neural network shown in Figure (3) has shown the single variable weight i is replaced by STAM block diagram shown in Figure (4) [6], which a multi operand adder has cancelled to get a STAM neuron been established and it is shown in Figure(5). Figure (3) a conventional neural network
Figure (4) STAM block diagram

Figure (5) STAM's neuron

4. LabVIEW Design

In this section the system architecture is described as a LabVIEW block diagram. It has four processing stages, as seen in Figure (6): the system is initialized when data are collected from the datasheets, followed by the neural network configuration, drawing the network, resizing the inputs and finally initiating the nodes. LabVIEW is used as the design platform for the system, as illustrated in the flowchart steps.

Figure (6) NN LabVIEW system flowchart

The processing of the collected data includes the inputs and the operands of the STAM output. This work focuses on applying the STAM algorithm to obtain the design of the neuron synapses of the neural network for the sine function, which is the third stage of the designed system. A database was created to serve the neural network system (NN system), comprising the data used for experimental purposes. The database is pre-trained and tested using a Matlab script function [14], which is integrated with LabVIEW for real-time testing.
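The LabVIEW diagram itself cannot be reproduced in text, but the MATLAB side of this integration can be sketched. Something like the following script body, with hypothetical file and variable names, could sit in a MATLAB script node whose terminals are wired to the front panel:

```matlab
% Hypothetical body of a LabVIEW MATLAB script node: 'xin' is assumed to be
% wired in from the front panel and 'yout' wired back out to an indicator.
load('trained_sine_net.mat', 'net');          % assumed file with the trained net
features = bsxfun(@power, xin(:), 1:2:11);    % same odd-power layout as training
yout = sim(net, features.');                  % Neural Network Toolbox evaluation
```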
5. Results & Discussion

This section details the parameter settings for the neural network architecture and system. In addition, the (training, testing and real-time) data results are discussed in some detail.

5.1 Parameters Setting

After many attempts, it was found that the following parameters yield around 100% training accuracy. The error measure used to assess the network performance is the mean square error (MSE) [13]:

$\mathrm{MSE} = \dfrac{1}{J} \sum_{j=1}^{J} \big( y(j) - d(j) \big)^2$ ……...................(8)

where y(j) and d(j) are the network output and the desired output for sample j, respectively, and J is the size of the data set. The error measure, together with the number of weights in the resulting network, is reported so that network complexities can be compared. Table (1) lists the main parameters used together with the training results.

Table (1) Details of training and testing for the ANN

Feed-forward neural network algorithm — defined parameters:
    Transfer function                                       tan-sig
    Number of hidden neurons                                6
    Performance goal                                        1e-10
    Momentum factor                                         0.9
    Learning rate                                           0.01
    Total number of terms in the Taylor series (sine)       200

Training and testing parameters:
    Training epochs                                         50
    Training time                                           0:00:01
    Training accuracy                                       100%
    Testing accuracy                                        97.22%

The data training has been conducted with tan-sig chosen as the transfer function for the layers, since it converges well. The number of hidden neurons is set to 6. The learning rate and the performance goal are set to low values so that the network learns completely, and a momentum factor of 0.9 is added so that the network can recover and continue training even if it falls into a local minimum.
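The configuration in Table (1) can be reproduced with the MATLAB Neural Network Toolbox along the following lines. This is a sketch rather than the authors' script: the training function traingdx is an assumption, chosen because it is the standard toolbox routine that uses both a learning rate and a momentum factor, and the data sampling repeats the earlier sketch.

```matlab
% Training data: 112 samples of the six odd-power features, target sin(x).
x = (rand(112, 1) * 2 - 1) * pi;           % assumed interval [-pi, pi]
P = bsxfun(@power, x, 1:2:11).';           % 6-by-112 input matrix
T = sin(x).';                              % 1-by-112 target row

% Feed-forward network: 6 hidden tan-sig neurons and a linear output,
% trained with gradient descent plus momentum and adaptive learning rate.
net = newff(minmax(P), [6 1], {'tansig', 'purelin'}, 'traingdx');
net.trainParam.epochs = 50;                % training epochs (Table 1)
net.trainParam.goal   = 1e-10;             % performance goal (Table 1)
net.trainParam.lr     = 0.01;              % learning rate (Table 1)
net.trainParam.mc     = 0.9;               % momentum factor (Table 1)
net = train(net, P, T);

% Mean square error of equation (8).
y = sim(net, P);
mse_val = mean((y - T).^2);
fprintf('MSE = %.3e\n', mse_val);
```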
5.2 Training Results

In Figure (7), 100% training accuracy is met within 50 iterations. The figure shows the neural network toolbox window, illustrating the transfer function, the iterations and the performance goal achieved during training. In total, 100 terms out of the 112 inputs (89% of the data) were successfully trained, as seen in the performance and regression plots in Figures (8) and (9), with 100% training accuracy, respectively. The regression plot shows that the data fit the regression line once training is complete. The training state, shown in Figure (10), indicates the convergence of the accuracy percentage.

Figure (7) NN system toolbox

Figure (8) NN system performance
Figure (9) NN system regression plots

Figure (10) NN system training state plots

5.3 Testing Results

All 112 terms in the database were used for testing. The data were randomized before testing so that the system's performance on the sine function (the target) could be assessed accurately. The testing accuracy achieved is 97.22% over the 112 inputs of the system.
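The paper does not state the exact pass/fail criterion behind this percentage; one plausible reading, sketched below under that assumption, is the fraction of test inputs whose predicted value falls within a small tolerance of the exact sine value ('net' is the network trained in the previous sketch):

```matlab
% Randomized test set with the same odd-power feature layout as training.
x_test = (rand(112, 1) * 2 - 1) * pi;          % assumed interval [-pi, pi]
P_test = bsxfun(@power, x_test, 1:2:11).';     % 6-by-112 test inputs

y_pred = sim(net, P_test);                     % network predictions
y_true = sin(x_test).';                        % exact sine values

tol = 1e-2;                                    % assumed tolerance for "correct"
accuracy = 100 * mean(abs(y_pred - y_true) < tol);
fprintf('Testing accuracy: %.2f%%\n', accuracy);
```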
In line with this high accuracy, Figure (11) shows the sine function both exact and as predicted by the present NN system, while Figure (12) illustrates the fitting curves of the resulting sine function.

Figure (11) Sine function plots

Figure (12) Fitting of the sine function plot

5.4 Real-Time Testing

Real-time testing has been carried out for the complete system. The trained network was integrated into LabVIEW, and a GUI front panel was designed in LabVIEW showing the neural network graph for the database as set up in Table (1). Random numbers are generated whenever the targeted function data are not found in the database. As seen in Figures (13) and (14), the front panel designed in LabVIEW runs the real-time application with high accuracy.
Figure (13) Configuration front panel of the NN system

Figure (14) NN system training front panel

6. Conclusion

This paper proposed a method to create a symmetric table addition method (STAM) neuron. The neuron captures the general behavior of the sine function and realizes the neuron synapses in a neural network designed for a real-time application built in LabVIEW, integrating the neural network from MATLAB code embedded in the system. The MSE was minimized, and a neural network accuracy of 97.22% was achieved.

7. Acknowledgement

Many thanks to Oklahoma State University and the School of Electrical and Computer Engineering for hosting me as a visiting scholar. My deep appreciation goes to Dr. James E. Stine for his valuable notes. Also, many thanks to SRF/IIE for all possible support.

References

[1] Andries P. Engelbrecht, Computational Intelligence: An Introduction, second edition, John Wiley & Sons Ltd, 2007.
[2] D. T. Pham and X. Liu, Neural Networks for Identification, Prediction and Control, Springer-Verlag London Ltd., UK, 1995.
[3] N. F. Naim, A. I. M. Zakaria and N. A. Wahab, Classification of Thumbprint using Artificial Neural Network (ANN), IEEE International Conference on System Engineering and Technology (ICSET), Shah Alam, 2011, pp. 231-234.
[4] A. Reda and B. Aoued, Artificial Neural Network Based Face Recognition, First International Symposium on Control, Communications and Signal Processing, Algeria, 2004, pp. 439-442.
[5] J. E. Stine and M. J. Schulte, The Symmetric Table Addition Method for Accurate Function Approximation, Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology, vol. 21, 1999, pp. 167-177.
[6] Nihal Koc-Sahan, Jason A. Schlessman and Michael J. Schulte, Symmetric Table Addition Methods for Neural Network Approximations, Proc. SPIE 4474, Advanced Signal Processing Algorithms, Architectures, and Implementations XI, 2001, pp. 126-133.
[7] G. P. J. Schmitz, C. Aldrich and F. S. Gouws, ANN-DT: An Algorithm for Extraction of Decision Trees from Artificial Neural Networks, IEEE Transactions on Neural Networks, vol. 10, no. 6, 1999, pp. 1392-1401.
[8] I. K. Sethi, Layered Neural Net Design Through Decision Trees, IEEE International Symposium on Circuits and Systems, vol. 2, 1990, pp. 1082-1085.
[9] S. Behnke and N. B. Karayiannis, Competitive Neural Trees for Pattern Classification, IEEE Transactions on Neural Networks, vol. 9, no. 6, 1998, pp. 1352-1369.
[10] A. Sankar and R. J. Mammone, Growing and Pruning Neural Tree Networks, IEEE Transactions on Computers, vol. 42, no. 3, 1993, pp. 291-299.
[11] B. T. Zhang, A Bayesian Evolutionary Approach to the Design and Learning of Heterogeneous Neural Trees, Integrated Computer-Aided Engineering, vol. 9, 2002, pp. 73-86.
[12] J. S. Cohen, Computer Algebra and Symbolic Computation: Elementary Algorithms, Natick: A. K. Peters, Ltd., 2002.
[13] George S. Eskander and Amir F. Atiya, Symbolic Function Network, Neural Networks, vol. 22, 2009, pp. 395-404.
[14] Forrester W. Isen, DSP for MATLAB and LabVIEW: Fundamentals of Discrete Signal Processing, Morgan & Claypool Publishers, 2009.