Journal of Advanced Computing and Communication Technologies (ISSN: 2347-2804), Volume No. 3, Issue No. 3, June 2015
DESIGN OF AIRFOIL USING BACKPROPAGATION TRAINING WITH MIXED APPROACH
K. Thinakaran, Dr. R. Rajasekar
Sri Venkateswara College of Engineering & Tech., Thiruvallur, Tamil Nadu
Excel Engineering College, Pallakkapalayam, Tamil Nadu
thina85@rediffmail.com, rajasekar080564@yahoo.com
ABSTRACT
The Levenberg-Marquardt backpropagation training method has some limitations associated with overfitting and local optimum problems. Here, we propose a new algorithm to increase the convergence speed of backpropagation learning for airfoil design. The aerodynamic force coefficients corresponding to a series of airfoils are stored in a database along with the airfoil coordinates. A feedforward neural network is created with the aerodynamic coefficients as input to produce the airfoil coordinates as output. In the proposed algorithm, the output layer uses a cost function having linear and nonlinear error terms, while the hidden layer uses the steepest descent cost function. Results indicate that this mixed approach greatly enhances the training of the artificial neural network and may accurately predict the airfoil profile.
Keywords: linear error, nonlinear error, steepest descent method, airfoil design, neural networks, backpropagation, Levenberg-Marquardt.
1. INTRODUCTION
The aim of this paper is to generate airfoil geometry using an inverse design method with high convergence speed and minimal error. Hence, we seek to enhance the inverse design method so that it converges fast with minimal error. There are many inverse techniques in use, for example, hodograph methods for two-dimensional flows [1-3]. The inverse design of backpropagation takes a long time to converge. Some works focused on better cost functions and suitable learning rates and momentum [4-9]. To design a fast algorithm, Abid et al. proposed a new algorithm that minimizes the sum of squares of the linear and nonlinear errors for all outputs [10]. Jeong et al. proposed a learning algorithm based on first- and second-order derivatives of the neural activations at the hidden layers [11]. Han et al. proposed two modified constrained learning algorithms - the First new Learning Algorithm and the Second new Learning Algorithm - to obtain a faster convergence rate. The additional cost terms of the first algorithm are selected based on the first derivatives of the activation functions of the hidden neurons and the second-order derivatives of the activation functions of the output neurons, while the additional cost terms of the second one are selected based on the first-order derivatives of the activation functions of the output neurons and the second-order derivatives of the activation functions of the hidden neurons [12]. A notable feature of the accelerated modified Levenberg-Marquardt method is the line search used for the approximate LM step [16].
The objective of this work is to show that using a cost function having linear and nonlinear terms in the output layer and using the steepest descent cost function [13-15] in the hidden layer trains the network fast. This mixed approach trains the network in such a way that it converges fast with minimal error. One of the issues when designing a particular neural network is to calculate proper weights for the neural activities. These are obtained from the training process of the neural network. The process of obtaining appropriate weights in a neural network design utilizes two sets of equations. First, the feedforward equations are used to calculate the error function. The feedback equations are then used to calculate the gradient vector. This gradient vector is used for defining search directions in order to calculate the weight change. The gradient direction is the direction of steepest descent, but the learning rate may be different in different directions. The linear and nonlinear error terms introduce certain modifications in these directions in order to speed up convergence. Hence, the mixed approach of using a cost function having linear and nonlinear error terms in the output layer and the steepest descent cost function in the hidden layer results in fast convergence. We therefore analyze this mixed approach in the proposed algorithm.
2. ANN TRAINING METHOD
Figure-1 illustrates an airfoil profile which is described by a
set of x- and y-coordinates.
Figure-1. Flow field and airfoil data
The aerodynamic force coefficients corresponding to a series of airfoils are stored in a database along with the airfoil coordinates. A feedforward neural network is created with the aerodynamic coefficients Cl, Cd and the x coordinates as inputs to produce the airfoil y coordinates as outputs.
Here, we consider an ANN with an input layer, a hidden layer and an output layer, as in Figure-2. Each layer has its own
neurons, which interconnect with the neurons in the next layer. The airfoil coefficients and the x coordinates are the neurons in the input layer. We aim to generate the airfoil y coordinates. The inputs and the weights determine the output of each neuron in each layer. The actual output from the output layer is compared against the targeted output. If the error does not meet the threshold value, the weight change is computed based on the outputs and the network is traversed again until the threshold is met. This process is termed training the network. Training generally involves minimization of the error with respect to the training set. Learning algorithms such as the back-propagation algorithm for feedforward multilayer networks [5] help us to find a set of weights by successive improvement from an arbitrary starting point. Once the network is trained with the training data set, it is tested with test data using the feedforward function to verify whether it generates the desired output - the airfoil y coordinates.
Figure-2. ANN architecture used
For our analysis, we considered the sigmoidal activation function below to generate the output:

f(x) = 1 / (1 + e^{-x})                                   (1)
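For reference, a minimal NumPy sketch of this activation and its derivative (the derivative is used later in the weight-update rules); the function names are ours, not the authors':

```python
import numpy as np

def sigmoid(x):
    # Logistic activation f(x) = 1 / (1 + exp(-x)), equation (1).
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(u):
    # Derivative f'(u) = e^{-u} / (1 + e^{-u})^2 = f(u) * (1 - f(u)).
    s = sigmoid(u)
    return s * (1.0 - s)
```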
Abid et al. define the linear output for a neuron j in the output layer L as

u_j^L = Σ_i w_ji y_i^H                                    (2)
and the nonlinear output as

f(u_j^L) = 1 / (1 + e^{-u_j^L})                           (3)
The nonlinear error is given by

e1_j^L = d_j^L - y_j^L                                    (4)

where d_j^L and y_j^L are, respectively, the desired and current output of the jth unit in the Lth layer.
The linear error is given by

e2_j^L = ld_j^L - u_j^L                                   (5)

where ld_j^L is defined as ld_j^L = f^{-1}(d_j^L) (inverting the output-layer activation).
The learning function depends on the instantaneous value of the total error. As per the modified backpropagation of Abid et al., the new cost function used to increase the convergence speed is defined as

E_p = (1/2)(e1_j^L)^2 + (λ/2)(e2_j^L)^2                   (6)

where λ is the weighting coefficient. This new cost term can be interpreted as

E_p = (nonlinear error term) + λ (linear error term)
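As an illustration only (not the authors' code), the output-layer cost of equation (6) could be evaluated for one output neuron as follows, assuming the inverse f^{-1} is taken as the logit of the sigmoid in (1) and that the target lies strictly in (0, 1):

```python
import numpy as np

def mixed_cost(d, u, lam=0.1):
    # d   : desired output in (0, 1)
    # u   : linear output u_j^L = sum_i w_ji * y_i^H, equation (2)
    # lam : weighting coefficient (lambda in equation (6); value assumed here)
    y = 1.0 / (1.0 + np.exp(-u))        # nonlinear output, equation (3)
    e1 = d - y                          # nonlinear error, equation (4)
    ld = np.log(d / (1.0 - d))          # linearised target ld = f^{-1}(d)
    e2 = ld - u                         # linear error, equation (5)
    return 0.5 * e1 ** 2 + 0.5 * lam * e2 ** 2
```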
Learning of the Output Layer
The weight update rule is derived by applying the gradient descent method to E_p. Hence we get the weight update rule for the output layer L as

Δw_ji = -η_L ∂E_p/∂w_ji

where η_L is the network learning parameter.
Hence, differentiating equation (6) with respect to w_ji, we get

Δw_ji = η_L e1_j^L ∂y_j^L/∂w_ji + η_L λ e2_j^L ∂u_j^L/∂w_ji

Δw_ji = η_L e1_j^L (∂y_j^L/∂u_j^L)(∂u_j^L/∂w_ji) + η_L λ e2_j^L y_i^H

Δw_ji = η_L e1_j^L f'(u_j^L) y_i^H + η_L λ e2_j^L y_i^H        (7)
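A vectorised sketch of the update in (7) for a whole output layer is given below; the array shapes, names, and the values of eta and lam are our assumptions, and the linearised target is again taken as the logit of the sigmoid:

```python
import numpy as np

def output_layer_update(W, y_hidden, d, eta=0.1, lam=0.1):
    # W        : (n_out, n_hidden) output-layer weights w_ji
    # y_hidden : (n_hidden,) hidden-layer outputs y_i^H
    # d        : (n_out,) desired outputs in (0, 1)
    u = W @ y_hidden                        # linear outputs, equation (2)
    y = 1.0 / (1.0 + np.exp(-u))            # nonlinear outputs, equation (3)
    e1 = d - y                              # nonlinear errors, equation (4)
    e2 = np.log(d / (1.0 - d)) - u          # linear errors, equation (5)
    fprime = y * (1.0 - y)                  # f'(u) for the sigmoid
    # delta_w_ji = eta * (e1_j * f'(u_j) + lam * e2_j) * y_i^H, equation (7)
    return W + eta * np.outer(e1 * fprime + lam * e2, y_hidden)
```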
Learning of the Hidden Layer Using the Steepest Descent Algorithm
The learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary error function E. Here the error function is

E_k = (1/2)(T_k - O_ok)^2                                 (8)
And the weight change for the hidden layer (ΔV_ij) is
ΔV_ij = -η ∂E_k/∂V_ij

∂E_k/∂V_ij = (∂E_k/∂O_ok)(∂O_ok/∂I_ok)(∂I_ok/∂O_Hi)(∂O_Hi/∂I_Hj)(∂I_Hj/∂V_ij)

By differentiating the error function (8) with respect to O_ok, we get

∂E_k/∂O_ok = -(T_k - O_ok)

To get the term ∂O_ok/∂I_ok, we first calculate the output of the kth output neuron O_ok using the equation below:

O_ok = 1 / (1 + e^{-I_ok}) = (1 + e^{-I_ok})^{-1}

Now, differentiating O_ok with respect to I_ok gives

∂O_ok/∂I_ok = e^{-I_ok} / (1 + e^{-I_ok})^2

∂O_ok/∂I_ok = O_ok (1 - O_ok)

Hence,

(∂E_k/∂O_ok)(∂O_ok/∂I_ok) = -(T_k - O_ok) O_ok (1 - O_ok) = -d_k

∂I_ok/∂O_Hi = W_ik

∂O_Hi/∂I_Hj = O_Hi (1 - O_Hi)

∂I_Hj/∂V_ij = I_ij

∂E_k/∂V_ij = -(d_k W_ik) O_Hi (1 - O_Hi) I_ij

∂E_k/∂V_ij = -d_i I_ij

where

d_i = (d_k W_ik) O_Hi (1 - O_Hi)

Hence the weight change for the hidden layer is

ΔV_ij = η d_i I_ij                                        (9)
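The corresponding hidden-layer step of equation (9) can be sketched in the same style (single hidden layer, sigmoid units throughout; names and shapes are ours, not the authors'):

```python
import numpy as np

def hidden_layer_update(V, W, x, y_hidden, y_out, T, eta=0.1):
    # V        : (n_hidden, n_in) hidden-layer weights V_ij
    # W        : (n_out, n_hidden) output-layer weights W_ik
    # x        : (n_in,) inputs I seen by the hidden layer
    # y_hidden : (n_hidden,) hidden outputs O_Hi
    # y_out    : (n_out,) network outputs O_ok
    # T        : (n_out,) targets T_k
    d_k = (T - y_out) * y_out * (1.0 - y_out)        # d_k = (T_k - O_ok) O_ok (1 - O_ok)
    d_i = (W.T @ d_k) * y_hidden * (1.0 - y_hidden)  # d_i = (sum_k d_k W_ik) O_Hi (1 - O_Hi)
    return V + eta * np.outer(d_i, x)                # delta V_ij = eta * d_i * I_ij, equation (9)
```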
LEVENBERG-MARQUARDT ALGORITHM
The Levenberg-Marquardt (LM) algorithm is one of the fastest and most accurate learning algorithms for small to medium sized networks. The advantage of the LM algorithm decreases as the number of network parameters increases. The LM algorithm can find a solution of a system of non-linear equations, y = φx, by finding the parameters φ that link the variables y to the variables x. The LM update in (10) finds the appropriate change Δφ leading to smaller errors. The LM algorithm depends on the error E, the Hessian matrix H, the gradient of the error J^T E, a scalar µ which controls the trust region, and the identity matrix I:

H = J^T J
∇E = J^T E
Δφ = (H + µI)^{-1} J^T E                                  (10)
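For comparison, one generic LM step following (10) can be sketched as below; J and E would come from a pass over the training data, and the sign convention assumes E holds prediction-minus-target errors (this is a generic illustration, not the implementation used in the comparison):

```python
import numpy as np

def lm_step(phi, J, E, mu):
    # phi : (n_params,) current parameters
    # J   : (n_samples, n_params) Jacobian of the errors w.r.t. the parameters
    # E   : (n_samples,) error vector (prediction - target)
    # mu  : damping scalar controlling the trust region
    H = J.T @ J                                      # Hessian approximation, H = J^T J
    g = J.T @ E                                      # gradient of the error, J^T E
    delta = np.linalg.solve(H + mu * np.eye(H.shape[0]), g)
    return phi - delta                               # phi_new = phi - (H + mu I)^{-1} J^T E
```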
Proposed Mixed Approach Algorithm
In the proposed algorithm, the learning parameters µ and λ are initialized according to the convergence of the problem. Here, the weight change for the output layer is determined using the linear and nonlinear errors, and the hidden layer is trained with the steepest descent method. The mean square error is calculated and compared with a threshold value. Based on this comparison, the network is declared trained. The test data is then passed to the trained network to obtain the desired output.
Step 1: Initialize the learning parameters to some random values.
Step 2: Assign the threshold value to a fixed value based on the sigmoid function.
Step 3: Calculate the linear output using equation (2).
Step 4: Calculate the nonlinear output using the sigmoid function as in equation (3).
Step 5: Calculate the weight change for the output layer using equation (7) and the weight change for the hidden layer using equation (9).
Step 6: Calculate the mean square error using equation (8).
Step 7: Compare the calculated mean square error with the threshold value.
Step 8: If the mean square error is greater than the threshold value, repeat steps 3 to 6.
Step 9: If the mean square error is less than the threshold value, declare that the network is trained.
Step 10: Pass the test data to the trained network to produce the desired airfoil profile.
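As an illustrative sketch only (not the authors' implementation), one possible coding of Steps 1-9 for a single-hidden-layer network, with assumed hyperparameter values, a logit-linearised target for equation (5), and targets scaled strictly into (0, 1), is:

```python
import numpy as np

def train_mixed(X, D, n_hidden=20, eta=0.1, lam=0.1, threshold=1e-4, max_epochs=5000):
    # X : (n_patterns, n_in) inputs (CL, CD and x coordinates)
    # D : (n_patterns, n_out) targets (y coordinates), scaled strictly into (0, 1)
    rng = np.random.default_rng(0)
    V = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))   # hidden-layer weights
    W = rng.normal(scale=0.1, size=(D.shape[1], n_hidden))   # output-layer weights
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for epoch in range(max_epochs):
        mse = 0.0
        for x, d in zip(X, D):
            y_h = sig(V @ x)                     # hidden outputs
            u = W @ y_h                          # linear outputs, equation (2)
            y = sig(u)                           # nonlinear outputs, equation (3)
            e1 = d - y                           # nonlinear error, equation (4)
            e2 = np.log(d / (1.0 - d)) - u       # linear error, equation (5)
            d_k = e1 * y * (1.0 - y)             # output-layer delta
            d_i = (W.T @ d_k) * y_h * (1.0 - y_h)
            W += eta * np.outer(d_k + lam * e2, y_h)   # equation (7)
            V += eta * np.outer(d_i, x)                # equation (9)
            mse += np.mean(e1 ** 2)
        mse /= len(X)                            # mean square error, Steps 6-7
        if mse < threshold:                      # Steps 8-9: converged
            break
    return V, W
```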
3. RESULTS AND DISCUSSION
The Levenberg-Marquardt algorithm is regarded as one of the fastest and most accurate learning algorithms for small to medium sized networks. However, our proposed algorithm is faster than the Levenberg-Marquardt algorithm in an FNN with a 2-2-1 structure addressing the XOR problem. From Table 1, it can be seen that the time taken to converge with the mixed approach on the XOR problem is less than with the Levenberg-Marquardt algorithm. In general, the standard LM algorithm does not perform as well on pattern recognition problems as it does on function approximation problems. The advantage of the LM algorithm decreases as the number of network parameters increases.
TABLE 1 - Performance comparison on the XOR problem

Mixed Approach                    LMBP
epoch    MSE        Time (ms)     epoch    MSE        Time (ms)
0 0.108857 16 1 0.500698 109
480 0.124079 31 2 0.442961 111
840 0.124255 47 4 0.347800 114
1140 0.123867 62 6 0.314990 116
1500 0.106499 78 8 0.269975 119
1860 0.021690 82 10 0.192142 122
2280 0.000039 94 12 0.098765 125
If the Hessian is inaccurate, this can greatly throw off Newton's method. A higher damping factor µ favours gradient descent; a lower µ favours Newton's method. Each element of the Hessian matrix represents a second-order derivative of the output of the neural network, and in the LM method the Hessian is approximated from the computed gradients. Some implementations of the LM algorithm support only a single output neuron, because the LM algorithm has its roots in mathematical function approximation. From the above we identify that the Levenberg-Marquardt algorithm has the drawback that each epoch takes more time to complete; in Table 1 the Levenberg-Marquardt algorithm takes more time to complete each epoch. In the mixed approach, the hidden layer is trained with the steepest descent cost function, while the output layer uses the cost function with linear and nonlinear error terms; this increases the convergence speed of the backpropagation algorithm.
In our investigation of neural network models for the inverse design of airfoil sections, we found that satisfactory results were obtained using the mixed approach of steepest descent together with linear and nonlinear error terms.
In our research, an airfoil is described by 26 x and y coordinates with the corresponding coefficient of lift (CL) and coefficient of drag (CD). We have a database comprising 78 NACA four series airfoils with different combinations of CL, CD and x and y coordinates. Here we consider CL, CD and the x coordinates as the input values to the neurons in the input layer of the ANN. We also consider twenty hidden neurons in the hidden layer. The y coordinates are the output neurons in the output layer of the ANN. We calculate the y coordinates using the proposed algorithm for given x coordinates and CL and CD. The calculated y coordinates should match the y coordinates stored in the database. Hence the main goal is to determine the y coordinates of the airfoil profile for given x coordinates, CL and CD. This is the "inverse" problem of airfoil design.
We trained the network with 60 airfoil sets and then tested it on the remaining 18 sets. We supplied CL, CD and the x coordinates, and our ANN generated the corresponding y coordinates, which match the y coordinates stored in our database, at optimized speed. In Table 2 we give the values of the stored y coordinates, the values of the y coordinates calculated using our algorithm for the pattern NACA1017, and the difference between these values. As a sample, we give 8 of the 26 coordinates for the pattern NACA1017. From this table, we can say that the computed profiles generated during the
test process show good agreement with the actual profiles.
Table 2 - Profile comparison between calculated and stored y coordinates

Y coordinate    Y coordinate calculated using       Difference
in database     proposed algorithm in test phase
0.00350 0.003086 0.000414
0.00993 0.009320 0.000610
0.03173 0.030728 0.001002
0.04468 0.044099 0.000581
0.05721 0.056115 0.001095
0.07540 0.074828 0.000572
0.07835 0.078136 0.000214
0.07593 0.075784 0.000146
The network was trained with 60 patterns to minimize the error. It was then tested with a test set comprising 18 patterns that were not used in the training process. The newly computed airfoil x and y coordinates from the testing process are passed to the XFoil tool. This tool creates the airfoil and generates its corresponding CL and CD. Four such airfoils from the XFoil tool are shown in Figure-3. The CL and CD from the XFoil tool and the CL and CD from our database were compared and found to match. This shows that our ANN is trained with minimal error.
A measure of the accuracy of the results obtained can be inferred from examination of the error, which is defined as

Error = (y_i(actual) - y_i(computed)) / (airfoil thickness ratio) × 100
where y_i(actual) is the actual y coordinate of the section at location i and y_i(computed) is the computed y coordinate. We computed the error using the above formula, taking the y coordinate derived from our mixed approach as y_i(computed) and the y coordinate from the database as y_i(actual), and tabulated it in Table 3, which shows the maximum error in percentage for the airfoils shown in Figure-3. From this table, we can conclude that the mixed approach predicted comparatively correct airfoil profiles.
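A small helper showing how this error measure could be evaluated for one airfoil is sketched below; the function name and the use of the absolute difference when taking the maximum are our assumptions:

```python
import numpy as np

def max_profile_error(y_actual, y_computed, thickness_ratio):
    # Percentage error of each computed y coordinate, normalised by the airfoil thickness ratio.
    err = (np.asarray(y_actual) - np.asarray(y_computed)) / thickness_ratio * 100.0
    return np.abs(err).max()   # maximum error in %, as reported in Table 3
```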
Table 3 - Maximum error for the airfoils shown in Figure-3
Airfoil Maximum error (%)
NACA0012 0.00194
NACA0014 0.00265
NACA2013 0.00113
NACA2017 0.00238
Figure-3. Airfoil profiles from XFoil tool
Figure-4 shows how training decreases the mean square error (MSE) with the epoch. The red dotted line indicates the error of the linear and nonlinear approach. The green line (dash-dot) indicates the error of steepest descent, and the blue line indicates that of the mixed approach. From the figure, it is obvious that the mixed
approach converges quickly compared to the steepest descent and nonlinear approaches. The error is also minimal for the mixed approach compared to the steepest descent and nonlinear approaches.
Figure-4. Convergence comparison
The results obtained from the steepest descent, nonlinear and mixed approaches are compared and tabulated in Table 4. From the table, it is clear that our proposed algorithm converges more quickly than the other two approaches. The table also shows that the mixed approach results in less error and takes less time to predict the airfoil for the given CL and CD. Time is denoted by the number of clock ticks in the table.
Table 4 - Comparison Table

Epoch      Error in            Error in               Mixed Approach
x 1000     steepest descent    nonlinear approach     (Proposed Algorithm)
0 3.005668 3.20248 3.011418
2 0.000261 0.00013 0.000131
4 0.00102 0.001444 0.001125
6 0.002218 0.003933 0.000961
8 0.000116 0.00014 0.000101
10 0.000545 0.001281 0.000041
12 0.001216 0.002752 0.000141
14 0.00007 0.000065 0.000061
16 0.000315 0.00020 0.000021
18 0.000806 0.001336 0.000093
20 0.000064 0.000294 0.000056
22 0.000229 0.000106 0.000018
24 0.000623 0.000926 0.000068
26 0.000067 0.000499 0.000016
28 0.00018 0.000039 Converged & the aim attained
30 0.000518 0.000202 -
32 0.000068 0.000064 -
34 0.000148 0.000024 -
36 0.000446 0.000063 -
38 0.000066 0.000294 -
40 0.000126 0.000034 -
42 0.000391 0.000538 -
44 0.000064 0.000096 -
46 0.000111 0.000172 -
48 0.000347 0.000245 -
50 0.000061 0.000067 -
52 0.000099 0.000151 -
54 0.000312 0.000187 -
56 0.000058 0.000051 -
58 0.00009 0.000138 -
60 0.000282 0.000136 -
62 0.000055 0.000039 -
64 0.000082 0.000134 -
66 0.000257 0.000099 -
68 0.000053 0.000037 -
70 0.000076 0.00012 -
72 0.000235 0.000095 -
74 0.000051 0.000041 -
76 0.000071 0.000098 -
78 0.000216 0.000103 -
80 0.000049 0.000047 -
82 0.000066 0.000056 -
84 0.00020 Converged & the aim attained -
86 0.000047 -
4. CONCLUSIONS
In this paper, we have used an inverse design methodology with artificial neural networks for the design of airfoils. We compared the mixed algorithm with the Levenberg-Marquardt algorithm. The results show that the mixed algorithm is better than the Levenberg-Marquardt algorithm in terms of convergence speed, and that the proposed mixed approach converges at a comparatively high speed with fewer errors. In the proposed algorithm, the output layer uses the cost function having linear and nonlinear error terms, while the hidden layer uses the steepest descent cost function. The results indicate that
artificial neural networks optimally trained using the proposed algorithm may accurately predict the airfoil profile at a faster pace, without oscillation during learning.
Annexure
The surface direction is accurately estimated using partial derivatives. To find the linear error, we take the inverse of f(x), where f(x) is defined as

f(x) = 1 / (1 + e^{-x})

Hence the inverse of f(x) is

f(x)^{-1} = 1 + e^{-x}

To find the accurate direction, we need the partial derivative of f:

f'(u) = e^{-u} / (1 + e^{-u})^2

This f' is used in equation (7), and the weight change is derived using it. The complete results obtained using our mixed approach are tabulated in Table 4.
REFERENCES
[1]. Garabedian, P.R. and Korn, D.G., "Numerical Design of Transonic Airfoils," Numerical Solution of Differential Equations - II, Academic Press, 1971.
[2]. Holst, T.L., Sloof, J.W., Yoshihara, H. and Ballhaus Jr., W.F., "Applied Computational Transonic Aerodynamics," AGARD-AG-266, 1982.
[3]. Sloof, J.W., "Computational Procedures in Transonic Aerodynamic Design," Lecture presented at ITCS Short Course on Computational Methods in Potential Aerodynamics, Amalfi, Italy, 1982 (also NLR MP 82020 U).
[4]. Hertz, J., Krogh, A. and Palmer, R.G., Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Co., California, 1991.
[5]. Riedmiller, M. and Braun, H., "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm," Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, March 28-April 1, 1993.
[6]. Behera, L., Kumar, S. and Patnaik, A., "On adaptive learning rate that guarantees convergence in feedforward networks," IEEE Trans. Neural Networks 17, 1116-1125, (2006).
[7]. Jacobs, R.A., "Increased rates of convergence through learning rate adaptation," Neural Networks 1, 295-307, (1988).
[8]. Magoulas, G.D., Plagianakos, V.P. and Vrahatis, M.N., "Globally convergent algorithm with local learning rate," IEEE Trans. Neural Networks 13, 774-779, (2002).
[9]. Yu, X.H., Chen, G.A. and Cheng, S.X., "Acceleration of backpropagation learning using optimized learning rate and momentum," Electron. Lett. 29, 1288-1289, (1993).
[10]. Abid, S., Fnaiech, F. and Najim, M., "A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm," IEEE Trans. Neural Networks 12, 424-430, (2001).
[11]. Jeong, S.Y. and Lee, S.Y., "Adaptive learning algorithms to incorporate additional functional constraints into neural networks," Neurocomputing 35, 73-90, (2000).
[12]. Han, F., Ling, Q.H. and Huang, D.S., "Modified constrained learning algorithms incorporating additional functional constraints into neural networks," Information Sciences 178, 907-919, (2008).
[13]. Sarkar, D., "Methods to Speed up Error Back-propagation Learning Algorithm," ACM Computing Surveys, Vol. 27, No. 4, December 1995.
[14]. Thiang, Khoswanto, H. and Pangaldus, R., "Artificial Neural Network with Steepest Descent Backpropagation Training Algorithm for Modeling Inverse Kinematics of Manipulator," World Academy of Science, Engineering and Technology 36, 2009.
[15]. Sutton, R.S., "Two problems with backpropagation and other steepest-descent learning procedures for networks," GTE Laboratories.
[16]. Fan, J., "Accelerating the modified Levenberg-Marquardt method for nonlinear equations," Math. Comp., 1173-1187, (2014).
ABOUT THE AUTHORS
K. Thinakaran received the M.E. degree in computer science from Mahendra Engineering College, affiliated to Anna University, Coimbatore, Tamil Nadu, in 2009. He is currently working toward the Ph.D. degree at Anna University. He is currently an Associate Professor in Computer Science Engineering at Sri Venkateswara College of Engineering & Technology, Thiruvallur, India. His current research interests include neural networks and data mining.
Dr. R. Rajasekar received his doctorate from the Department of Aeronautics, Imperial College, London, UK. His aeronautical master's degree was from IIT Madras. He is currently working as the Professor and Head of the Aeronautical Engineering Department, Excel Engineering College, Pallakkapalayam, Tamil Nadu (an affiliated college under Anna University, Chennai). His specialization and research interests are aerodynamics and its applications.
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 

Recently uploaded (20)

Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 

Design of airfoil using backpropagation training with mixed approach

First, the feedforward equation is used to calculate the error function.
The feedback equation is next used to calculate the gradient vector. This gradient vector is used for defining search directions in order to calculate the weight change. The gradient direction is the direction of steepest descent, but the learning rate may be different in different directions. The linear and nonlinear error terms introduce certain modifications in these directions in order to speed up the convergence. Hence the mixed approach of using a cost function having linear and nonlinear error terms in the output layer and using the cost function of steepest descent in the hidden layer results in fast convergence. So, we are going to analyze this mixed approach in the proposed algorithm.

2. ANN TRAINING METHOD

Figure-1 illustrates an airfoil profile which is described by a set of x- and y-coordinates.

Figure-1. Flow field and airfoil data

The aerodynamic force coefficients corresponding to a series of airfoils are stored in a database along with the airfoil coordinates. A feedforward neural network is created with the inputs - the aerodynamic coefficients Cl, Cd and the x coordinates - to produce the outputs - the airfoil y coordinates. Here, we considered an ANN with an input layer, a hidden layer and an output layer as in Figure-2. Each layer has its own neurons which interconnect with neurons in the consecutive layer.
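As an illustration of this architecture, the short Python sketch below builds such a three-layer feedforward network and runs one forward pass. The layer sizes (Cl, Cd and 26 x-coordinates as inputs, twenty hidden neurons, 26 y-coordinates as outputs) are taken from the description later in the paper; the weight ranges, random seed and function names are our own assumptions for illustration, not the implementation used in this work.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Layer sizes as described later in the paper: Cl, Cd and 26 x-coordinates as
# inputs, 20 hidden neurons, 26 y-coordinates as outputs.
N_IN, N_HID, N_OUT = 2 + 26, 20, 26

V = rng.uniform(-0.5, 0.5, (N_HID, N_IN))   # input -> hidden weights (V_ij in the text)
W = rng.uniform(-0.5, 0.5, (N_OUT, N_HID))  # hidden -> output weights (w_ji in the text)

def feedforward(pattern):
    """Forward pass: pattern = [Cl, Cd, x1..x26] -> predicted y-coordinates."""
    o_hidden = sigmoid(V @ pattern)          # hidden-layer outputs y^H
    u_out = W @ o_hidden                     # linear outputs u^L of the output layer
    return o_hidden, u_out, sigmoid(u_out)   # nonlinear outputs f(u^L)

pattern = rng.uniform(0.0, 1.0, N_IN)        # one arbitrary input pattern
_, _, y_pred = feedforward(pattern)
print(y_pred.shape)                          # (26,)
```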
The airfoil coefficients and the x coordinates are considered as neurons in the input layer. We aim to generate the airfoil y coordinates. The inputs and the weights determine the output of each neuron in each layer. The actual output from the output layer is compared against the targeted output. If it does not meet the threshold value, then we need to find the weight change based on the outputs, and the network traversal happens again until the threshold is met. This process is termed training the network. Training generally involves minimization of the error with respect to the training set. Learning algorithms such as the back-propagation algorithm for feed-forward multilayer networks [5] help us to find a set of weights by successive improvement from an arbitrary starting point. Once the network is trained with the training data set, it is tested with test data, using the feedforward function, to verify whether it generates the desired output - the airfoil y coordinates.

Figure-2. ANN architecture used

For our analysis, we considered the sigmoidal activation function below to generate the output:

f(x) = \frac{1}{1 + e^{-x}}    (1)

Abid et al. define the linear output for a neuron j at output layer L as

u_j^L = \sum_i w_{ji} y_i^H    (2)

and the nonlinear output as

f(u_j^L) = \frac{1}{1 + e^{-u_j^L}}    (3)

The nonlinear error is given by

e_{1j}^L = d_j^L - y_j^L    (4)

where d_j^L and y_j^L are respectively the desired and current output for the jth unit in the Lth layer. The linear error is given by

e_{2j}^L = ld_j^L - u_j^L    (5)

where ld_j^L = f^{-1}(d_j^L) (inverting the output-layer activation).

The learning function depends on the instantaneous value of the total error. As per the modified backpropagation of Abid et al., the new cost term to increase the convergence speed is defined as

E_p = \frac{1}{2}(e_{1j}^L)^2 + \frac{\lambda}{2}(e_{2j}^L)^2    (6)

where \lambda is a weighting coefficient. This new cost term can be interpreted as

E_p = (nonlinear error + linear error)

Learning of the Output Layer

The weight update rule is derived by applying the gradient descent method to E_p. Hence we get the weight update rule for output layer L as

\Delta w_{ji} = -\eta_L \frac{\partial E_p}{\partial w_{ji}}

where \eta_L is the network learning parameter. Differentiating equation (6) with respect to w_{ji}, we get

\Delta w_{ji} = \eta_L \left[ e_{1j}^L \frac{\partial y_j^L}{\partial w_{ji}} + \lambda e_{2j}^L \frac{\partial u_j^L}{\partial w_{ji}} \right]
= \eta_L \left[ e_{1j}^L \frac{\partial y_j^L}{\partial u_j^L} \frac{\partial u_j^L}{\partial w_{ji}} + \lambda e_{2j}^L y_i^H \right]
= \eta_L \left[ e_{1j}^L f'(u_j^L) y_i^H + \lambda e_{2j}^L y_i^H \right]    (7)
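To make the output-layer rule concrete, the sketch below computes the linear and nonlinear errors of equations (4)-(5), the cost of equation (6) and the weight change of equation (7) for one training pattern. It is an illustrative reconstruction, not the authors' code: the logit is taken as the inverse of the sigmoid for the desired linear output, the cost is summed over the output units, and eta_L and lam are assumed values of the learning parameter and weighting coefficient.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(u):
    # f'(u) = e^{-u} / (1 + e^{-u})^2, as noted in the Annexure
    s = sigmoid(u)
    return s * (1.0 - s)

def logit(d, eps=1e-7):
    # f^{-1}(d): desired linear output ld used in the linear error of eq. (5)
    d = np.clip(d, eps, 1.0 - eps)
    return np.log(d / (1.0 - d))

def output_layer_update(W, y_hidden, d, eta_L=0.1, lam=0.5):
    """One output-layer weight change following eqs. (2)-(7)."""
    u = W @ y_hidden                 # linear outputs, eq. (2)
    y = sigmoid(u)                   # nonlinear outputs, eq. (3)
    e1 = d - y                       # nonlinear error, eq. (4)
    e2 = logit(d) - u                # linear error, eq. (5)
    Ep = 0.5 * np.sum(e1 ** 2) + 0.5 * lam * np.sum(e2 ** 2)   # cost, eq. (6)
    # delta_w[j, i] = eta_L * (e1_j * f'(u_j) + lam * e2_j) * y_hidden_i, eq. (7)
    delta_W = eta_L * np.outer(e1 * sigmoid_deriv(u) + lam * e2, y_hidden)
    return delta_W, Ep

# Example with arbitrary sizes and data
rng = np.random.default_rng(1)
W = rng.uniform(-0.5, 0.5, (26, 20))
dW, Ep = output_layer_update(W, rng.random(20), rng.random(26))
print(Ep, dW.shape)
```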
Learning of the Hidden Layer Using the Steepest Descent Algorithm

The learning algorithm is the repeated application of the chain rule to compute the influence of each weight in the network with respect to an arbitrary error function E. Here the error function is

E_k = \frac{1}{2}(T_k - O_{ok})^2    (8)

and the weight change for the hidden layer (\Delta V_{ij}) is

\Delta V_{ij} = -\eta \frac{\partial E_k}{\partial V_{ij}}

\frac{\partial E_k}{\partial V_{ij}} = \frac{\partial E_k}{\partial O_{ok}} \frac{\partial O_{ok}}{\partial I_{ok}} \frac{\partial I_{ok}}{\partial O_{Hi}} \frac{\partial O_{Hi}}{\partial I_{Hi}} \frac{\partial I_{Hi}}{\partial V_{ij}}

By differentiating the error function (8) with respect to O_{ok}, we get

\frac{\partial E_k}{\partial O_{ok}} = -(T_k - O_{ok})

To get the term \frac{\partial O_{ok}}{\partial I_{ok}}, we first write the output of the kth output neuron,

O_{ok} = \frac{1}{1 + e^{-I_{ok}}}, \qquad O_{ok}(1 + e^{-I_{ok}}) = 1

Differentiating O_{ok} with respect to I_{ok} gives

\frac{\partial O_{ok}}{\partial I_{ok}} = \frac{e^{-I_{ok}}}{(1 + e^{-I_{ok}})^2} = O_{ok}(1 - O_{ok})

Hence

\frac{\partial E_k}{\partial O_{ok}} \frac{\partial O_{ok}}{\partial I_{ok}} = -(T_k - O_{ok}) O_{ok}(1 - O_{ok}) = -d_k

\frac{\partial I_{ok}}{\partial O_{Hi}} = W_{ik}, \qquad \frac{\partial O_{Hi}}{\partial I_{Hi}} = O_{Hi}(1 - O_{Hi}), \qquad \frac{\partial I_{Hi}}{\partial V_{ij}} = I_{ij}

\frac{\partial E_k}{\partial V_{ij}} = -d_k W_{ik} O_{Hi}(1 - O_{Hi}) I_{ij} = -d_i I_{ij}

where d_i = d_k W_{ik} O_{Hi}(1 - O_{Hi}). Hence the weight change for the hidden layer is

\Delta V_{ij} = \eta \, d_i I_{ij}    (9)

LEVENBERG-MARQUARDT ALGORITHM

The Levenberg-Marquardt algorithm is one of the fastest and most accurate learning algorithms for small to medium sized networks, but its advantage decreases as the number of network parameters increases. The Levenberg-Marquardt (LM) algorithm can find a solution of a system of non-linear equations, y = \varphi x, by finding the parameters \varphi that link the variables y to the variables x. The LM algorithm in (10) finds the appropriate change \Delta\varphi leading to smaller errors. The LM algorithm depends on the error E, the Hessian approximation H, the gradient of the error J^T E, a scalar \mu which controls the trust region, and the identity matrix I:

H = J^T J, \qquad \nabla E = J^T E, \qquad \Delta\varphi = (H + \mu I)^{-1} J^T E    (10)
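The two helper functions below sketch, under the same assumptions as the earlier snippets, the hidden-layer steepest-descent change of equation (9) and a generic Levenberg-Marquardt step in the spirit of equation (10); J, e and mu stand for the Jacobian, the error vector and the damping scalar, and the function names are illustrative rather than the authors' implementation.

```python
import numpy as np

def hidden_layer_update(W, x, o_hidden, o_out, target, eta=0.1):
    """Steepest-descent change for the input -> hidden weights V, eq. (9).
    W: hidden -> output weights, x: input pattern,
    o_hidden: hidden outputs, o_out: network outputs."""
    d_k = (target - o_out) * o_out * (1.0 - o_out)     # output deltas, -dE/dI_ok
    d_i = (W.T @ d_k) * o_hidden * (1.0 - o_hidden)    # hidden deltas d_i in the text
    return eta * np.outer(d_i, x)                      # delta V_ij = eta * d_i * I_ij

def lm_step(J, e, mu):
    """Generic Levenberg-Marquardt parameter change, eq. (10)."""
    H = J.T @ J                                        # Gauss-Newton Hessian approximation
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), J.T @ e)
```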
Proposed Mixed Approach Algorithm

In the proposed algorithm the learning parameters \mu and \eta are initialized according to the convergence of the problem. Here, the change of weight for the output layer is determined using the linear and nonlinear errors, and the hidden layer is trained with the steepest descent method. The mean square error is calculated and compared with a threshold value; based on this comparison, the network is declared trained. The test data is then passed to the trained network to get the desired output.

Step 1: Initialize the learning parameters to some random values.
Step 2: Assign the threshold value to a fixed value based on the sigmoid function.
Step 3: Calculate the linear output using equation (2).
Step 4: Calculate the nonlinear output using the sigmoid function as in equation (3).
Step 5: Calculate the weight change for the output layer using equation (7) and the weight change for the hidden layer using equation (9).
Step 6: Calculate the mean square error using equation (8).
Step 7: Compare the calculated mean square error with the threshold value.
Step 8: If the mean square error is greater than the threshold value, repeat steps 3 to 6.
Step 9: If the mean square error is less than the threshold value, declare that the network is trained.
Step 10: Pass the test data to the trained network to produce the desired airfoil profile.

3. RESULTS AND DISCUSSION

The Levenberg-Marquardt algorithm is said to be one of the fastest and most accurate learning algorithms for small to medium sized networks. However, our proposed algorithm is faster than the Levenberg-Marquardt algorithm in an FNN with the structure 2-2-1 used to address the XOR problem. From Table 1 it can be seen that the time taken to converge with the mixed approach on the XOR problem is less than that of the Levenberg-Marquardt algorithm. In general, the standard LM algorithm does not perform as well on pattern recognition problems as it does on function approximation problems, and its advantage decreases as the number of network parameters increases.

Table 1 - Performance comparison on the XOR problem

    Mixed Approach                       LMBP
    Epoch    MSE        Time (ms)       Epoch    MSE        Time (ms)
    0        0.108857   16              1        0.500698   109
    480      0.124079   31              2        0.442961   111
    840      0.124255   47              4        0.347800   114
    1140     0.123867   62              6        0.314990   116
    1500     0.106499   78              8        0.269975   119
    1860     0.021690   82              10       0.192142   122
    2280     0.000039   94              12       0.098765   125

If the Hessian is inaccurate, this can greatly throw off Newton's method. A higher damping factor \mu favors gradient descent, while a lower \mu favors Newton's method. Each cell in the Hessian matrix represents a second order derivative of the network output, and the Hessian is calculated from the gradients. Some implementations of the LM algorithm support only a single output neuron, because the LM algorithm has its roots in mathematical function approximation. From the above we identify that the Levenberg-Marquardt algorithm has the drawback that each epoch takes more time to complete its computation; in Table 1 the Levenberg-Marquardt algorithm takes more time to complete each epoch. In the mixed approach, the output layer is trained with the cost function containing the linear and nonlinear error terms to increase the speed of convergence, while the hidden layer is trained with the steepest descent cost function; this increases the convergence speed of the backpropagation algorithm.

In our investigation of neural network models for inverse design of airfoil sections, we found that satisfactory results were obtained by using the mixed approach of steepest descent and linear and nonlinear error terms.
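Reading Steps 1-10 end to end, the mixed-approach epoch loop can be sketched as below. This is a compact illustration under the assumptions of the earlier snippets (sigmoid activations, the logit as the inverse activation, arbitrary learning parameters and toy data), not the code used to produce the results in this section.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(d, eps=1e-7):
    d = np.clip(d, eps, 1.0 - eps)                        # keep the inverse activation finite
    return np.log(d / (1.0 - d))

def train_mixed(X, T, n_hidden=20, eta_L=0.1, eta=0.1, lam=0.5,
                threshold=1e-4, max_epochs=10_000, seed=0):
    """Mixed-approach training loop following Steps 1-10.
    X: (patterns, inputs) rows of [Cl, Cd, x-coords]; T: target y-coordinates."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-0.5, 0.5, (n_hidden, X.shape[1]))    # input -> hidden weights
    W = rng.uniform(-0.5, 0.5, (T.shape[1], n_hidden))    # hidden -> output weights
    for epoch in range(max_epochs):
        mse = 0.0
        for x, t in zip(X, T):
            o_h = sigmoid(V @ x)                           # hidden outputs
            u = W @ o_h                                    # linear outputs, eq. (2)
            y = sigmoid(u)                                 # nonlinear outputs, eq. (3)
            e1, e2 = t - y, logit(t) - u                   # errors, eqs. (4)-(5)
            d_k = e1 * y * (1.0 - y)                       # e1 * f'(u)
            d_i = (W.T @ d_k) * o_h * (1.0 - o_h)          # hidden deltas
            W += eta_L * np.outer(d_k + lam * e2, o_h)     # output layer, eq. (7)
            V += eta * np.outer(d_i, x)                    # hidden layer, eq. (9)
            mse += 0.5 * np.sum(e1 ** 2)                   # error function, eq. (8)
        if mse / len(X) < threshold:                       # Steps 7-9: threshold test
            break
    return V, W, epoch

# Toy usage; in the paper, X and T come from the airfoil database.
rng = np.random.default_rng(1)
X, T = rng.random((4, 28)), rng.random((4, 26))
V, W, epochs_used = train_mixed(X, T, max_epochs=200)
```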
In our research, an airfoil is drawn with 26 x and y coordinates together with the corresponding coefficient of lift (CL) and coefficient of drag (CD). We have a database comprising 78 NACA four-series airfoils with different combinations of CL, CD and x and y coordinates. Here we consider CL, CD and the x coordinates as input values to the neurons in the input layer of the ANN, and we considered twenty hidden neurons in the hidden layer. The y coordinates are considered as output neurons in the output layer of the ANN. We calculate the y coordinates using the proposed algorithm for given x coordinates and CL & CD. The calculated y coordinates should match the y coordinates stored in the database. Hence the main goal is to determine the y coordinates of the airfoil profile for given x coordinates and CL & CD. This is the "inverse" problem for airfoil design.

We trained the network with 60 airfoil sets. Then we tested the network on the remaining 18 sets. We gave CL, CD and the x coordinates, and our ANN generated the corresponding y coordinates, which match the y coordinates stored in our database, at optimized speed.
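A minimal sketch of this 60/18 split and of the test-phase feedforward pass is given below; the arrays X and Y are random stand-ins for the airfoil database, and the weights V and W stand in for weights produced by a training loop such as the one sketched earlier.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
X = rng.random((78, 28))                      # stand-in rows of [CL, CD, 26 x-coords]
Y = rng.random((78, 26))                      # stand-in rows of 26 y-coords

order = rng.permutation(len(X))
train_idx, test_idx = order[:60], order[60:]  # 60 training airfoils, 18 test airfoils

V = rng.uniform(-0.5, 0.5, (20, X.shape[1]))  # input -> hidden (stand-in for trained weights)
W = rng.uniform(-0.5, 0.5, (Y.shape[1], 20))  # hidden -> output (stand-in for trained weights)

def predict(x):
    """Test-phase feedforward pass (Step 10): CL, CD and x-coordinates in, y-coordinates out."""
    return sigmoid(W @ sigmoid(V @ x))

y_pred = np.array([predict(x) for x in X[test_idx]])
print(np.abs(y_pred - Y[test_idx]).max())     # worst coordinate deviation over the test set
```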
In Table 2 we give the values of the stored y coordinates, the values of the y coordinates calculated using our algorithm for the pattern NACA1017, and the difference between these values. As a sample, we give 8 coordinates out of the 26 coordinates of the pattern NACA1017. From this table we can say that the computed profiles generated during the test process show good agreement with the actual profiles.

Table 2 - Profile comparison between calculated and stored y coordinates

    Y coordinate in database    Y coordinate calculated in test phase    Difference
    0.00350                     0.003086                                 0.000414
    0.00993                     0.009320                                 0.000610
    0.03173                     0.030728                                 0.001002
    0.04468                     0.044099                                 0.000581
    0.05721                     0.056115                                 0.001095
    0.07540                     0.074828                                 0.000572
    0.07835                     0.078136                                 0.000214
    0.07593                     0.075784                                 0.000146

The network was trained with 60 patterns to minimize the error. The network was then tested with a test set comprising 18 patterns which were not used in the training process. The computed airfoil x and y coordinates from the testing process are passed to the XFoil tool. This tool creates the airfoil and generates its corresponding CL and CD. Four such airfoils from the XFoil tool are shown in Figure-3. The CL and CD from the XFoil tool and the CL and CD from our database are compared and found to match. This shows that our ANN is trained with minimal error.

A measure of the accuracy of the results can be inferred from examination of the error, which is defined as

Error = \frac{|y_{i(actual)} - y_{i(computed)}|}{airfoil\ thickness\ ratio} \times 100

where y_{i(actual)} is the actual y-coordinate of the section at location i and y_{i(computed)} is the computed y-coordinate. We computed the error using the above formula, taking the y coordinate derived from our mixed approach as y_{i(computed)} and the y coordinate from the database as y_{i(actual)}, and tabulated it in Table-3, which shows the maximum error in percentage for the airfoils shown in Figure-3. From this table we can conclude that the mixed approach predicted comparatively correct airfoil profiles.

Table 3 - Maximum error for the airfoils shown in Figure-3

    Airfoil      Maximum error (%)
    NACA0012     0.00194
    NACA0014     0.00265
    NACA2013     0.00113
    NACA2017     0.00238
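As a worked illustration of this error measure, the helper below computes the maximum percentage error for one airfoil; the coordinates and the assumed 17% thickness ratio are placeholders rather than the paper's data.

```python
import numpy as np

def max_percent_error(y_actual, y_computed, thickness_ratio):
    """Maximum error: |y_actual - y_computed| / (airfoil thickness ratio) * 100."""
    diff = np.abs(np.asarray(y_actual) - np.asarray(y_computed))
    return float(np.max(diff) / thickness_ratio * 100.0)

# Placeholder example: the first rows of Table 2 with an assumed 17% thickness ratio.
y_db = [0.00350, 0.00993, 0.03173, 0.04468]
y_ann = [0.003086, 0.009320, 0.030728, 0.044099]
print(max_percent_error(y_db, y_ann, thickness_ratio=0.17))
```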
Figure-3. Airfoil profiles from the XFoil tool (NACA0012, NACA0014, NACA2013 and NACA2017)

Figure-4 shows how training decreases the mean square error (MSE) with the epoch. The red dotted line indicates the error of the linear and nonlinear approach, the green line (dash-dot) indicates the error of steepest descent, and the blue line indicates that of the mixed approach. From the figure it is obvious that the mixed approach converges quickly compared to steepest descent or the nonlinear approach. Also, the error is minimal for the mixed approach compared to steepest descent and the nonlinear approach.

Figure-4. Convergence comparison

The results obtained from the steepest descent, nonlinear and mixed approaches are compared and tabulated in Table-4. The table shows that our proposed algorithm converges more quickly than the other two approaches, results in less error, and takes less time to predict the airfoil for the given CL and CD. The time is denoted by the number of clock ticks in the table.

Table 4 - Comparison of the three approaches

    Epoch x 1000    Error, steepest descent    Error, nonlinear approach    Mixed approach (proposed algorithm)
    0               3.005668                   3.20248                      3.011418
    2               0.000261                   0.00013                      0.000131
    4               0.00102                    0.001444                     0.001125
    6               0.002218                   0.003933                     0.000961
    8               0.000116                   0.00014                      0.000101
    10              0.000545                   0.001281                     0.000041
    12              0.001216                   0.002752                     0.000141
    14              0.00007                    0.000065                     0.000061
    16              0.000315                   0.00020                      0.000021
    18              0.000806                   0.001336                     0.000093
    20              0.000064                   0.000294                     0.000056
    22              0.000229                   0.000106                     0.000018
    24              0.000623                   0.000926                     0.000068
    26              0.000067                   0.000499                     0.000016
    28              0.00018                    0.000039                     Converged & the aim attained
    30              0.000518                   0.000202                     -
    32              0.000068                   0.000064                     -
    34              0.000148                   0.000024                     -
    36              0.000446                   0.000063                     -
    38              0.000066                   0.000294                     -
    40              0.000126                   0.000034                     -
    42              0.000391                   0.000538                     -
    44              0.000064                   0.000096                     -
    46              0.000111                   0.000172                     -
    48              0.000347                   0.000245                     -
    50              0.000061                   0.000067                     -
    52              0.000099                   0.000151                     -
    54              0.000312                   0.000187                     -
    56              0.000058                   0.000051                     -
    58              0.00009                    0.000138                     -
    60              0.000282                   0.000136                     -
    62              0.000055                   0.000039                     -
    64              0.000082                   0.000134                     -
    66              0.000257                   0.000099                     -
    68              0.000053                   0.000037                     -
    70              0.000076                   0.00012                      -
    72              0.000235                   0.000095                     -
    74              0.000051                   0.000041                     -
    76              0.000071                   0.000098                     -
    78              0.000216                   0.000103                     -
    80              0.000049                   0.000047                     -
    82              0.000066                   0.000056                     -
    84              0.00020                    Converged & the aim attained -
    86              0.000047                   -                            -

4. CONCLUSIONS

In this paper we have used an inverse design methodology based on artificial neural networks for the design of airfoils. We compared the mixed algorithm with the Levenberg-Marquardt algorithm; the results show that the mixed algorithm is better than the Levenberg-Marquardt algorithm in terms of convergence speed. The results indicate that our proposed mixed approach converges at a comparatively high speed with fewer errors. In the proposed algorithm, the cost function having linear and nonlinear error terms is used for the output layer, and the steepest descent cost function is used for the hidden layer. The results indicate that optimally trained artificial neural networks using the proposed algorithm may accurately predict the airfoil profile at a faster pace without oscillation in learning.
Annexure

The surface direction is accurately estimated using partial derivatives. To find the linear error we take the inverse of f(x), where f(x) is defined as

f(x) = \frac{1}{1 + e^{-x}}

Hence the inverse of f(x) satisfies

\frac{1}{f(x)} = 1 + e^{-x}

To find the accurate direction we need the partial derivative of f:

f'(u) = \frac{e^{-u}}{(1 + e^{-u})^2}

This f' is used in equation (7), and the weight change is derived using it. The complete results obtained using our mixed approach are tabulated in Table-4.

REFERENCES

[1] Garabedian, P.R. and Korn, D.G., "Numerical Design of Transonic Airfoils," Numerical Solution of Differential Equations - II, Academic Press, 1971.
[2] Holst, T.L., Sloof, J.W., Yoshihara, H. and Ballhaus Jr., W.F., "Applied Computational Transonic Aerodynamics," AGARD-AG-266, 1982.
[3] Sloof, J.W., "Computational Procedures in Transonic Aerodynamic Design," lecture presented at the ITCS Short Course on Computational Methods in Potential Aerodynamics, Amalfi, Italy, 1982 (also NLR MP 82020 U).
[4] Hertz, J., Krogh, A. and Palmer, R.G., Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Co., California, 1991.
[5] Riedmiller, M. and Braun, H., "A Direct Adaptive Method for Faster Backpropagation Learning: the RPROP Algorithm," Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, March 28 - April 1, 1993.
[6] Behera, L., Kumar, S. and Patnaik, A., "On adaptive learning rate that guarantees convergence in feedforward networks," IEEE Trans. Neural Networks 17, 1116-1125, 2006.
[7] Jacobs, R.A., "Increased rates of convergence through learning rate adaptation," Neural Networks 1, 295-307, 1988.
[8] Magoulas, G.D., Plagianakos, V.P. and Vrahatis, M.N., "Globally convergent algorithm with local learning rate," IEEE Trans. Neural Networks 13, 774-779, 2002.
[9] Yu, X.H., Chen, G.A. and Cheng, S.X., "Acceleration of backpropagation learning using optimized learning rate and momentum," Electron. Lett. 29, 1288-1289, 1993.
[10] Abid, S., Fnaiech, F. and Najim, M., "A fast feedforward training algorithm using a modified form of the standard backpropagation algorithm," IEEE Trans. Neural Networks 12, 424-430, 2001.
[11] Jeong, S.Y. and Lee, S.Y., "Adaptive learning algorithms to incorporate additional functional constraints into neural networks," Neurocomputing 35, 73-90, 2000.
[12] Han, F., Ling, Q.H. and Huang, D.S., "Modified constrained learning algorithms incorporating additional functional constraints into neural networks," Information Sciences 178, 907-919, 2008.
[13] Sarkar, D., "Methods to speed up error back-propagation learning algorithm," ACM Computing Surveys, Vol. 27, No. 4, December 1995.
[14] Thiang, Khoswanto, H. and Pangaldus, R., "Artificial neural network with steepest descent backpropagation training algorithm for modeling inverse kinematics of manipulator," World Academy of Science, Engineering and Technology 36, 2009.
[15] Sutton, R.S., "Two problems with backpropagation and other steepest-descent learning procedures for networks," GTE Laboratories.
[16] Fan, J., "Accelerating the modified Levenberg-Marquardt method for nonlinear equations," Math. Comp., 1173-1187, 2014.
ABOUT THE AUTHORS

K. Thinakaran received the M.E. degree in computer science from Mahendra Engineering College, affiliated to Anna University, Coimbatore, Tamil Nadu, in 2009. He is currently working toward the Ph.D. degree at Anna University. He is currently an Associate Professor in Computer Science Engineering, Sri Venkateswara College of Engineering & Technology, Thiruvallur, India. His current research interests include neural networks and data mining.

Dr. R. Rajasekar received his doctorate from the Department of Aeronautics, Imperial College, London, UK. His master's degree in aeronautics is from IIT Madras. He is currently working as the Professor and Head of the Aeronautical Engineering Department, Excel Engineering College, Pallakkapalayam, Tamil Nadu (an affiliated college under Anna University, Chennai). His specialization and research interests are aerodynamics and its applications.