Application Research Based on Artificial Neural Network (ANN) to Estimate the Weight of Main Material for Transformers

Amit Kr. Yadav, Abdul Azeem, Akhilesh Singh
Electrical Engineering Department
National Institute of Technology, Hamirpur, H.P., India
e-mail: amit1986.529@rediffmail.com

O.P. Rahi
Assistant Professor, Electrical Engineering Department
National Institute of Technology, Hamirpur, H.P., India
e-mail: oprahi2k@gmail.com


Abstract—The transformer is one of the vital components of an electrical network and plays an important role in the power system. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. Since the major share of a transformer's cost is related to its raw materials, the cost estimation process for transformers is based on the amount of raw material used.
This paper presents a new method to estimate the weight of the main materials of transformers. The method is based on a Multilayer Perceptron Neural Network (MPNN) with a sigmoid transfer function. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data are obtained from a transformer company.

Keywords—Artificial Neural Network (ANN), Levenberg-Marquardt (LM) algorithm, weight estimation, design, power system, transformer.

I. INTRODUCTION

The most important components in an electrical network are transformers, which play an important role in electrification. The continuous performance of transformers is necessary for retaining network reliability and forecasting costs for manufacturers and industrial companies. Since the major share of a transformer's cost is related to its raw materials, knowing the amount of raw material used in a transformer is an important task [1]. The aim of transformer design is to completely determine the dimensions of all parts of the transformer based on the desired characteristics, the available standards, and access to lower cost, lower weight, smaller size, and better performance [2-3]. Various methods have been studied and several techniques have been applied; the Artificial Neural Network is one of the methods most used in this field in recent years. Transformer insulation aging diagnosis, estimation of the remaining life of transformer oil, transformer protection, and selection of winding material to reduce cost are a few of the tasks that have been addressed with it [4-8].
In this paper an Artificial Neural Network based method is used to estimate the weight of the main materials of a transformer (weight of copper, weight of iron, and weight of oil). These are the main components used in the design and cost estimation process.
In the following, artificial neural networks with the Levenberg-Marquardt backpropagation algorithm are used to estimate the weight of the main materials of transformers. Data extracted from a transformer manufacturing company are used to train the ANN, and the best parameters for this network are presented graphically. Finally, the results given by the trained neural network are compared with actual manufactured transformers, which proves the accuracy of the presented method for estimating the amount of raw materials used in this transformer manufacturing company (at various installation temperatures and altitudes, various short-circuit impedances, and various volts per turn).

II. ARTIFICIAL NEURAL NETWORK

Neural networks are a relatively new artificial intelligence technique. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. The learning procedure tries to find a set of connections w that gives a mapping that fits the training set well. Furthermore, neural networks can be viewed as highly nonlinear functions of the basic form

    $F(x, w) = y$

where x is the input vector presented to the network, w are the weights of the network, and y is the corresponding output vector approximated or predicted by the network. The weight vector w is commonly ordered first by layer, then by neuron, and finally by the weights of each neuron plus its bias. This view of the network as a parameterized function is the basis for applying standard function optimization methods to the problem of neural network training.

A. ANN Structure

A neural network is determined by its architecture, training method, and excitation function. Its architecture determines the pattern of connections among neurons. Network training changes the values of the weights and biases (the network parameters) at each step in order to minimize the mean square of the output error.
Multi-Layer Perceptron (MLP) networks have been used in load forecasting, nonlinear control, system identification, and pattern recognition [9]; thus in this paper a multi-layer perceptron network (with four inputs, three outputs, and one hidden layer) with the Levenberg-Marquardt training algorithm is used.
In general, on function approximation problems, for networks that contain up to a few hundred weights, the Levenberg-Marquardt algorithm has the fastest convergence. This advantage is especially noticeable if very accurate training is required. In many cases, trainlm obtains a lower mean square error than any other algorithm tested. As the number of weights in the network increases, the advantage of trainlm decreases. In addition, trainlm performance is relatively poor on pattern recognition problems, and its storage requirements are larger than those of the other algorithms tested.

Figure 1: Artificial Neural Network

B. Input and Outputs of ANN

A neural network is a data modeling tool that is capable of representing complex input/output relationships. An ANN typically consists of a set of processing elements called neurons that interact by sending signals to one another along weighted connections. The required data, accumulated by the transformer company over the last four years, are used for estimating the iron, copper, and oil weights of transformers; consequently, transformer costs are estimated by the proposed method (at various installation heights and temperatures, with different short-circuit impedances and volts per turn). The schematic of the presented method is shown in Figure 2.

Figure 2: Schematic of Inputs and Outputs

C. Training of ANN

The major justification for the use of ANNs is their ability to learn relationships in complex data sets that may not be easily perceived by engineers. An ANN performs this function as a result of training, which is a process of repetitively presenting a set of training data (typically a representative subset of the complete set of data available) to the network and adjusting the weights so that each input data set produces the desired output.
Unsupervised and supervised learning processes can be used to adjust the weights in an ANN. A supervised learning process requires input/output pairs to train the network, whereas an unsupervised learning process requires only inputs. Unsupervised learning can be characterized as a fast, but potentially inaccurate, method of adjusting the weights. On the other hand, supervised learning typically requires longer learning times and can be more accurate. There is no way to tell beforehand which learning method will work best for a given application. For this reason, we concentrate on the very popular supervised learning approach based on the backpropagation training algorithm, which has been shown to produce good results for a large number of different problems.
The backpropagation training algorithm is a method of iteratively adjusting the neural network weights until the desired accuracy level is achieved. It is based on a gradient-search optimization method applied to an error function. Typical error functions include the mean square error shown in (1), where N is the total number of input/output pairs (which can be vector quantities) used for training:

    $\mathrm{mse} = \frac{1}{N} \sum_{i=1}^{N} \left[ OUT_{forecast,i} - OUT_{actual,i} \right]^2$    (1)

where $OUT_{forecast,i}$ and $OUT_{actual,i}$ are the output forecast by the neural network and the actual (desired) output, respectively, of the ith training example. The set of training examples (input/output pairs) defines the training set or learning set. For best results, the training set should adequately represent all expected variations in the complete set of data.
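As a small illustration of (1), the sketch below computes the mean square error for vector-valued outputs (Python/numpy; the function and variable names are ours, not the paper's):

```python
import numpy as np

def mse(out_forecast, out_actual):
    # Eq. (1): average, over the N training examples (rows), of the
    # squared deviation between forecast and actual output vectors.
    return float(np.mean(np.sum((out_forecast - out_actual) ** 2, axis=1)))

# Example with two target rows taken from Table III (iron, oil, copper):
targets = np.array([[31200.0, 13500.0, 7700.0],
                    [25600.0, 11500.0, 7770.0]])
forecasts = np.array([[31000.0, 13600.0, 7650.0],
                      [25900.0, 11400.0, 7800.0]])  # hypothetical forecasts
print(mse(forecasts, targets))
```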
A recursive algorithm for adjusting the weights can be developed such that the error defined by (1) is minimized. Equations (2) and (3) are recursive training equations based on the generalized delta rule, and the corresponding algorithm is called gradient descent back propagation.

    $\Delta w_{pj,qk}(n+1) = lr \cdot \delta_{qk} \cdot OUT_{pj} + m \cdot \Delta w_{pj,qk}(n)$    (2)

    $w_{pj,qk}(n+1) = w_{pj,qk}(n) + \Delta w_{pj,qk}(n+1)$    (3)

where:
n : the number of the current iteration of the training algorithm;
$w_{pj,qk}(n)$ : the value of the weight that connects neuron p of layer j with neuron q of layer k during iteration n;
$\Delta w_{pj,qk}(n)$ : the variation in the value of weight $w_{pj,qk}(n)$ during iteration n;
$\delta_{qk}$ : the value of δ (the delta coefficient) for neuron q of layer k;
$OUT_{pj}$ : the output of neuron p of layer j;
lr : the learning rate;
m : the momentum.

The value of δ is calculated differently depending on the specific location of the weight under consideration. Equation (4) is the formula for calculating δ for any weight connecting a hidden layer neuron to an output layer neuron:

    $\delta_{qk} = \frac{2}{N} \cdot OUT_{qk} \cdot (1 - OUT_{qk}) \cdot (OUT_{actual,qk} - OUT_{qk})$    (4)

where layer k is the output layer, $OUT_{actual,qk}$ is the actual (desired) output of neuron q of the output layer k, and N is the number of training examples in the training set. The values in (4) are known from the training set. The calculated output of the network is compared to the actual value to generate an error signal, which is propagated back through the neural network to adjust the weights, as shown in (2) and (3).
For neurons in any layer other than the output layer, however, an error value is not directly obtainable, because no desired output value is given for these internal neurons as part of the training set. The error values for neurons other than the output neurons are calculated as weighted sums of the output layer errors:

    $\delta_{pj} = OUT_{pj} \cdot (1 - OUT_{pj}) \cdot \sum_{q=1}^{Q} \delta_{qk} \cdot w_{pj,qk}$    (5)

where Q is the number of neurons of the output layer.
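A minimal numpy sketch of one update of (2)-(5) for a network with one sigmoid hidden layer (j) and a sigmoid output layer (k); the names are illustrative and biases are omitted for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_rule_step(x, target, W1, W2, dW1, dW2, lr=0.1, m=0.5, N=1):
    out_j = sigmoid(W1 @ x)        # hidden outputs OUT_pj
    out_k = sigmoid(W2 @ out_j)    # network outputs OUT_qk

    # (4): delta for output-layer neurons
    delta_k = (2.0 / N) * out_k * (1 - out_k) * (target - out_k)
    # (5): delta for hidden-layer neurons (weighted sum of output deltas)
    delta_j = out_j * (1 - out_j) * (W2.T @ delta_k)

    # (2): new weight variations, including the momentum term
    dW2 = lr * np.outer(delta_k, out_j) + m * dW2
    dW1 = lr * np.outer(delta_j, x) + m * dW1
    # (3): apply the variations to the weights
    return W1 + dW1, W2 + dW2, dW1, dW2
```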
The coefficient lr in (2) is called the learning rate and directly controls how much the calculated error values are allowed to change the weights. The learning rate is typically selected between 0.01 and 1.0. The coefficient m in (2) is called momentum and allows the weight update at one iteration to utilize information from previous error values. The momentum term helps avoid settling into a local minimum and is selected between 0.01 and 1.0.
The recursive training algorithm (set n = n + 1) is executed until the network satisfactorily predicts the output values. Common stopping criteria for the training algorithm involve monitoring the mean square error, the maximum error, or both, and stopping when the value is less than a specified tolerance. The selected tolerance is very problem dependent and may or may not actually be achievable. There is no mathematical proof that the backpropagation training algorithm will ever converge within a given tolerance. The only guarantee is that no change of the weights will increase the total error. Note that the inclusion of the momentum term may allow the error defined in (1) to temporarily increase if the optimization process is moving away from a local minimum.

III. LEVENBERG-MARQUARDT FORMULATION FOR TRANSFORMER

The LM algorithm has been used for function approximation. Basically, it consists in solving the equation

    $(J^{T} J + \lambda I)\,\delta = J^{T} E$    (6)

where J is the Jacobian matrix of the system, λ is Levenberg's damping factor, δ is the weight update vector that we want to find, and E is the error vector containing the output errors for each input vector used in training the network. The vector δ tells us by how much we should change the network weights to achieve a (possibly) better solution. The matrix $J^{T}J$ is also known as the approximated Hessian.
The damping factor λ is adjusted at each iteration and guides the optimization process. If the reduction of E is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient descent direction.
Algorithm:
1. Compute the Jacobian J (using finite differences or the chain rule).
2. Compute the error gradient: g = J^T E.
3. Approximate the Hessian using the cross product of the Jacobian: H = J^T J.
4. Solve (H + λI)δ = g to find δ.
5. Update the network weights w using δ.
6. Recalculate the sum of squared errors using the updated weights.
7. If the sum of squared errors has not decreased, discard the new weights, increase λ, and go to step 4.
8. Otherwise, decrease λ and stop.
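A hedged numpy sketch of steps 1-8, with the Jacobian obtained by finite differences as step 1 allows (the function names and the tenfold damping schedule are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def jacobian_fd(f, w, eps=1e-6):
    # Step 1: finite-difference Jacobian of the network output f(w).
    f0 = f(w)
    J = np.zeros((f0.size, w.size))
    for i in range(w.size):
        dw = np.zeros_like(w)
        dw[i] = eps
        J[:, i] = (f(w + dw) - f0) / eps
    return J

def levenberg_marquardt(f, y, w, lam=1e-3, outer_iters=50):
    # Minimize ||y - f(w)||^2 using the damped update of (6).
    sse = np.sum((y - f(w)) ** 2)
    for _ in range(outer_iters):
        J = jacobian_fd(f, w)                  # step 1
        E = y - f(w)                           # error vector
        g = J.T @ E                            # step 2: error gradient
        H = J.T @ J                            # step 3: approximate Hessian
        while True:
            delta = np.linalg.solve(H + lam * np.eye(w.size), g)  # step 4
            w_trial = w + delta                # step 5: trial update
            sse_trial = np.sum((y - f(w_trial)) ** 2)             # step 6
            if sse_trial < sse:                # step 8: accept, relax damping
                w, sse, lam = w_trial, sse_trial, lam / 10.0
                break
            lam *= 10.0                        # step 7: reject, raise damping
            if lam > 1e10:                     # guard against stalling
                return w
    return w
```

For instance, fitting a two-parameter model y = a·exp(b·x) to data could be invoked as `levenberg_marquardt(lambda w: w[0] * np.exp(w[1] * x), y, np.array([1.0, 0.1]))`.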
IV. SIMULATION

For network learning, some input vectors (P) and some output vectors (T) are needed. Considering the data for the 63/20 kV transformer type, extracted from the transformer manufacturer over the last four years, the simulation has been performed for the following case. Tables II and III list the 24 input vectors and 24 output vectors used for network learning.
TABLE II
INPUTS FOR TRANSFORMER 63/20 KV

Inputs   Short-circuit impedance (percent)   Installation height (m)   Volt per turn   Environment temperature (°C)
P1            8       1000     87.719    50
P2           10       1000     76.336    55
P3           10       1000     84.034    50
P4           10       1000     60.79     45
P5           10       1000     68.027    40
P6           12       1500     68.027    40
P7           12       2200     54.795    50
P8           12.5     1364     99.502    40
P9           12.5     1500     54.201    40
P10          12.5     1500     67.34     40
P11          12.5     1500     76.923    50
P12          12.5     1500    106.952    45
P13          12.5     1700     97.087    45
P14          12.5     1900     49.948    39
P15          13       2000     66.67     50
P16          13.5     1000     79.94     47
P17          13.5     1500     75.76     45
P18          13.5     1500     75.785    40
P19          13.5     1700     37.88     40
P20          13.5     1700     47.17     55
P21          13.5     1700     66.007    42
P22          13.5     2000     46.62     50
P23          13.7     1500     75.753    55
P24          14       1000    121.212    45
TABLE III
OUTPUTS FOR TRANSFORMER 63/20 KV

Outputs   Weight of iron (kg)   Weight of oil (kg)   Weight of copper (kg)
T1           31200      13500      7700
T2           25600      11500      7770
T3           27000      11400      7094
T4           22100       8500      7000
T5           22700       9900      6894
T6           24000      10600      8000
T7           15100       7500      4124
T8           38000      18000     11720
T9           15250       7700      8667
T10          22400      10300      6891
T11          30160      13700     10000
T12          45470      19000      9765
T13          33200      14100      6600
T14          17450       8800      9700
T15          25562      11820      8950
T16          28750      11850      9500
T17          27500      11110      7387
T18          25600      10630      2780
T19           9500       7250      6300
T20          16700       8900      8479
T21          27000      12250      5199
T22          15550       8550      7550
T23          25400      12250     17450
T24          53100       2550      8479

A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons has been used. The network has been trained with the Levenberg-Marquardt backpropagation algorithm. The number of neurons in the hidden layer is twenty.
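The paper trains this network with the LM algorithm (MATLAB's trainlm); purely to make the described 4-20-3 structure concrete, here is a numpy sketch with placeholder weights (an illustration of the architecture, not the trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class WeightEstimator:
    # 4 inputs -> 20 sigmoid hidden neurons -> 3 linear outputs,
    # as described in Section IV. Weights are random placeholders;
    # in the paper they are fitted with Levenberg-Marquardt.
    def __init__(self, rng=np.random.default_rng(0)):
        self.W1 = rng.normal(size=(20, 4)); self.b1 = np.zeros(20)
        self.W2 = rng.normal(size=(3, 20)); self.b2 = np.zeros(3)

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)  # sigmoid hidden layer
        return self.W2 @ h + self.b2        # linear output layer

# Input row P1 of Table II: impedance %, height, volt/turn, temperature
# (in practice inputs would be scaled before training).
x = np.array([8.0, 1000.0, 87.719, 50.0])
print(WeightEstimator().forward(x))  # -> [iron, oil, copper] estimates
```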
V. RESULT AND DISCUSSION

Figure 3: Mean Square Error

The performance curve is shown in Figure 3. The mean squared error becomes small as the number of epochs increases. The test set error and the validation set error have similar characteristics, and no significant overfitting has occurred by iteration 6, where the best validation performance occurs.

Figure 4: Prediction of weight of main material of transformer during training analysis

The output has tracked the targets very well during training for the estimation of the weight of main material in the transformer. The regression value is one, which indicates a close correlation between outputs and targets.

Figure 5: Prediction of weight of main material of transformer during test analysis

Figure 6: Regression plot of weight of main material of transformer

The output tracks the targets very well for training, testing, and validation, and the R-value is over 0.95 for the total response.
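The R-value reported in the regression plot is the linear correlation coefficient between network outputs and targets; a minimal way to compute it (illustrative, not the paper's code):

```python
import numpy as np

def r_value(outputs, targets):
    # Linear correlation coefficient (R) between network outputs and
    # targets, flattened across all examples and output components.
    return float(np.corrcoef(np.ravel(outputs), np.ravel(targets))[0, 1])
```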
CONCLUSIONS

The major share of transformer cost is related to raw materials, so the amount of raw material used under various conditions has been used in the cost analysis process. This paper presented a new method to estimate the weight of the main materials (weight of copper, weight of iron, and weight of oil) of 63/20 kV transformers. The method is based on a two-layer feed-forward network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output neurons. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data for the MPNN are the information obtained from the transformer manufacturing company during the last four years. The advantage of using an ANN in design and optimization is that the ANN needs to be trained only once. After the completion of training, the ANN gives the transformer weights without any iterative process. Thus, this model can be used confidently for the design, cost estimation, and development of transformers. The developed model has a very fast, reliable, and robust structure.


REFERENCES

[1] P. S. Georgilakis, "Recursive genetic algorithm-finite element method technique for the solution of transformer manufacturing cost minimization problem", IET Electr. Power Appl., 2009, Vol. 3, Iss. 6, pp. 514-519.
[2] Pavlos S. Georgilakis, Marina A. Tsili and Athanassios T. Souflaris, "A Heuristic Solution to the Transformer Manufacturing Cost Optimization Problem", JAPMED'4 - 4th Japanese-Mediterranean Workshop on Applied Electromagnetic Engineering for Magnetic, Superconducting and Nano Materials, Poster Session, Paper 103_PS_1, September 2005, pp. 83-84.
[3] Manish Kumar Srivastava, "An innovative method for design of distribution transformer", e-Journal of Science & Technology (e-JST), April 2009, pp. 49-54.
[4] Geromel, Luiz H., Souza, Carlos R., "The application of intelligent systems in power transformer design", IEEE Conference, 2002, pp. 1504-1509.
[5] Yang Qiping, Xue Wude, Lan Zida, "Transformer Insulation Aging Diagnosis and Service Life Evaluation", Transformer [J], No. 2, Vol. 41.
[6] Tetsuro Matsui, Yasuo Nakahara, Kazuo Nishiyama, Noboru Urabe and Masayoshi Itoh, "Development of Remaining Life Assessment for Oil-immersed Transformer Using Structured Neural Networks", ICROS-SICE International Joint Conference, August 2009, pp. 1855-1858.
[7] M. R. Zaman and M. A. Rahman, "Experimental testing of the artificial neural network based protection of power transformers", IEEE Trans. Power Del., vol. 13, no. 2, pp. 510-517, Apr. 1998.
[8] Eleftherios I. Amoiralis, Pavlos S. Georgilakis and Alkiviadis T. Gioulekas, "An Artificial Neural Network for the Selection of Winding Material in Power Transformers", Springer-Verlag Berlin Heidelberg, 2006, pp. 465-468.
[9] Khaled Shaban, Ayman EL-Hag and Andrei Matveev, "Predicting Transformers Oil Parameters", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009, pp. 196-199.

More Related Content

What's hot

Text independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientsText independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientseSAT Publishing House
 
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...IJEEE
 
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...IJERA Editor
 
Artificial Neural Network Implementation on FPGA – a Modular Approach
Artificial Neural Network Implementation on FPGA – a Modular ApproachArtificial Neural Network Implementation on FPGA – a Modular Approach
Artificial Neural Network Implementation on FPGA – a Modular ApproachRoee Levy
 
Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...ijsrd.com
 
Performance Analysis of Bus Topology in Fiber Optic Communication
Performance Analysis of Bus Topology in Fiber Optic CommunicationPerformance Analysis of Bus Topology in Fiber Optic Communication
Performance Analysis of Bus Topology in Fiber Optic Communicationijceronline
 
Hpcc euler
Hpcc eulerHpcc euler
Hpcc eulerrhuzefa
 
Multi-core programming talk for weekly biostat seminar
Multi-core programming talk for weekly biostat seminarMulti-core programming talk for weekly biostat seminar
Multi-core programming talk for weekly biostat seminarUSC
 
An energy efficient protocol with static clustering for wsn
An energy efficient protocol with static clustering for wsnAn energy efficient protocol with static clustering for wsn
An energy efficient protocol with static clustering for wsnambitlick
 
Iaetsd multi-view and multi band face recognition
Iaetsd multi-view and multi band face recognitionIaetsd multi-view and multi band face recognition
Iaetsd multi-view and multi band face recognitionIaetsd Iaetsd
 
IRJET- Congestion Avoidance and Qos Improvement in Base Station with Femt...
IRJET-  	  Congestion Avoidance and Qos Improvement in Base Station with Femt...IRJET-  	  Congestion Avoidance and Qos Improvement in Base Station with Femt...
IRJET- Congestion Avoidance and Qos Improvement in Base Station with Femt...IRJET Journal
 
Efficient de cvpr_2020_paper
Efficient de cvpr_2020_paperEfficient de cvpr_2020_paper
Efficient de cvpr_2020_papershanullah3
 
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...SBGC
 
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_Report
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_ReportSaptashwa_Mitra_Sitakanta_Mishra_Final_Project_Report
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_ReportSitakanta Mishra
 
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor NetworksParticle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networksijsrd.com
 
Different Resource Allocation in Femtocell
Different Resource Allocation in Femtocell Different Resource Allocation in Femtocell
Different Resource Allocation in Femtocell toha ardi nugraha
 
Simulation of Single and Multilayer of Artificial Neural Network using Verilog
Simulation of Single and Multilayer of Artificial Neural Network using VerilogSimulation of Single and Multilayer of Artificial Neural Network using Verilog
Simulation of Single and Multilayer of Artificial Neural Network using Verilogijsrd.com
 
Collaborative, Context Based Activity Control Method for Camera Networks
Collaborative, Context Based Activity Control Method for Camera NetworksCollaborative, Context Based Activity Control Method for Camera Networks
Collaborative, Context Based Activity Control Method for Camera NetworksMarek Kraft
 

What's hot (20)

Text independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficientsText independent speaker recognition using combined lpc and mfc coefficients
Text independent speaker recognition using combined lpc and mfc coefficients
 
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...
Chain Based Wireless Sensor Network Routing Using Hybrid Optimization (HBO An...
 
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...
An Improved Deterministic Energy Efficient Clustering Protocol for Wireless S...
 
Artificial Neural Network Implementation on FPGA – a Modular Approach
Artificial Neural Network Implementation on FPGA – a Modular ApproachArtificial Neural Network Implementation on FPGA – a Modular Approach
Artificial Neural Network Implementation on FPGA – a Modular Approach
 
Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...Implementation of Feed Forward Neural Network for Classification by Education...
Implementation of Feed Forward Neural Network for Classification by Education...
 
Performance Analysis of Bus Topology in Fiber Optic Communication
Performance Analysis of Bus Topology in Fiber Optic CommunicationPerformance Analysis of Bus Topology in Fiber Optic Communication
Performance Analysis of Bus Topology in Fiber Optic Communication
 
Hpcc euler
Hpcc eulerHpcc euler
Hpcc euler
 
Multi-core programming talk for weekly biostat seminar
Multi-core programming talk for weekly biostat seminarMulti-core programming talk for weekly biostat seminar
Multi-core programming talk for weekly biostat seminar
 
An energy efficient protocol with static clustering for wsn
An energy efficient protocol with static clustering for wsnAn energy efficient protocol with static clustering for wsn
An energy efficient protocol with static clustering for wsn
 
Iaetsd multi-view and multi band face recognition
Iaetsd multi-view and multi band face recognitionIaetsd multi-view and multi band face recognition
Iaetsd multi-view and multi band face recognition
 
IRJET- Congestion Avoidance and Qos Improvement in Base Station with Femt...
IRJET-  	  Congestion Avoidance and Qos Improvement in Base Station with Femt...IRJET-  	  Congestion Avoidance and Qos Improvement in Base Station with Femt...
IRJET- Congestion Avoidance and Qos Improvement in Base Station with Femt...
 
Efficient de cvpr_2020_paper
Efficient de cvpr_2020_paperEfficient de cvpr_2020_paper
Efficient de cvpr_2020_paper
 
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
Java networking 2012 ieee projects @ Seabirds ( Chennai, Bangalore, Hyderabad...
 
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_Report
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_ReportSaptashwa_Mitra_Sitakanta_Mishra_Final_Project_Report
Saptashwa_Mitra_Sitakanta_Mishra_Final_Project_Report
 
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor NetworksParticle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
 
Different Resource Allocation in Femtocell
Different Resource Allocation in Femtocell Different Resource Allocation in Femtocell
Different Resource Allocation in Femtocell
 
Simulation of Single and Multilayer of Artificial Neural Network using Verilog
Simulation of Single and Multilayer of Artificial Neural Network using VerilogSimulation of Single and Multilayer of Artificial Neural Network using Verilog
Simulation of Single and Multilayer of Artificial Neural Network using Verilog
 
neural-control-drone
neural-control-droneneural-control-drone
neural-control-drone
 
Ce35461464
Ce35461464Ce35461464
Ce35461464
 
Collaborative, Context Based Activity Control Method for Camera Networks
Collaborative, Context Based Activity Control Method for Camera NetworksCollaborative, Context Based Activity Control Method for Camera Networks
Collaborative, Context Based Activity Control Method for Camera Networks
 

Similar to teste

Artificial Neural Network Seminar Report
Artificial Neural Network Seminar ReportArtificial Neural Network Seminar Report
Artificial Neural Network Seminar ReportTodd Turner
 
A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...eSAT Publishing House
 
Switchgear and protection.
Switchgear and protection.Switchgear and protection.
Switchgear and protection.Surabhi Vasudev
 
friction factor modelling.pptx
friction factor modelling.pptxfriction factor modelling.pptx
friction factor modelling.pptxOKORIE1
 
Development of a virtual linearizer for correcting transducer static nonlinea...
Development of a virtual linearizer for correcting transducer static nonlinea...Development of a virtual linearizer for correcting transducer static nonlinea...
Development of a virtual linearizer for correcting transducer static nonlinea...ISA Interchange
 
Application of nn to power system
Application of nn to power systemApplication of nn to power system
Application of nn to power systemjulio shimano
 
Artificial neural network for load forecasting in smart grid
Artificial neural network for load forecasting in smart gridArtificial neural network for load forecasting in smart grid
Artificial neural network for load forecasting in smart gridEhsan Zeraatparvar
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Cemal Ardil
 
Electricity Price Forecasting Using ELM-Tree Approach
Electricity Price Forecasting Using ELM-Tree ApproachElectricity Price Forecasting Using ELM-Tree Approach
Electricity Price Forecasting Using ELM-Tree ApproachIRJET Journal
 
IRJET- Artificial Neural Network: Overview
IRJET-  	  Artificial Neural Network: OverviewIRJET-  	  Artificial Neural Network: Overview
IRJET- Artificial Neural Network: OverviewIRJET Journal
 
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...ijcsit
 
Artificial Neural Network ANN
Artificial Neural Network ANNArtificial Neural Network ANN
Artificial Neural Network ANNAbdullah al Mamun
 
A survey research summary on neural networks
A survey research summary on neural networksA survey research summary on neural networks
A survey research summary on neural networkseSAT Publishing House
 
Deep learning notes.pptx
Deep learning notes.pptxDeep learning notes.pptx
Deep learning notes.pptxPandi Gingee
 
Live to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networkLive to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networknooriasukmaningtyas
 
Ann model and its application
Ann model and its applicationAnn model and its application
Ann model and its applicationmilan107
 

Similar to teste (20)

Artificial Neural Network Seminar Report
Artificial Neural Network Seminar ReportArtificial Neural Network Seminar Report
Artificial Neural Network Seminar Report
 
A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...A simplified design of multiplier for multi layer feed forward hardware neura...
A simplified design of multiplier for multi layer feed forward hardware neura...
 
A040101001006
A040101001006A040101001006
A040101001006
 
Switchgear and protection.
Switchgear and protection.Switchgear and protection.
Switchgear and protection.
 
friction factor modelling.pptx
friction factor modelling.pptxfriction factor modelling.pptx
friction factor modelling.pptx
 
Iv3515241527
Iv3515241527Iv3515241527
Iv3515241527
 
Development of a virtual linearizer for correcting transducer static nonlinea...
Development of a virtual linearizer for correcting transducer static nonlinea...Development of a virtual linearizer for correcting transducer static nonlinea...
Development of a virtual linearizer for correcting transducer static nonlinea...
 
Application of nn to power system
Application of nn to power systemApplication of nn to power system
Application of nn to power system
 
Artificial neural network for load forecasting in smart grid
Artificial neural network for load forecasting in smart gridArtificial neural network for load forecasting in smart grid
Artificial neural network for load forecasting in smart grid
 
N ns 1
N ns 1N ns 1
N ns 1
 
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
Levenberg marquardt-algorithm-for-karachi-stock-exchange-share-rates-forecast...
 
Electricity Price Forecasting Using ELM-Tree Approach
Electricity Price Forecasting Using ELM-Tree ApproachElectricity Price Forecasting Using ELM-Tree Approach
Electricity Price Forecasting Using ELM-Tree Approach
 
IRJET- Artificial Neural Network: Overview
IRJET-  	  Artificial Neural Network: OverviewIRJET-  	  Artificial Neural Network: Overview
IRJET- Artificial Neural Network: Overview
 
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
COMPARATIVE STUDY OF BACKPROPAGATION ALGORITHMS IN NEURAL NETWORK BASED IDENT...
 
Artificial Neural Network ANN
Artificial Neural Network ANNArtificial Neural Network ANN
Artificial Neural Network ANN
 
A survey research summary on neural networks
A survey research summary on neural networksA survey research summary on neural networks
A survey research summary on neural networks
 
Deep learning notes.pptx
Deep learning notes.pptxDeep learning notes.pptx
Deep learning notes.pptx
 
Live to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural networkLive to learn: learning rules-based artificial neural network
Live to learn: learning rules-based artificial neural network
 
APPLICATIONS OF WAVELET TRANSFORMS AND NEURAL NETWORKS IN EARTHQUAKE GEOTECHN...
APPLICATIONS OF WAVELET TRANSFORMS AND NEURAL NETWORKS IN EARTHQUAKE GEOTECHN...APPLICATIONS OF WAVELET TRANSFORMS AND NEURAL NETWORKS IN EARTHQUAKE GEOTECHN...
APPLICATIONS OF WAVELET TRANSFORMS AND NEURAL NETWORKS IN EARTHQUAKE GEOTECHN...
 
Ann model and its application
Ann model and its applicationAnn model and its application
Ann model and its application
 

More from lealtran

Madame tussaudwien gab
Madame tussaudwien gabMadame tussaudwien gab
Madame tussaudwien gablealtran
 
Fotosnanosdocorpohumano fj
Fotosnanosdocorpohumano fjFotosnanosdocorpohumano fj
Fotosnanosdocorpohumano fjlealtran
 
Amigosda internet
Amigosda internetAmigosda internet
Amigosda internetlealtran
 
Alfabetoamigo
AlfabetoamigoAlfabetoamigo
Alfabetoamigolealtran
 
Showdomilhaoparaloiras
ShowdomilhaoparaloirasShowdomilhaoparaloiras
Showdomilhaoparaloiraslealtran
 
Almas gemeas hh
Almas gemeas hhAlmas gemeas hh
Almas gemeas hhlealtran
 
Prostata.gaucha
Prostata.gauchaProstata.gaucha
Prostata.gauchalealtran
 
Cidade subterranea de derinkuyu
Cidade subterranea de derinkuyuCidade subterranea de derinkuyu
Cidade subterranea de derinkuyulealtran
 
Primeiros socorros
Primeiros socorrosPrimeiros socorros
Primeiros socorroslealtran
 
Por que amamos os animais
Por que amamos os animaisPor que amamos os animais
Por que amamos os animaislealtran
 
Oporteirodoprostbulo
OporteirodoprostbuloOporteirodoprostbulo
Oporteirodoprostbulolealtran
 
Foram garfos
Foram garfosForam garfos
Foram garfoslealtran
 
Perfeiçao[hll]
Perfeiçao[hll]Perfeiçao[hll]
Perfeiçao[hll]lealtran
 
Felicidade
FelicidadeFelicidade
Felicidadelealtran
 
A urgencia de_viver_som
A urgencia de_viver_somA urgencia de_viver_som
A urgencia de_viver_somlealtran
 

More from lealtran (20)

Madame tussaudwien gab
Madame tussaudwien gabMadame tussaudwien gab
Madame tussaudwien gab
 
Fotosnanosdocorpohumano fj
Fotosnanosdocorpohumano fjFotosnanosdocorpohumano fj
Fotosnanosdocorpohumano fj
 
Tempo...
Tempo...Tempo...
Tempo...
 
Amigosda internet
Amigosda internetAmigosda internet
Amigosda internet
 
Alfabetoamigo
AlfabetoamigoAlfabetoamigo
Alfabetoamigo
 
Showdomilhaoparaloiras
ShowdomilhaoparaloirasShowdomilhaoparaloiras
Showdomilhaoparaloiras
 
Almas gemeas hh
Almas gemeas hhAlmas gemeas hh
Almas gemeas hh
 
Prostata.gaucha
Prostata.gauchaProstata.gaucha
Prostata.gaucha
 
Cidade subterranea de derinkuyu
Cidade subterranea de derinkuyuCidade subterranea de derinkuyu
Cidade subterranea de derinkuyu
 
Primeiros socorros
Primeiros socorrosPrimeiros socorros
Primeiros socorros
 
Por que amamos os animais
Por que amamos os animaisPor que amamos os animais
Por que amamos os animais
 
P vapvi
P vapviP vapvi
P vapvi
 
Cavalo
CavaloCavalo
Cavalo
 
Oporteirodoprostbulo
OporteirodoprostbuloOporteirodoprostbulo
Oporteirodoprostbulo
 
Bbb 2012
Bbb 2012Bbb 2012
Bbb 2012
 
Foram garfos
Foram garfosForam garfos
Foram garfos
 
Perfeiçao[hll]
Perfeiçao[hll]Perfeiçao[hll]
Perfeiçao[hll]
 
Felicidade
FelicidadeFelicidade
Felicidade
 
Amanhecer
AmanhecerAmanhecer
Amanhecer
 
A urgencia de_viver_som
A urgencia de_viver_somA urgencia de_viver_som
A urgencia de_viver_som
 

teste

  • 1. Application Research Based on Artificial Neural Network(ANN) to Estimate the Weight of Main Material for Transformers Amit Kr. Yadav,Abdul Azeem,Akhilesh Singh O.P. Rahi Electrical Engineering Department Assistant Professor, Electrical Engineering Department National Institute Of Technology National Institute Of Technology Hamirpur, H.P. India Hamirpur, H.P. India e-mail: amit1986.529@rediffmail.com e-mail: oprahi2k@gmail.com Abstract—Transformer is one of the vital components in oil). These are the main components which have been used electrical network which play important role in the power for designing and cost estimation process. system. The continuous performance of transformers is In following artificial neural networks with Levenberg- necessary for retaining the network reliability, forecasting its Marquard back propagation algorithm have been used to costs for manufacturer and industrial companies. The major estimate the main material’s weight of transformers. The amount of transformer costs are related to its raw materials, so the cost estimation process of transformers are based on extracted data from transformer manufacturing company has amount of used raw material. been used to train the ANN and the best parameters for this This paper presents a new method to estimate the network have been presented graphically. Finally result weight of main materials for transformers. The method is given by trained neural network have been compared with based on Multilayer Perceptron Neural Network (MPNN) with actual manufactured transformer prove the accuracy of sigmoid transfer function. The Levenberg-Marquard (LM) presented method to estimate the amount of raw materials, algorithm is used to adjust the parameters of MPNN. The used in this transformer manufacturing company (in various required training data are obtained from transformer installation temperature and altitude, various short circuit company. impedance and various volt per turn) II. ARTIFICIAL NEURAL NETWORK Keywords-Artificial Neural Network (ANN),Levenberg Marquard(LM)algorithm,estimatingweight,design,powersystem, Neural networks are a relatively new artificial intelligence transformer. technique. In most cases an ANN is an adaptive system that changes its structure based on external or internal I. INTRODUCTION information that flows through the network during the learning phase. The learning procedure tries is to find a set The most important components in electrical network are of connections w that gives a mapping that fits the training transformers which have an important role in electrification. set well. Furthermore, neural networks can be viewed as The continuous performance of transformers is necessary highly nonlinear functions with the basic the form for retaining the network reliability, forecasting its costs for manufacturer and industrial companies. Since the major F ( x, w)  y amount of transformers costs is related to its raw materials, Where x is the input vector presented to the network, w are so having amount of used raw material in transformers is an the weights of the network, and y is the corresponding important task [1]. The aim of the transformer design is to output vector approximated or predicted by the network. 
completely obtain the dimensions of all the parts of the The weight vector w is commonly ordered first by layer, transformer based on the desired characteristics, available then by neurons, and finally by the weights of each neuron standards, and access to lower cost, lower weight, lower plus its bias. This view of network as an parameterized size, and better performance [2-3]. Various methods have function will be the basis for applying standard function been studied and some techniques have been used. optimization methods to solve the problem of neural Artificial Neural Network is one of methods that mostly network training. have been used in the recent years, in this field. Transformer insulation aging diagnoses, the time left from the life of transformers oil, transformers protection and selection of A. ANN Structure winding material in order to reduce the cost, are few topics A neural network is determined by its architecture, training that have been performed [4-8]. method and exciting function. Its architecture determines In this paper Artificial Neural Network based method have the pattern of connections among neurons. Network training been used to estimate the weight of main materials for changes the values of weights and biases (network transformer (weight of copper, weight of iron and weight of
  • 2. parameters) in each step in order to minimize the mean square of output error. Multi-Layer Perceptron (MLP) has been used in load forecasting, nonlinear control, system identification and pattern recognition [9], thus in this paper multi-layer perceptron network (with four inputs, three outputs and a hidden layer) with Levenberg-Marquardt training algorithm have been used. In general, on function approximation problems, for network that contain up to a few hundred weights, the Levenberg-Marquardt algorithm have the fastest convergence. This advantage is especially noticeable if very accurate training is required. In many cases, trainlm is used to obtain lower mean square error than any other algorithms tested. As the number of weights in the network increases, Figure 2: Schematic of Inputs and Outputs the advantage of trainlm decreases. In addition trainlm performance is relatively poor on pattern recognition C. Training of ANN problems. The storage requirements of trainlm are larger The major justification for the use of ANNs is their than the other algorithm tested. ability to learn relationships in complex data sets that may not be easily perceived by engineers. An ANN performs this function as a result of training that is a process of repetitively presenting a set of training data (typically a representative subset of the complete set of data available) to the network and adjusting the weights so that each input data set produces the desired output. Unsupervised and supervised learning process can be used to adjust the weights in an ANN. Supervised learning process requires both input/output pairs to train the network but supervised learning process requires only input pairs to train the network.Unsupervised learning can be characterized as a fast, but potentially inaccurate, method of adjusting the weights. On the other hand, supervised learning typically requires longer learning times and can be more accurate. There is no way to tell beforehand which Figure 1: Artificial Neural Network learning method will work best for a given application. For B. Input and Outputs of ANN this reason, we concentrate on the very popular supervised A neural network is a data modeling tool that is capable to learning approach based on the backpropagation training represent complex input/output relationships. ANN typically algorithm, which has been shown to produce good results consists of a set of processing elements called neurons that for a large number of different problems. interact by sending signals to one another along weighted The back propagation training algorithm is a method of connections. The required data are the data which have been iteratively adjusting the neural network weights until the accumulated by Transformer Company. In last recent four desired accuracy level is achieved. It is based on a gradient- years, are used for estimating the iron, copper and oil search optimization method applied to an error function. weights of transformers and consequently transformer costs Typical error functions include the mean square error shown are estimated by proposed method (in various installation in (1), where N is the total number of input/output pairs height and temperature with different short-circuit (which can be vector quantities) used for training: impedance and volt per turn). The schematic of the presented method can be shown by Figure 2. 
1 N mse   [OUTforecast ,i  OUTactual ,i ]2 (1) N i 1 Where OUT forecast ,i and OUTactual ,i are the output forecast by the neural network and the actual (desired) output, respectively, of the ith training example. The set of training examples (input/output pairs) defines the training set or learning set. For best results, the training set should
  • 3. adequately represent all expected variations in the complete to change the weights. The learning rate is typically selected set of data. between 0.01 and 1.0. The coefficient in m (2) is called A recursive algorithm for adjusting the weights can be momentum and allows the weight updates at one iteration to developed, such that the error defined by (1) is minimized. utilize information from previous error values. The The equations (2) and (3) are recursive training equations momentum term helps avoid settling into a local minimum based on the generalized delta rule and the corresponding and is selected between 0.01 and 1.0. algorithm is called gradient descent back propagation. The recursive training algorithm (set n= n+1) is executed until the network satisfactorily predicts the output values. wpj ,qk (n  1)  lr. qk .OUTpj  m.wpj ,qk (n) (2) Common stopping criteria for the training algorithm involve monitoring either the mean square error or the maximum wpj ,qk (n  1)  wpj ,qk (n)  wpj ,qk (n  1) (3) error or both and stopping when the value is less than a Where : specified tolerance. The selected tolerance is very problem n : the no. of current iteration of the training algorithm dependent and may or may not be actually achievable. There is no mathematical proof that the back propagation wpj ,qk (n) : the value of weight that connects the neuron p training algorithm will ever converge within a given Of layer j with the neuron q of layer k during tolerance. The only guarantee is that any changes of the Iteration n. weights will not increase the total error. Note that the wpj ,qk (n) : the variation in the value of weight wpj ,qk (n) inclusion of the momentum term may allow the error as defined in (1) to temporarily increase if the optimization during the iteration n. process is moving away from the local minimum.  qk : the value of  (delta coefficient) for the neuron q of layer k. OUTpj :the output for the neuron p of layer j. III. LEVENBERG MARQUARD FORMULATION lr : the learning rate. FOR TRANSFORMER m :the momentum. The LM algorithm has been used in function approximation. Basically it consists in solving the equation: The value of d is calculated differently depending on the ( J t J   I )  J t E (6) specific location of the weight under consideration (4) is the formula for calculating d for any weight connected from a Where J is the Jacobian matrix for the system, λ is the hidden layer neuron to an output layer Levenberg's damping factor, δ is the weight update vector neuron: that we want to find and E is the error vector containing the 2 output errors for each input vector used on training the  qk  .OUTqk .(1  OUTqk ).(OUTactualqk  OUTqk ) (4) network. The δ tell us by how much we should change our N network weights to achieve a (possibly) better solution. The where layer k is the output layer, OUTactualqk is the actual JtJ matrix can also be known as the approximated Hessian. (desired) output of any neuron q of the output layer k , and The λ damping factor is adjusted at each iteration, and N is the number of training examples of the training set. guides the optimization process. If reduction of E is rapid, a The values in (4) are known from the training set. The smaller value can be used, bringing the algorithm closer to calculated output of the network is compared to the actual the Gauss Newton algorithm, whereas if an iteration gives value to generate an error signal. 
III. LEVENBERG-MARQUARDT FORMULATION FOR TRANSFORMER

The LM algorithm has been used in function approximation. Basically, it consists in solving the equation

$(J^{T}J + \lambda I)\,\delta = J^{T}E$   (6)

where $J$ is the Jacobian matrix of the system, $\lambda$ is Levenberg's damping factor, $\delta$ is the weight-update vector that we want to find, and $E$ is the error vector containing the output errors for each input vector used in training the network. The vector $\delta$ tells us by how much we should change the network weights to achieve a (possibly) better solution. The matrix $J^{T}J$ is also known as the approximated Hessian.

The damping factor $\lambda$ is adjusted at each iteration and guides the optimization process. If the reduction of $E$ is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, $\lambda$ can be increased, giving a step closer to the gradient descent direction.

Algorithm:
1. Compute the Jacobian $J$ (by using finite differences or the chain rule).
2. Compute the error gradient $g = J^{T}E$.
3. Approximate the Hessian using the cross product of the Jacobian: $H = J^{T}J$.
4. Solve $(H + \lambda I)\,\delta = g$ to find $\delta$.
5. Update the network weights $w$ using $\delta$.
6. Recalculate the sum of squared errors using the updated weights.
7. If the sum of squared errors has not decreased, discard the new weights, increase $\lambda$, and go to step 4.
8. Else decrease $\lambda$ and stop.
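Steps 1-8 condense into a short damped iteration. The sketch below assumes a user-supplied forward(weights) routine returning the error vector E (targets minus network outputs) and the Jacobian J; the damping multipliers are typical illustrative choices, not values stated in the paper.

```python
import numpy as np

def lm_train(weights, forward, lam=1e-3, lam_up=10.0, lam_down=0.1, n_iter=50):
    """Levenberg-Marquardt loop following steps 1-8 above."""
    error, jac = forward(weights)                    # step 1: E and Jacobian J
    sse = float(error @ error)
    for _ in range(n_iter):
        g = jac.T @ error                            # step 2: g = J^T E
        hessian = jac.T @ jac                        # step 3: H = J^T J
        # step 4: solve (H + lambda * I) delta = g
        delta = np.linalg.solve(hessian + lam * np.eye(hessian.shape[0]), g)
        trial = weights + delta                      # step 5: tentative update
        trial_error, trial_jac = forward(trial)
        trial_sse = float(trial_error @ trial_error) # step 6: new sum of squares
        if trial_sse >= sse:
            lam *= lam_up                            # step 7: reject, raise damping
        else:
            weights, error, jac, sse = trial, trial_error, trial_jac, trial_sse
            lam *= lam_down                          # step 8: accept, relax damping
    return weights
```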
IV. SIMULATION

For network learning, a set of input vectors (P) and output vectors (T) is needed. Considering the data for 63/20 kV transformers extracted from a transformer manufacturing company over the last four years, the simulation has been performed for the following case. Tables II and III list the 24 input vectors and the 24 output vectors used for network learning.

A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons has been used. The network has been trained with the Levenberg-Marquardt back propagation algorithm. The number of neurons in the hidden layer is twenty. (An arrangement of this learning set is sketched after Table III.)

TABLE II. INPUTS FOR TRANSFORMER 63/20 KV

Input   Short circuit impedance (%)   Installation height (m)   Volt per turn   Environment temperature (°C)
P1       8        1000     87.719    50
P2      10        1000     76.336    55
P3      10        1000     84.034    50
P4      10        1000     60.79     45
P5      10        1000     68.027    40
P6      12        1500     68.027    40
P7      12        2200     54.795    50
P8      12.5      1364     99.502    40
P9      12.5      1500     54.201    40
P10     12.5      1500     67.34     40
P11     12.5      1500     76.923    50
P12     12.5      1500    106.952    45
P13     12.5      1700     97.087    45
P14     12.5      1900     49.948    39
P15     13        2000     66.67     50
P16     13.5      1000     79.94     47
P17     13.5      1500     75.76     45
P18     13.5      1500     75.785    40
P19     13.5      1700     37.88     40
P20     13.5      1700     47.17     55
P21     13.5      1700     66.007    42
P22     13.5      2000     46.62     50
P23     13.7      1500     75.753    55
P24     14        1000    121.212    45

TABLE III. OUTPUTS FOR TRANSFORMER 63/20 KV

Output   Weight of iron (kg)   Weight of oil (kg)   Weight of copper (kg)
T1       31200    13500     7700
T2       25600    11500     7770
T3       27000    11400     7094
T4       22100     8500     7000
T5       22700     9900     6894
T6       24000    10600     8000
T7       15100     7500     4124
T8       38000    18000    11720
T9       15250     7700     8667
T10      22400    10300     6891
T11      30160    13700    10000
T12      45470    19000     9765
T13      33200    14100     6600
T14      17450     8800     9700
T15      25562    11820     8950
T16      28750    11850     9500
T17      27500    11110     7387
T18      25600    10630     2780
T19       9500     7250     6300
T20      16700     8900     8479
T21      27000    12250     5199
T22      15550     8550     7550
T23      25400    12250    17450
T24      53100     2550     8479
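Arranged for training, the learning set of Tables II and III might look as follows; the kilogram unit for the weights and the normalization step are assumptions, as the paper does not state them.

```python
import numpy as np

# Input vectors P (Table II): short-circuit impedance (%), installation
# height (m), volt per turn, environment temperature (deg C). First three
# and last rows shown; the remaining rows follow Table II verbatim.
P = np.array([
    [8.0,  1000.0,  87.719, 50.0],   # P1
    [10.0, 1000.0,  76.336, 55.0],   # P2
    [10.0, 1000.0,  84.034, 50.0],   # P3
    # ... rows P4-P24 from Table II
    [14.0, 1000.0, 121.212, 45.0],   # P24
])

# Output vectors T (Table III): weight of iron, oil and copper
# (kilograms assumed). Rows correspond to the same transformers as P.
T = np.array([
    [31200.0, 13500.0,  7700.0],     # T1
    [25600.0, 11500.0,  7770.0],     # T2
    [27000.0, 11400.0,  7094.0],     # T3
    # ... rows T4-T24 from Table III
    [53100.0,  2550.0,  8479.0],     # T24
])

# Scaling inputs and targets to comparable ranges before training the
# 20-hidden-neuron sigmoid/linear network (a common choice, assumed here).
P_scaled = (P - P.mean(axis=0)) / P.std(axis=0)
T_scaled = T / np.abs(T).max(axis=0)
```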
V. RESULT AND DISCUSSION

Figure 3: Mean Square Error

The performance curve is shown in Figure 3. The mean squared error becomes small as the number of epochs increases. The test-set error and the validation-set error have similar characteristics, and no significant overfitting has occurred by iteration 6, where the best validation performance occurs.

Figure 4: Prediction of the weight of main material of the transformer during training analysis

The output tracks the targets very well during training in the estimation of the weight of main material of the transformer. The regression value is one, which indicates a close correlation between outputs and targets.

Figure 5: Prediction of the weight of main material of the transformer during test analysis

Figure 6: Regression plot of the weight of main material of the transformer

The output tracks the targets very well for training, testing, and validation, and the R-value is over 0.95 for the total response.
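The reported R-value corresponds to the correlation coefficient between network outputs and targets, which can be checked as in this minimal sketch.

```python
import numpy as np

def regression_r(outputs, targets):
    """Correlation coefficient R between network outputs and targets,
    the quantity reported in the regression plot of Figure 6."""
    outputs = np.asarray(outputs, dtype=float).ravel()
    targets = np.asarray(targets, dtype=float).ravel()
    return float(np.corrcoef(outputs, targets)[0, 1])
```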
CONCLUSIONS

The major share of a transformer's cost is related to its raw materials, so knowing the amount of raw material used under various conditions supports the cost-analysis process. This paper presented a new method to estimate the weight of the main materials (copper, iron and oil) of 63/20 kV transformers. The method is based on a two-layer feed-forward network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output neurons. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data for the MPNN are the information obtained from the transformer manufacturing company during the last four years. The advantage of using an ANN in design and optimization is that the ANN needs to be trained only once; after training is complete, the ANN gives the transformer weights without any iterative process. Thus, this model can be used confidently for the design, cost estimation and development of transformers. The developed model has a very fast, reliable and robust structure.

REFERENCES

[1] P. S. Georgilakis, "Recursive genetic algorithm-finite element method technique for the solution of transformer manufacturing cost minimization problem", IET Electr. Power Appl., vol. 3, no. 6, pp. 514-519, 2009.
[2] P. S. Georgilakis, M. A. Tsili and A. T. Souflaris, "A Heuristic Solution to the Transformer Manufacturing Cost Optimization Problem", JAPMED'4 - 4th Japanese-Mediterranean Workshop on Applied Electromagnetic Engineering for Magnetic, Superconducting and Nano Materials, Poster Session, Paper 103_PS_1, e-Journal of Science & Technology (e-JST), September 2005, pp. 83-84.
[3] M. K. Srivastava, "An innovative method for design of distribution transformer", e-Journal of Science & Technology (e-JST), April 2009, pp. 49-54.
[4] L. H. Geromel and C. R. Souza, "The application of intelligent systems in power transformer design", IEEE Conference, 2002, pp. 1504-1509.
[5] Yang Qiping, Xue Wude and Lan Zida, "Transformer Insulation Aging Diagnosis and Service Life Evaluation", Transformer [J], vol. 41, no. 2.
[6] T. Matsui, Y. Nakahara, K. Nishiyama, N. Urabe and M. Itoh, "Development of Remaining Life Assessment for Oil-immersed Transformer Using Structured Neural Networks", ICROS-SICE International Joint Conference, August 2009, pp. 1855-1858.
[7] M. R. Zaman and M. A. Rahman, "Experimental testing of the artificial neural network based protection of power transformers", IEEE Trans. Power Del., vol. 13, no. 2, pp. 510-517, Apr. 1998.
[8] E. I. Amoiralis, P. S. Georgilakis and A. T. Gioulekas, "An Artificial Neural Network for the Selection of Winding Material in Power Transformers", Springer-Verlag Berlin Heidelberg, 2006, pp. 465-468.
[9] K. Shaban, A. EL-Hag and A. Matveev, "Predicting Transformers Oil Parameters", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009, pp. 196-199.