Application Research Based on Artificial Neural Network (ANN) to Estimate the Weight of Main Material for Transformers

Amit Kr. Yadav, Abdul Azeem, Akhilesh Singh
Electrical Engineering Department
National Institute of Technology, Hamirpur, H.P., India
e-mail: amit1986.529@rediffmail.com

O.P. Rahi
Assistant Professor, Electrical Engineering Department
National Institute of Technology, Hamirpur, H.P., India
e-mail: oprahi2k@gmail.com

Abstract—The transformer is one of the vital components in the electrical network and plays an important role in the power system. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. The major part of a transformer's cost is related to its raw materials, so the cost estimation process for transformers is based on the amount of raw material used. This paper presents a new method to estimate the weight of the main materials of transformers. The method is based on a Multilayer Perceptron Neural Network (MPNN) with sigmoid transfer function. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data are obtained from a transformer company.

Keywords—Artificial Neural Network (ANN), Levenberg-Marquardt (LM) algorithm, estimating weight, design, power system, transformer.

I. INTRODUCTION

The most important components in the electrical network are transformers, which play an important role in electrification. The continuous performance of transformers is necessary for retaining network reliability and for forecasting costs for manufacturers and industrial companies. Since the major part of a transformer's cost is related to its raw materials, knowing the amount of raw material used in transformers is an important task [1]. The aim of transformer design is to completely obtain the dimensions of all the parts of the transformer based on the desired characteristics and available standards, with access to lower cost, lower weight, smaller size, and better performance [2-3]. Various methods have been studied and several techniques have been used. The Artificial Neural Network is one of the methods most used in this field in recent years. Transformer insulation aging diagnosis, the remaining life of transformer oil, transformer protection, and the selection of winding material in order to reduce cost are a few of the topics that have been addressed [4-8].

In this paper an Artificial Neural Network based method has been used to estimate the weight of the main materials of a transformer (weight of copper, weight of iron and weight of oil). These are the main components used in the design and cost estimation process. In the following, artificial neural networks with the Levenberg-Marquardt back propagation algorithm have been used to estimate the weight of the main materials of transformers. The data extracted from a transformer manufacturing company have been used to train the ANN, and the best parameters for this network are presented graphically. Finally, the results given by the trained neural network have been compared with actual manufactured transformers, proving the accuracy of the presented method for estimating the amount of raw materials used in this transformer manufacturing company (at various installation temperatures and altitudes, various short-circuit impedances, and various volts per turn).

II. ARTIFICIAL NEURAL NETWORK

Neural networks are a relatively new artificial intelligence technique. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase. The learning procedure tries to find a set of connections w that gives a mapping that fits the training set well. Furthermore, neural networks can be viewed as highly nonlinear functions with the basic form

$F(x, w) = y$

where x is the input vector presented to the network, w are the weights of the network, and y is the corresponding output vector approximated or predicted by the network. The weight vector w is commonly ordered first by layer, then by neurons, and finally by the weights of each neuron plus its bias. This view of the network as a parameterized function will be the basis for applying standard function optimization methods to solve the problem of neural network training.
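Seen this way, evaluating the network is just evaluating a parameterized function. The short NumPy sketch below is illustrative only (it is not the authors' code); the 4-20-3 layer sizes and the sigmoid-hidden / linear-output split are taken from the architecture described later in the paper, and the random weights stand in for trained ones.

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid transfer function used in the hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """Evaluate F(x, w) = y for a two-layer perceptron:
    a sigmoid hidden layer followed by a linear output layer."""
    hidden = sigmoid(W1 @ x + b1)  # hidden-layer activations
    return W2 @ hidden + b2        # linear output layer

# 4 inputs -> 20 hidden neurons -> 3 outputs, with random (untrained) weights.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(20, 4)), np.zeros(20)
W2, b2 = rng.normal(size=(3, 20)), np.zeros(3)

# One input vector in the format of Table II:
# impedance %, installation height, volt per turn, temperature.
x = np.array([12.5, 1500.0, 76.923, 50.0])
y = forward(x, W1, b1, W2, b2)  # predicted (iron, oil, copper) weights
print(y)
```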
A. ANN Structure

A neural network is determined by its architecture, training method and activation function. Its architecture determines the pattern of connections among neurons. Network training changes the values of the weights and biases (the network parameters) at each step in order to minimize the mean square of the output error.
The Multi-Layer Perceptron (MLP) has been used in load forecasting, nonlinear control, system identification and pattern recognition [9]; thus in this paper a multi-layer perceptron network (with four inputs, three outputs and one hidden layer) trained with the Levenberg-Marquardt algorithm has been used.

In general, on function approximation problems, for networks that contain up to a few hundred weights, the Levenberg-Marquardt algorithm has the fastest convergence. This advantage is especially noticeable if very accurate training is required. In many cases, trainlm obtains a lower mean square error than any other algorithm tested. As the number of weights in the network increases, the advantage of trainlm decreases. In addition, trainlm performance is relatively poor on pattern recognition problems, and the storage requirements of trainlm are larger than those of the other algorithms tested.

[Figure 1: Artificial Neural Network]

B. Input and Outputs of ANN

A neural network is a data modeling tool that is capable of representing complex input/output relationships. An ANN typically consists of a set of processing elements, called neurons, that interact by sending signals to one another along weighted connections. The required data are the data accumulated by the transformer company over the last four years; they are used for estimating the iron, copper and oil weights of transformers, and consequently the transformer costs are estimated by the proposed method (at various installation heights and temperatures, with different short-circuit impedances and volts per turn). The schematic of the presented method is shown in Figure 2.

[Figure 2: Schematic of Inputs and Outputs]

C. Training of ANN

The major justification for the use of ANNs is their ability to learn relationships in complex data sets that may not be easily perceived by engineers. An ANN performs this function as a result of training, a process of repetitively presenting a set of training data (typically a representative subset of the complete set of data available) to the network and adjusting the weights so that each input data set produces the desired output.

Unsupervised and supervised learning processes can be used to adjust the weights in an ANN. Supervised learning requires input/output pairs to train the network, whereas unsupervised learning requires only input patterns. Unsupervised learning can be characterized as a fast, but potentially inaccurate, method of adjusting the weights. On the other hand, supervised learning typically requires longer learning times and can be more accurate. There is no way to tell beforehand which learning method will work best for a given application. For this reason, we concentrate on the very popular supervised learning approach based on the back propagation training algorithm, which has been shown to produce good results for a large number of different problems.

The back propagation training algorithm is a method of iteratively adjusting the neural network weights until the desired accuracy level is achieved. It is based on a gradient-search optimization method applied to an error function. Typical error functions include the mean square error shown in (1), where N is the total number of input/output pairs (which can be vector quantities) used for training:
$mse = \frac{1}{N}\sum_{i=1}^{N}\left[OUT_{forecast,i} - OUT_{actual,i}\right]^{2}$    (1)

where $OUT_{forecast,i}$ and $OUT_{actual,i}$ are the output forecast by the neural network and the actual (desired) output, respectively, of the i-th training example. The set of training examples (input/output pairs) defines the training set or learning set. For best results, the training set should adequately represent all expected variations in the complete set of data.
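As a quick illustration (my own sketch, not code from the paper), equation (1) computed over an array of forecast/actual pairs:

```python
import numpy as np

def mse(out_forecast, out_actual):
    """Mean square error of equation (1): average over the N training
    examples (rows) of the squared output error; outputs may be vectors."""
    diff = np.asarray(out_forecast) - np.asarray(out_actual)
    return np.sum(diff ** 2) / len(diff)

# Example: N = 2 training examples with 3-component output vectors.
forecast = [[31000.0, 13400.0, 7600.0], [25500.0, 11600.0, 7800.0]]
actual   = [[31200.0, 13500.0, 7700.0], [25600.0, 11500.0, 7770.0]]
print(mse(forecast, actual))
```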
A recursive algorithm for adjusting the weights can be developed such that the error defined by (1) is minimized. Equations (2) and (3) are recursive training equations based on the generalized delta rule, and the corresponding algorithm is called gradient descent back propagation.

$\Delta w_{pj,qk}(n+1) = lr \cdot \delta_{qk} \cdot OUT_{pj} + m \cdot \Delta w_{pj,qk}(n)$    (2)

$w_{pj,qk}(n+1) = w_{pj,qk}(n) + \Delta w_{pj,qk}(n+1)$    (3)

where:
n : the number of the current iteration of the training algorithm;
$w_{pj,qk}(n)$ : the value of the weight that connects neuron p of layer j with neuron q of layer k during iteration n;
$\Delta w_{pj,qk}(n)$ : the variation in the value of weight $w_{pj,qk}(n)$ during iteration n;
$\delta_{qk}$ : the value of $\delta$ (delta coefficient) for neuron q of layer k;
$OUT_{pj}$ : the output of neuron p of layer j;
lr : the learning rate;
m : the momentum.

The value of $\delta$ is calculated differently depending on the specific location of the weight under consideration. Equation (4) is the formula for calculating $\delta$ for any weight that connects a hidden-layer neuron to an output-layer neuron:

$\delta_{qk} = \frac{2}{N} \cdot OUT_{qk} \cdot (1 - OUT_{qk}) \cdot (OUT_{actual,qk} - OUT_{qk})$    (4)

where layer k is the output layer, $OUT_{actual,qk}$ is the actual (desired) output of neuron q of the output layer k, and N is the number of training examples in the training set. The values in (4) are known from the training set. The calculated output of the network is compared to the actual value to generate an error signal, and the error signal is propagated back through the neural network to adjust the weights, as shown in (2) and (3).

For neurons in any layer other than the output layer, however, an error value is not directly obtainable, because no desired output value is given for these internal neurons as part of the training set. The error values for any neurons other than the output neurons are calculated as weighted sums of the output-layer errors:

$\delta_{pj} = OUT_{pj} \cdot (1 - OUT_{pj}) \cdot \sum_{q=1}^{Q} \delta_{qk} \cdot w_{pj,qk}$    (5)

where Q is the number of neurons of the output layer.

The coefficient lr in (2) is called the learning rate and directly controls how much the calculated error values are allowed to change the weights; it is typically selected between 0.01 and 1.0. The coefficient m in (2) is called momentum and allows the weight updates at one iteration to utilize information from previous error values; it is also selected between 0.01 and 1.0. The momentum term helps avoid settling into a local minimum. A sketch of these update rules in code is given below.
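The following NumPy sketch is my own illustration of equations (2)-(5) for a single training example; the variable names follow the paper (lr, m, delta, OUT), but the functions themselves and the default values are assumptions, not the authors' implementation.

```python
import numpy as np

def output_deltas(out_qk, out_actual_qk, N):
    """Delta coefficients for the output layer, equation (4);
    N is the number of training examples."""
    return (2.0 / N) * out_qk * (1.0 - out_qk) * (out_actual_qk - out_qk)

def hidden_deltas(out_pj, delta_qk, W_out):
    """Delta coefficients for a hidden layer, equation (5):
    a weighted sum of the Q output-layer deltas."""
    return out_pj * (1.0 - out_pj) * (W_out.T @ delta_qk)

def weight_update(W, dW_prev, delta_qk, out_pj, lr=0.1, m=0.5):
    """Momentum update of equations (2) and (3) for the weights that
    connect layer j (outputs out_pj) to layer k (deltas delta_qk)."""
    dW = lr * np.outer(delta_qk, out_pj) + m * dW_prev  # eq. (2)
    return W + dW, dW                                   # eq. (3)
```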
The recursive training algorithm (set n = n + 1) is executed until the network satisfactorily predicts the output values. Common stopping criteria involve monitoring either the mean square error or the maximum error, or both, and stopping when the value is less than a specified tolerance. The selected tolerance is very problem dependent and may or may not actually be achievable. There is no mathematical proof that the back propagation training algorithm will ever converge within a given tolerance; the only guarantee is that any change of the weights will not increase the total error. Note that the inclusion of the momentum term may allow the error as defined in (1) to temporarily increase if the optimization process is moving away from a local minimum.

III. LEVENBERG-MARQUARDT FORMULATION FOR TRANSFORMER

The LM algorithm has been used in function approximation. Basically it consists in solving the equation

$(J^{T}J + \lambda I)\,\delta = J^{T}E$    (6)

where J is the Jacobian matrix for the system, $\lambda$ is Levenberg's damping factor, $\delta$ is the weight update vector that we want to find, and E is the error vector containing the output errors for each input vector used in training the network. The vector $\delta$ tells us by how much we should change the network weights to achieve a (possibly) better solution. The matrix $J^{T}J$ is also known as the approximated Hessian. The damping factor $\lambda$ is adjusted at each iteration and guides the optimization process: if the reduction of E is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, $\lambda$ can be increased, giving a step closer to the gradient descent direction.

Algorithm (a code sketch of one iteration follows the list):
1. Compute the Jacobian J (by using finite differences or the chain rule).
2. Compute the error gradient: $g = J^{T}E$.
3. Approximate the Hessian using the cross product of the Jacobian: $H = J^{T}J$.
4. Solve $(H + \lambda I)\,\delta = g$ to find $\delta$.
5. Update the network weights w using $\delta$.
6. Recalculate the sum of squared errors using the updated weights.
7. If the sum of squared errors has not decreased, discard the new weights, increase $\lambda$, and go to step 4.
8. Else decrease $\lambda$ and stop.
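The eight steps above map almost line-for-line onto code. The sketch below is an illustrative single LM iteration built around equation (6); `jacobian` and `residuals` are assumed helper functions (one row of J per output error, one column per weight), and the factor-of-10 adjustment of $\lambda$ is a common heuristic, not something specified in the paper.

```python
import numpy as np

def lm_step(w, jacobian, residuals, lam):
    """One Levenberg-Marquardt iteration following equation (6)."""
    J = jacobian(w)                     # step 1: Jacobian at current weights
    E = residuals(w)                    # current error vector
    g = J.T @ E                         # step 2: error gradient
    H = J.T @ J                         # step 3: approximated Hessian
    delta = np.linalg.solve(H + lam * np.eye(len(w)), g)  # step 4
    w_new = w + delta                   # step 5: update the weights
    # steps 6-8: keep the step and relax lambda if the sum of squared
    # errors decreased, otherwise discard it and increase the damping.
    if np.sum(residuals(w_new) ** 2) < np.sum(E ** 2):
        return w_new, lam / 10.0
    return w, lam * 10.0
```

Iterating this step until the sum of squared errors falls below the stopping tolerance gives the full training loop; small values of $\lambda$ make the step Gauss-Newton-like, while large values make it behave like gradient descent.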
IV. SIMULATION

For network learning, a set of input vectors (P) and output vectors (T) is needed. Considering the data for the 63/20 kV transformer type, extracted from transformer manufacturing over the last 4 years, the simulation has been performed for the following case. Tables II and III list the 24 input vectors and 24 output vectors used for network learning.

TABLE II. INPUTS FOR TRANSFORMER 63/20 KV

Input  Short-circuit impedance (%)  Installation height  Volt per turn  Environment temperature
P1     8      1000   87.719    50
P2     10     1000   76.336    55
P3     10     1000   84.034    50
P4     10     1000   60.79     45
P5     10     1000   68.027    40
P6     12     1500   68.027    40
P7     12     2200   54.795    50
P8     12.5   1364   99.502    40
P9     12.5   1500   54.201    40
P10    12.5   1500   67.34     40
P11    12.5   1500   76.923    50
P12    12.5   1500   106.952   45
P13    12.5   1700   97.087    45
P14    12.5   1900   49.948    39
P15    13     2000   66.67     50
P16    13.5   1000   79.94     47
P17    13.5   1500   75.76     45
P18    13.5   1500   75.785    40
P19    13.5   1700   37.88     40
P20    13.5   1700   47.17     55
P21    13.5   1700   66.007    42
P22    13.5   2000   46.62     50
P23    13.7   1500   75.753    55
P24    14     1000   121.212   45

TABLE III. OUTPUTS FOR TRANSFORMER 63/20 KV

Output  Weight of iron  Weight of oil  Weight of copper
T1      31200   13500   7700
T2      25600   11500   7770
T3      27000   11400   7094
T4      22100   8500    7000
T5      22700   9900    6894
T6      24000   10600   8000
T7      15100   7500    4124
T8      38000   18000   11720
T9      15250   7700    8667
T10     22400   10300   6891
T11     30160   13700   10000
T12     45470   19000   9765
T13     33200   14100   6600
T14     17450   8800    9700
T15     25562   11820   8950
T16     28750   11850   9500
T17     27500   11110   7387
T18     25600   10630   2780
T19     9500    7250    6300
T20     16700   8900    8479
T21     27000   12250   5199
T22     15550   8550    7550
T23     25400   12250   17450
T24     53100   2550    8479

A two-layer feed-forward network with sigmoid hidden neurons and linear output neurons has been used. The network has been trained with the Levenberg-Marquardt back propagation algorithm. The number of neurons in the hidden layer is twenty. A sketch of this setup follows.
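To make the setup concrete, the sketch below shows how the training data and network could be assembled. It is a reconstruction under stated assumptions, not the authors' code (they appear to have used a MATLAB-style trainlm workflow): only the first two rows of each table are reproduced, and the min-max scaling of inputs and targets is my assumption.

```python
import numpy as np

# Input vectors P (Table II): impedance %, height, volt per turn, temperature.
# Output vectors T (Table III): weight of iron, oil, copper.
# Only the first two of the 24 rows are reproduced here.
P = np.array([[8.0,  1000.0, 87.719, 50.0],    # P1
              [10.0, 1000.0, 76.336, 55.0]])   # P2
T = np.array([[31200.0, 13500.0, 7700.0],      # T1
              [25600.0, 11500.0, 7770.0]])     # T2

# Scale features and targets to [0, 1] so the sigmoid hidden layer
# operates in a sensible range (this scaling choice is an assumption).
def minmax(a):
    lo, hi = a.min(axis=0), a.max(axis=0)
    return (a - lo) / np.where(hi > lo, hi - lo, 1.0)

P_s, T_s = minmax(P), minmax(T)

# 4-20-3 network as described above: sigmoid hidden, linear output.
rng = np.random.default_rng(1)
weights = {"W1": rng.normal(size=(20, 4)), "b1": np.zeros(20),
           "W2": rng.normal(size=(3, 20)), "b2": np.zeros(3)}
# Training would then iterate lm_step (Section III sketch) over these weights.
```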
V. RESULT AND DISCUSSION

[Figure 3: Mean Square Error]

The performance curve is shown in Figure 3. In this figure the mean squared error becomes small as the number of epochs increases. The test set error and the validation set error have similar characteristics, and no significant overfitting has occurred by iteration 6 (where the best validation performance occurs).

[Figure 4: Prediction of weight of main material of transformer during training analysis]

The output has tracked the targets very well for training in the estimation of the weight of the main materials of the transformer. The regression value is close to one, which indicates a close correlation between outputs and targets.

[Figure 5: Prediction of weight of main material of transformer during test analysis]

[Figure 6: Regression plot of weight of main material of transformer]

The output tracks the targets very well for training, testing, and validation, and the R-value is over 0.95 for the total response.

CONCLUSIONS

The major part of a transformer's cost is related to its raw materials, so the amount of raw material used under various conditions has been used in the cost analysis process. This paper presented a new method to estimate the weight of the main materials (weight of copper, weight of iron and weight of oil) for 63/20 kV transformers. The method is based on a two-layer feed-forward network with a sigmoid transfer function in the hidden layer and a linear transfer function in the output neurons. The Levenberg-Marquardt (LM) algorithm is used to adjust the parameters of the MPNN. The required training data for the MPNN are the information obtained from the transformer manufacturing company over the last 4 years. The advantage of using an ANN in design and optimization is that the ANN needs to be trained only once; after the completion of training, the ANN gives the transformer weights without any iterative process. Thus, this model can be used confidently for the design, cost estimation and development of transformers. The developed model has a very fast, reliable and robust structure.

REFERENCES

[1] P. S. Georgilakis, "Recursive genetic algorithm-finite element method technique for the solution of transformer manufacturing cost minimization problem", IET Electr. Power Appl., vol. 3, no. 6, pp. 514-519, 2009.
[2] P. S. Georgilakis, M. A. Tsili and A. T. Souflaris, "A Heuristic Solution to the Transformer Manufacturing Cost Optimization Problem", JAPMED'4 - 4th Japanese-Mediterranean Workshop on Applied Electromagnetic Engineering for Magnetic, Superconducting and Nano Materials, Poster Session, Paper 103_PS_1, September 2005, pp. 83-84.
[3] M. K. Srivastava, "An innovative method for design of distribution transformer", e-Journal of Science & Technology (e-JST), April 2009, pp. 49-54.
[4] L. H. Geromel and C. R. Souza, "The application of intelligent systems in power transformer design", IEEE Conference, 2002, pp. 1504-1509.
[5] Yang Qiping, Xue Wude and Lan Zida, "Transformer Insulation Aging Diagnosis and Service Life Evaluation", Transformer [J], vol. 41, no. 2.
[6] T. Matsui, Y. Nakahara, K. Nishiyama, N. Urabe and M. Itoh, "Development of Remaining Life Assessment for Oil-immersed Transformer Using Structured Neural Networks", ICROS-SICE International Joint Conference, August 2009, pp. 1855-1858.
[7] M. R. Zaman and M. A. Rahman, "Experimental testing of the artificial neural network based protection of power transformers", IEEE Trans. Power Del., vol. 13, no. 2, pp. 510-517, Apr. 1998.
[8] E. I. Amoiralis, P. S. Georgilakis and A. T. Gioulekas, "An Artificial Neural Network for the Selection of Winding Material in Power Transformers", Springer-Verlag Berlin Heidelberg, 2006, pp. 465-468.
[9] K. Shaban, A. EL-Hag and A. Matveev, "Predicting Transformers Oil Parameters", IEEE Electrical Insulation Conference, Montreal, QC, Canada, 31 May - 3 June 2009, pp. 196-199.