“Neural Networks with Self-Organizing Maps applied to Process Control”
A new methodology for process control with neural networks, comparing results from traditional CUSUM, EWMA, and Shewhart charts against neural networks with self-organizing maps.


Juan Cevallos Ampuero (1)

ABSTRACT

This research compares process control using Artificial Neural Networks (ANN) with Self-Organizing Maps (SOM) against Shewhart, CUSUM, and EWMA charts. An example was selected to which both the traditional control-chart techniques and the ANN-SOM were applied. We analyzed different types of neural networks and found that ANNs with Self-Organizing Maps yield very interesting results for learning about processes. Just as the CUSUM and EWMA charts have some advantages over Shewhart charts and vice versa, ANNs with Self-Organizing Maps have advantages that justify their use. Control with ANN-SOM reveals many aspects of the process that cannot be known with the previous techniques, and it can work with two or three variables at once, which the traditional control charts, limited to one variable, cannot do.

KEYWORDS: Process control, Shewhart control charts, CUSUM chart, EWMA chart, neural networks with Self-Organizing Maps, Kohonen ANN.

INTRODUCTION

Process control needs to be improved in order to obtain better quality and lower costs. For this reason, efforts are being made to refine the tools in use today, such as Shewhart charts, CUSUM (Cumulative Sum) charts, and EWMA (Exponentially Weighted Moving Average) charts. Artificial neural networks are highly versatile problem-solving tools and can therefore be applied to process control; precisely with this objective this study was carried out. Our hypothesis is that the use of ANNs should allow better process control than Shewhart, CUSUM, and EWMA charts.

CONTROL WITH SHEWHART CHARTS

Shewhart charts are based on the normal probability distribution and the central limit theorem.
Given samples of (generally) 4 to 6 units, averages and ranges are estimated from about 20 to 30 samples, and control limits are calculated in order to detect the presence of assignable causes, which produce variation greater than three standard deviations ($\pm 3\sigma$, corresponding to an error probability of 0.27%). The control limits for the chart of averages are determined with the following equations:

Center line: $CL = \bar{\bar{x}} = \frac{\sum_{i=1}^{K} \bar{x}_i}{K}$; upper and lower control limits: $UCL, LCL = \bar{\bar{x}} \pm A_2 \bar{R}$

where $K$ is the number of samples, $A_2$ is a constant taken from a table, and $\bar{R}$ is the average of the sample ranges.

Figure 1. Shewhart Chart (sample means, in hours, plotted against sample number, with center line and control limits).

The process is out of control when there are points outside the control limits, which represent the mean $\pm 3\sigma$. Many academics and authors add further out-of-control rules, such as seven consecutive points in particular zones within the control limits.

__________________________________________________________________
(1) Ph.D. in Engineering. Professor at the Faculty of Industrial Engineering, UNMSM. Senior Member of ASQ. E-mail: jceval@terra.com.pe / jcevallosa@unmsm.edu.pe
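The limit computation above can be sketched in a few lines. This is an illustrative Python sketch (the paper itself works in Minitab and MATLAB); the subgroup data below are made up, and the $A_2$ values are the standard tabulated control-chart constants for small subgroup sizes.

```python
# Sketch of the X-bar chart limits: CL = grand mean, UCL/LCL = CL +/- A2 * Rbar.
# A2 is the standard tabulated constant for the subgroup size n.
A2 = {2: 1.880, 3: 1.023, 4: 0.729, 5: 0.577, 6: 0.483}

def xbar_limits(sample_means, sample_ranges, n):
    """Center line and control limits for the chart of averages."""
    xbarbar = sum(sample_means) / len(sample_means)   # grand mean (center line)
    rbar = sum(sample_ranges) / len(sample_ranges)    # average sample range
    half_width = A2[n] * rbar
    return xbarbar - half_width, xbarbar, xbarbar + half_width

# Hypothetical subgroups of size 4:
means = [10.1, 9.9, 10.0, 10.2, 9.8]
ranges = [0.5, 0.4, 0.6, 0.5, 0.5]
lcl, cl, ucl = xbar_limits(means, ranges, n=4)
```

A point plotting outside [lcl, ucl] would then signal an assignable cause.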
CONTROL WITH CUSUM CHARTS

Faced with the limitations of Shewhart charts in detecting small process variations, the CUSUM and EWMA charts were developed. CUSUM charts, or cumulative-sum charts, are chronological plots of the cumulative sum of deviations from a reference value. The algorithmic (tabular) CUSUM detects shifts of the form

$u_1 = u_0 \pm \delta\sigma$

by accumulating, for each observation $x_i$:

$C_i^+ = \max[0,\; (x_i - u_0) - k + C_{i-1}^+]$
$C_i^- = \max[0,\; -(x_i - u_0) - k + C_{i-1}^-]$

where the reference value is

$k = \frac{u_1 - u_0}{2} = \frac{\delta\sigma}{2}$

The control limits lie at a distance $H = h\sigma$ (for example, $H = 5\sigma$). The optimal ratio between $k$ and $h$ is given by Hawkins' algorithm, summarized in the following table (pairs chosen to keep the same in-control average run length, i.e. the ARL at $\delta = 0$):

  k =  0.25   0.5    0.75   1      1.25   1.5
  h =  8.01   4.77   3.34   2.52   1.99   1.61

In the case of the CUSUM V-mask (a pointer with two angled control limits), the mask is placed on the chart so that its vertex P lies at the last plotted point. The distance $d$ and the angle $\theta$ are the design parameters of the mask, as in the following figure:

Figure 2. CUSUM V-mask (design parameters: distance d, angle θ, horizontal spacing A).

Where:
A = horizontal distance between successive points, measured in units of the vertical axis; it depends on the scale of the chart.
δ = sensitivity of the analysis: the size of the shift we want to detect, in numbers of σ. Example: to detect a mean increase from $u_0$ to $u_1$: $u_1 = u_0 + \delta\sigma$.
α = probability of a Type I error (a signal when the process is in control).
β = probability of a Type II error (failing to detect a shift of δ).
ñ = average sample size; for charts of individual observations, ñ = 1.

Note: authors such as Montgomery recommend the algorithmic CUSUM. CUSUM charts are more sensitive than Shewhart charts, as can be seen in the following table of ARL(δ) for comparable CUSUM and Shewhart charts.

Table 1. ARL of comparable CUSUM and Shewhart charts
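The tabular CUSUM recursion above is straightforward to implement. The following is an illustrative Python sketch (not the paper's code); the target, data, and the k = 0.5, h = 4.77 pair (taken from the table above) are example values.

```python
# Tabular (algorithmic) CUSUM:
#   C+_i = max(0, x_i - (u0 + k) + C+_{i-1})   accumulates upward deviations
#   C-_i = max(0, (u0 - k) - x_i + C-_{i-1})   accumulates downward deviations
# A signal occurs when either sum exceeds the decision interval H = h*sigma.

def tabular_cusum(x, u0, k, h_sigma):
    """Return the C+ and C- sequences and the indices of signaling points."""
    c_plus, c_minus, signals = [], [], []
    cp = cm = 0.0
    for i, xi in enumerate(x):
        cp = max(0.0, xi - (u0 + k) + cp)
        cm = max(0.0, (u0 - k) - xi + cm)
        c_plus.append(cp)
        c_minus.append(cm)
        if cp > h_sigma or cm > h_sigma:
            signals.append(i)
    return c_plus, c_minus, signals

# Example: standardized data, k = delta*sigma/2 = 0.5, H = 4.77 (sigma = 1)
data = [0.1, 0.2, 1.2, 1.4, 1.6, 1.5]
cp, cm, out = tabular_cusum(data, u0=0.0, k=0.5, h_sigma=4.77)
```

With this short series the upper sum grows but stays below H, so no signal is raised yet.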
  δ (σ)   CUSUM   Shewhart
  0       500     500
  0.5     30      214.5
  1       10.5    65.2
  1.5     5.4     18.2
  2       3.4     7.35
  3       2.6     2.2

CONTROL WITH EWMA CHARTS (EXPONENTIALLY WEIGHTED MOVING AVERAGE)

This chart is similar to the CUSUM chart: the values of past observations are accumulated in each period. The variable plotted in each period is an average of the current observation and the previous ones, giving more weight to the most recent observations. The moving average is defined as

$Y_i = \lambda X_i + (1 - \lambda) Y_{i-1}$

where $Y_0 = u_0$ (or $Y_0 = \bar{x}$) and the parameter λ satisfies $0 < \lambda \le 1$. Substituting recursively over time:

$Y_i = \lambda \sum_{j=0}^{i-1} (1 - \lambda)^j x_{i-j} + (1 - \lambda)^i Y_0$

Moreover:

$E(Y_i) = u_0$
$\mathrm{Var}(Y_i) = \sigma^2 \frac{\lambda [1 - (1 - \lambda)^{2i}]}{2 - \lambda}$

The control limits are:

$UCL = E(Y_i) + K \sqrt{\mathrm{Var}(Y_i)}$
$CL = E(Y_i)$
$LCL = E(Y_i) - K \sqrt{\mathrm{Var}(Y_i)}$

For K = 3 we have:

$UCL = u_0 + 3\sigma \sqrt{\frac{\lambda [1 - (1 - \lambda)^{2i}]}{2 - \lambda}}$
$CL = u_0$
$LCL = u_0 - 3\sigma \sqrt{\frac{\lambda [1 - (1 - \lambda)^{2i}]}{2 - \lambda}}$

For large $i$, $(1 - \lambda)^{2i} \to 0$ and the limits approach:

$UCL = u_0 + 3\sigma \sqrt{\frac{\lambda}{2 - \lambda}}$
$CL = u_0$
$LCL = u_0 - 3\sigma \sqrt{\frac{\lambda}{2 - \lambda}}$

The most commonly used values of λ are 0.05, 0.10 and 0.20; the smaller λ, the more sensitive the EWMA chart. The EWMA chart below shows data from a process.
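The EWMA statistic and its K = 3 limits can be sketched as follows. This is an illustrative Python sketch (the paper uses Minitab); the data and parameter values here are examples only.

```python
import math

def ewma(x, lam, y0):
    """EWMA sequence Y_i = lam * x_i + (1 - lam) * Y_{i-1}."""
    ys, y = [], y0
    for xi in x:
        y = lam * xi + (1 - lam) * y
        ys.append(y)
    return ys

def ewma_limits(u0, sigma, lam, i, K=3):
    """Time-varying K-sigma limits; for large i the bracket tends to 1."""
    half = K * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return u0 - half, u0 + half

# Example values: target 50.0, sigma 0.1, lambda 0.4
ys = ewma([50.1, 49.9, 50.2], lam=0.4, y0=50.0)
lcl, ucl = ewma_limits(u0=50.0, sigma=0.1, lam=0.4, i=1000)
```

With λ = 0.4 and σ = 0.1 the asymptotic half-width is 3 × 0.1 × √(0.4/1.6) = 0.15, so the limits are 50.0 ± 0.15.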
Figure 3. EWMA Chart.

The EWMA chart is slightly less sensitive than the CUSUM chart for small deviations. The following table compares the ARL(δ) of the three types of chart:

Table 2. ARL of EWMA, CUSUM and Shewhart charts

  δ      EWMA    CUSUM   Shewhart
  0      500     500     500
  0.5    36      30      214.5
  1      10.20   10.5    65.2
  1.5    6.0     5.4     18.2
  2      4.05    3.4     7.35
  3      2.63    2.6     2.2

CONTROL CHARTS WITH ANN AND PROCESS CONTROL WITH ANN

Among the various types of ANN analyzed, the SOM (Self-Organizing Map) was selected. These are unsupervised, competitive neural networks whose neurons are distributed at regular intervals on a grid of one, two or more dimensions. During training, each data vector is presented to the network and compared with the weight vector of each neuron. The neuron whose weight vector differs least from the data vector is the winner (BMU, Best Matching Unit), and it and its neighbors have their weight vectors modified. Similarity is usually measured with the Euclidean distance. The trained ANN-SOM is then used to classify new points into groups, so that, by knowing which group a new datum belongs to, we can tell how close or far it is from the central values of the process and from the customer's specifications. The equations governing the ANN-SOM are:

$W_i(q) = W_i(q-1) + \alpha \,[p(q) - W_i(q-1)] = (1 - \alpha)\, W_i(q-1) + \alpha\, p(q)$

for every $i \in N_{i^*}(d)$, where $N_{i^*}(d)$ contains the indices of all neurons located within a radius $d$ of the winning neuron $i^*$: $N_{i^*}(d) = \{\, j : d_{i^* j} \le d \,\}$.

PROCESS CONTROL WITH ANN

Process control requires a standard to meet, and data must be obtained to compare actual behavior against that standard.
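The SOM update rule above can be sketched with a minimal one-dimensional map. This illustrative Python sketch is not the paper's MATLAB code: the data, the learning-rate decay, and the fixed neighborhood radius are all assumptions made for the example.

```python
import random

def train_som_1d(data, n_units=9, alpha=0.5, radius=1, epochs=50, seed=0):
    """Minimal 1-D SOM: BMU by Euclidean distance, then
    W_i(q) = (1 - alpha) * W_i(q-1) + alpha * p(q) for neurons within `radius`
    of the winner."""
    random.seed(seed)
    lo, hi = min(data), max(data)
    # Initialize the weights evenly over the data range
    w = [lo + (hi - lo) * i / (n_units - 1) for i in range(n_units)]
    for _ in range(epochs):
        for p in random.sample(data, len(data)):       # shuffled presentation
            bmu = min(range(n_units), key=lambda i: abs(w[i] - p))  # winner
            for i in range(n_units):
                if abs(i - bmu) <= radius:             # neighborhood N_bmu(d)
                    w[i] = (1 - alpha) * w[i] + alpha * p
        alpha *= 0.9                                   # decay the learning rate
    return w

weights = train_som_1d([49.9, 49.95, 50.0, 50.05, 50.1, 50.2, 50.3])
```

After training, the weights spread out to cover the data, so each unit acts as the center of one group.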
When no standard is available, because the process has no historical information from which to derive one, the data can still be collected and analyzed to interpret what is happening. One way to do this is by using the criteria of the normal distribution and the central limit theorem, which is what the Shewhart charts do. That is not the only way; other ways are the use
of the CUSUM and EWMA charts. With the same purpose we can use an ANN to perform this control, bearing in mind the peculiarities of ANNs. Our hypothesis is that with an ANN-SOM we can do something similar to the other tools, but here the data are grouped around certain centers. Some centers are near the target value, others near the process mean, and others far or very far from the target value. Obviously we want the points that fall near the target; when points fall far from the target value or from the mean, corrective action must be taken. The advantage of using an ANN is that it can work with any probability distribution and with several variables. This is exactly what is presented below, illustrated with an example.

METHODOLOGY

A data set from a process was selected, the four types of tools were applied to it, and the results were compared. The data of the example are as follows. An automatic machine fills flour packages whose weight we want to monitor. The nominal package weight is 50 grams, and we are interested in detecting level shifts of 0.2 grams. Every half hour a random sample of 4 packages is drawn and weighed. The means of the 4 packages in each sample are: 49.90, 49.91, 49.89, 50.05, 49.94, 49.95, 49.88, 49.96, 50.27, 49.87, 49.87, 50.04, 50.04, 50.04, 50.23, 50.23, 50.28, 50.13, 50.05, 50.15. From historical information, the standard deviation of the packages is known to be 0.1 grams.

RESULTS

CONTROL WITH SHEWHART CHARTS

The results were: center line = 50.034; upper limit = 50.334; lower limit = 49.734. As the accompanying chart, obtained with Minitab, shows, the process is under control.

Figure 4. Shewhart Chart. Example (Minitab Xbar chart of C1: UCL = 50.334, X̄ = 50.034, LCL = 49.734).

CONTROL WITH CUSUM CHARTS

The results were: center line = 0.0; upper limit = 0.4; lower limit = -0.4.
As the accompanying chart, obtained with Minitab, shows, points 17, 18, 19 and 20 are out of control; the process is not under control.

Figure 5. CUSUM Chart. Example (Minitab cumulative sum of C1: UCL = 0.4, LCL = -0.4).
CONTROL WITH EWMA CHARTS

The results were: center line = 50.034; upper limit = 50.1840; lower limit = 49.8840 (working with λ = 0.4). As the accompanying chart, obtained with Minitab, shows, point 17 is out of control; the process is not under control.

Figure 6. EWMA Chart. Example (Minitab EWMA chart of C1: UCL = 50.1840, X̄ = 50.034, LCL = 49.8840).

CONTROL OF PROCESSES WITH ANN

Working in MATLAB with an ANN-SOM (Self-Organizing Map), we obtained:

>> datrna'
ans =
  Columns 1 through 8
   49.9000  49.9100  49.8900  50.0500  49.9400  49.9500  49.8800  49.9600
  Columns 9 through 16
   50.2700  49.8700  49.8700  50.0400  50.0400  50.0400  50.2300  50.2300
  Columns 17 through 20
   50.2800  50.1300  50.0500  50.1500
>> max(datrna)
ans = 50.2800
>> min(datrna)
ans = 49.8700
>> mean(datrna)
ans = 50.0340
>> std(datrna)
ans = 0.1394
>> mean(datrna)+3*std(datrna)
ans = 50.4522
>> mean(datrna)-3*std(datrna)
ans = 49.6158
>> % Then we add the 3 new points just calculated, the mean and both extremes (23 points in total).
>> % With 9 centers and the 23 points we have:
>> datrna'
ans =
  Columns 1 through 8
   49.9000  49.9100  49.8900  50.0500  49.9400  49.9500  49.8800  49.9600
  Columns 9 through 16
   50.2700  49.8700  49.8700  50.0400  50.0400  50.0400  50.2300  50.2300
  Columns 17 through 23
   50.2800  50.1300  50.0500  50.1500  50.0340  50.4522  49.6158
>> % Now we create the ANN-SOM with the following command:
>> net=newsom([49.6158 50.4522],[9]);
>> wts=net.IW{1,1}
wts =
   50.0340
   50.0340
   50.0340
   50.0340
   50.0340
   50.0340
   50.0340
   50.0340
   50.0340
>> s=sim(net,datrna')
s =
   (1,1)  1
   (1,2)  1
   ...
   (1,23) 1
   (before training, all 23 points are assigned to neuron 1)
>> % Now we train the ANN with the following command:
>> net=train(net,datrna')
>> wts=net.IW{1,1}
wts =
   50.3141
   50.2425
   50.1216
   50.0592
   50.0128
   49.9402
   49.9060
   49.8504
   49.7623
>> a=sim(net,datrna')
a =
   (7,1)  1
   (7,2)  1
   (7,3)  1
   (4,4)  1
   (6,5)  1
   (6,6)  1
   (7,7)  1
   (6,8)  1
   (2,9)  1
   (8,10) 1
   (8,11) 1
   (4,12) 1
   (4,13) 1
   (4,14) 1
   (2,15) 1
   (2,16) 1
   (1,17) 1
   (3,18) 1
   (4,19) 1
   (3,20) 1
   (5,21) 1
   (1,22) 1
   (9,23) 1
>> ac=vec2ind(a)
ac =
  Columns 1 through 14
   7 7 7 4 6 6 7 6 2 8 8 4 4 4
  Columns 15 through 23
   2 2 1 3 4 3 5 1 9

Here we see that point 21 (the mean) falls in group 5; point 22, the upper extreme (mean plus 3σ), is grouped with the center at 50.3141 in group 1; and point 23, the lower extreme (mean minus 3σ), is grouped with the center at 49.7623 in group 9. The 20 process points fall in groups 1 through 8. Of points 1-11, most lie below the mean (points 10 and 11 are in group 8), while points 12-20 lie above it: points 15 and 16 are in group 2, and point 17 in extreme group 1. It follows that points 12-20 could indicate a change in the process, which needs to be investigated. It is also clear that any out-of-control point will fall into group 1 or group 9. The methodology presented yields information different from, but not contradictory to, that of the Shewhart, CUSUM and EWMA charts; we can call it complementary. A more accurate picture of the behavior of the process is obtained with MATLAB's plotsomhits graphic, presented below:
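Once trained, the nine centers can be used directly to monitor new observations: assign each point to its nearest center and flag the extreme groups. The following illustrative Python sketch uses the trained wts values printed above (group 1 highest, group 9 lowest); the helper names are hypothetical.

```python
# The nine trained SOM centers reported above (1-D example, one per group).
centers = [50.3141, 50.2425, 50.1216, 50.0592, 50.0128,
           49.9402, 49.9060, 49.8504, 49.7623]

def classify(x):
    """1-based index of the closest center (the BMU's group)."""
    return min(range(len(centers)), key=lambda i: abs(centers[i] - x)) + 1

def needs_investigation(x):
    """Groups 1 and 9 capture the mean +/- 3 sigma extremes, so they signal."""
    return classify(x) in (1, 9)

group = classify(50.034)   # the process mean
```

Classifying the mean (50.034) lands in group 5, matching the assignment of point 21 above, while the added extremes 50.4522 and 49.6158 land in the signaling groups 1 and 9.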
Figure 7. plotsomhits graph. Example with one variable.

CONTROL WITH TWO VARIABLES AT ONCE

Taking the 20 data above plus 20 data for a second characteristic of the same product, we have:

>> datdos'
ans =
  Columns 1 through 8
   49.9000  49.9100  49.8900  50.0500  49.9400  49.9500  49.8800  49.9600
   26.4000  26.2000  25.2000  26.2000  25.8000  24.2000  26.4000  25.6000
  Columns 9 through 16
   50.2700  49.8700  49.8700  50.0400  50.0400  50.0400  50.2300  50.2300
   24.8000  25.8000  26.6000  24.4000  24.8000  25.6000  24.2000  25.8000
  Columns 17 through 20
   50.2800  50.1300  50.0500  50.1500
   26.4000  26.8000  29.2000  27.4000
>> % Calculating the minimum, maximum, mean and extreme values of each variable (xx and yy):
>> min(datdosxx)
ans = 49.8700
>> max(datdosxx)
ans = 50.2800
>> mean(datdosxx)
ans = 50.0340
>> std(datdosxx)
ans = 0.1394
>> mean(datdosxx)+3*std(datdosxx)
ans = 50.4522
>> mean(datdosxx)-3*std(datdosxx)
ans = 49.6158
>> min(datdosyy)
ans =
   24.2000
>> max(datdosyy)
ans = 29.2000
>> mean(datdosyy)
ans = 25.8900
>> std(datdosyy)
ans = 1.1814
>> mean(datdosyy)-3*std(datdosyy)
ans = 22.3458
>> mean(datdosyy)+3*std(datdosyy)
ans = 29.4342
>> % As in the previous case, we now have 23 data points per variable (the mean and the upper and lower extremes are added as points 21, 22 and 23).
>> datdos'
ans =
  Columns 1 through 8
   49.9000  49.9100  49.8900  50.0500  49.9400  49.9500  49.8800  49.9600
   26.4000  26.2000  25.2000  26.2000  25.8000  24.2000  26.4000  25.6000
  Columns 9 through 16
   50.2700  49.8700  49.8700  50.0400  50.0400  50.0400  50.2300  50.2300
   24.8000  25.8000  26.6000  24.4000  24.8000  25.6000  24.2000  25.8000
  Columns 17 through 23
   50.2800  50.1300  50.0500  50.1500  50.0340  50.4522  49.6158
   26.4000  26.8000  29.2000  27.4000  25.8900  29.4342  22.3458
>> % Now we create the ANN-SOM with the following command:
>> net=newsom([49.6158 50.4522; 22.3458 29.4342],[9 9]);
>> wts=net.IW{1,1}
wts =
   50.0340  25.8900
   .......  .......
   50.0340  25.8900
   (81 identical rows: 9x9)
>> s=sim(net,datdos')
s =
   (1,1)  1
   (1,2)  1
   ...
   (1,17) 1
   (1,18) 1
   (1,19) 1
   (1,20) 1
   (1,21) 1
   (1,22) 1
   (1,23) 1
>> % Now we train the ANN with the following command:
>> net=train(net,datdos');
>> wts=net.IW{1,1}
wts =                   Group No.
   50.2093  25.5237     1
   50.2162  25.8073     2
   50.2003  25.9097     3
   50.2010  26.3140     4
   50.2261  26.4299     5
   50.1885  26.6768     6
   50.1615  27.4251     7
   50.2096  28.6128     8
   50.2415  29.2569     9
   50.2127  25.3783     10
   50.1979  25.6279     11
   50.1208  26.0144     12
   50.1395  26.2451     13
   50.1433  26.4352     14
   50.1406  26.9450     15
   50.1485  27.4070     16
   50.2010  28.5147     17
   50.2119  28.8968     18
   50.2224  24.9255     19
   50.2068  25.0337     20
   50.0998  25.6217     21
   50.0576  25.9386     22
   50.0579  26.1144     23
   50.0969  26.4862     24
   50.1332  27.0380     25
   50.1308  27.5553     26
   50.1080  27.7163     27
   50.2401  24.7878     28
   50.1499  25.1080     29
   50.0539  25.6611     30
   50.0425  25.8938     31
   50.0489  26.0473     32
   50.0799  26.5213     33
   50.0675  27.0274     34
   49.9874  26.8354     35
   49.9487  26.7563     36
   50.1449  24.5733     37
   50.1386  24.6470     38
   50.0778  25.2420     39
   50.0383  25.6873     40
   50.0290  25.8319     41
   50.0111  25.9986     42
   49.9564  26.4859     43
   49.9072  26.4947     44
   49.8915  26.5623     45
   50.0956  24.3634     46
   50.1109  24.4487     47
   50.0347  25.0623     48
   50.0054  25.4881     49
   49.9734  25.6726     50
   49.9257  26.0891     51
   49.9012  26.3885     52
   49.8892  26.4421     53
   49.8901  26.4769     54
   50.0709  24.1858     55
   50.1125  24.2351     56
   50.0814  24.5216     57
   49.9809  25.0046     58
   49.9446  25.4518     59
   49.9506  25.6447     60
   49.9224  26.0546     61
   49.8985  26.2931     62
   49.8962  26.3164     63
   49.9362  23.5911     64
   50.0412  24.2945     65
   49.9792  24.7681     66
   49.9394  25.1509     67
   49.9354  25.4972     68
   49.9249  25.7326     69
   49.9087  25.9658     70
   49.9086  26.1893     71
   49.9009  26.2739     72
   49.6874  22.6618     73
   49.8264  23.4251     74
   49.9487  24.2351     75
   49.9475  24.8816     76
   49.9252  25.4200     77
   49.9221  25.7224     78
   49.9090  25.8045     79
   49.9099  25.9552     80
   49.9133  26.0886     81

These are 81 centers, numbered from 1 to 81 (9x9). The centers on the border of the plotsomhits grid are groups 1-9, 73-81, and 10, 18, 19, 27, 28, 36, 37, 45, 46, 54, 55, 63, 64 and 72; these form the outer frame of the grid. The minimum and maximum weights are:

>> min(wts)
ans = 49.6874  22.6618
>> max(wts)
ans = 50.2415  29.2569

These correspond to center 9 for the maximum (components 9,1 and 9,2) and center 73 for the minimum (components 73,1 and 73,2), which shows that centers 9 and 73 hold the extremes (points 22 and 23). Let us see how the points behave:

>> a=sim(net,datdos')
a =
   (52,1) 1
   (71,2) 1
   (67,3) 1
   (23,4) 1
   (79,5) 1
   (75,6) 1
   (52,7)  1
   (60,8)  1
   (28,9)  1
   (79,10) 1
   (45,11) 1
   (46,12) 1
   (66,13) 1
   (30,14) 1
   (56,15) 1
   (2,16)  1
   (5,17)  1
   (6,18)  1
   (9,19)  1
   (16,20) 1
   (31,21) 1
   (9,22)  1
   (73,23) 1
>> ac=vec2ind(a)
ac =
  Columns 1 through 14
   52 71 67 23 79 75 52 60 28 79 45 46 66 30
  Columns 15 through 23
   56 2 5 6 9 16 31 9 73

The values are grouped into 81 groups with 81 centers (the wts). Their locations can be seen in the attached plotsomhits chart:

Figure 8. plotsomhits graph. Example with two variables.

ANALYSIS OF RESULTS

An analysis of the results shows that points 22 and 23 lie at the limits, since they fall in groups 9 and 73. Other values close to the border are: points 5 and 10 (group 79), 11 (group 45), 16 (group 2), 17 (group 5), 18 (group 6) and 19 (group 9). Therefore, points 5, 10, 11, 16, 17, 18 and 19 should be reviewed. Points 5 (49.94, 25.80) and 10 (49.87, 25.80): the values of the first variable are very low (49.94 and 49.87). Point 11 (49.87, 26.60): the value of the first variable is very low.
Point 16 (50.23, 25.8): the value of the first variable is very high. Point 17 (50.28, 26.4): the value of the first variable is very high. Point 18 (50.13, 26.8): the value of the first variable is very high. Point 19 (50.05, 29.2): the value of the second variable is very high.

In conclusion: none of the 20 points fell in groups 9 or 73, so both characteristics are under control. However, points 5, 10, 11, 16, 17 and 18 must be checked for extreme values of the first variable, and point 19 for an extreme value of the second variable. One can work in the same way with three or more variables, although for simplicity the number of clusters per characteristic could be reduced to 7 or 5.

CONCLUSIONS

1. ANN-SOM neural networks with Self-Organizing Maps can be used for process control. The basic idea is that values that belong to the process are grouped around interior centers, while values that do not belong to the process are grouped in centers at the edges of the map.
2. If extreme values are added, the corresponding centers become the extremes, and the values grouped there must be investigated, since they presumably respond to assignable causes.
3. The results obtained with the ANN-SOM are complementary and similar to those obtained with the Shewhart, CUSUM and EWMA charts. In addition, the ANN-SOM offers the advantage of working with several characteristics of a product at a time.

REFERENCES

1. HAGAN, Martin T., DEMUTH, Howard B., BEALE, Mark (1996). Neural Network Design. Thomson. China.
2. HAYKIN, S. (2005). Neural Networks. Macmillan College Publishing Company. USA.
3. ISASI, Pedro and GALVAN, Inés (2004). Redes de Neuronas Artificiales. Pearson-Prentice Hall. Spain.
4. MONTGOMERY, Douglas (2004). Design and Analysis of Experiments. Limusa Wiley. Mexico.
5. WASSERMAN, P. (1993).
Advanced Methods in Neural Computing. Van Nostrand Reinhold. USA.