Basics Of Neural Network Analysis
 

Neural network analysis can be used to predict the performance characteristics of formulations or multi-step processes -- even when there are a large number of variables with complex interactions.

    Basics Of Neural Network Analysis – Presentation Transcript

    • BASICS OF NEURAL NETWORK ANALYSIS: Multilayer Perceptron Neural Networks and Kohonen Neural Networks
    • CONTENTS
      • What is a Neural Network?
      • What can it do for me?
      • Advantages and Disadvantages
      • Two Common Types of Neural Networks
        – Multilayer Perceptron: a “black box” model predicts output values
        – Kohonen Classification: experimental cases are classified into groups
      • Training Neural Networks
    • What is a Neural Network? A neuron is a function, Y = f(X), with input X and output Y. Neurons are connected by synapses. A synapse multiplies a neuron’s output by a weighting factor, W, so the next neuron receives WY and computes Z = g(WY).
    • The function in a neuron can be linear or nonlinear. A typical nonlinear function is the Sigmoid function, f(x) = 1 / (1 + e^(−x)).
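
A minimal Python sketch (not part of the original presentation) of the two ideas above – a sigmoid neuron and a synapse that scales its output by a weight W:

```python
import math

def sigmoid(x):
    """Typical nonlinear neuron function: maps any input to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(x, f=sigmoid):
    """A neuron is just a function Y = f(X)."""
    return f(x)

def synapse(y, w):
    """A synapse multiplies a neuron's output by a weighting factor W."""
    return w * y

# Chain: X -> neuron -> synapse(W) -> next neuron
X, W = 0.8, 1.5            # example values, chosen only for illustration
Y = neuron(X)              # Y = f(X)
Z = neuron(synapse(Y, W))  # Z = g(WY); here g is also a sigmoid
print(Y, Z)
```
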
    • Neural networks are trained with cases
      • What is a case? A case is an experiment with one or more inputs (controlled variables) and one or more outputs (results or observations).
      • Example – Inputs: temp 298 K, initial concentration 1.0 g/l, time 7 days; Outputs: final concentration 0.9 g/l, degradation product 0.15 g/l.
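
The example case above could be represented in code along the following lines (the field names are illustrative, not from the slides):

```python
# One experimental "case": controlled inputs and observed outputs.
case = {
    "inputs":  {"temp_K": 298.0, "initial_conc_g_per_l": 1.0, "time_days": 7.0},
    "outputs": {"final_conc_g_per_l": 0.9, "degradation_product_g_per_l": 0.15},
}

# For training, cases are usually flattened into numeric vectors:
x = list(case["inputs"].values())   # [298.0, 1.0, 7.0]
y = list(case["outputs"].values())  # [0.9, 0.15]
```
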
    • When a neural network is “trained” with different cases, the parameters of the neuronal functions and the synaptic weighting factors are adjusted for the best “fit”. The inputs are x1 through xp and the outputs are y1 through ym. The w-values are the synaptic weighting factors, and the u-values are the weighted sums feeding each neuron.
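
To make that notation concrete, here is a minimal NumPy sketch of a forward pass through one hidden layer; the layer sizes and the sigmoid activation are assumptions for illustration. Training would adjust W1 and W2 so that the predicted outputs best fit the observed outputs of the training cases.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W1, b1, W2, b2):
    """One hidden layer: u-values are weighted sums, outputs are y1..ym."""
    u1 = W1 @ x + b1   # weighted sums entering the hidden neurons
    h = sigmoid(u1)    # hidden-neuron outputs
    u2 = W2 @ h + b2   # weighted sums entering the output neurons
    return u2          # predicted outputs (linear output layer)

rng = np.random.default_rng(0)
p, hidden, m = 3, 5, 2   # 3 inputs, 5 hidden neurons, 2 outputs (example sizes)
W1, b1 = rng.normal(size=(hidden, p)), np.zeros(hidden)
W2, b2 = rng.normal(size=(m, hidden)), np.zeros(m)

x = np.array([298.0, 1.0, 7.0])    # inputs from the example case
print(forward(x, W1, b1, W2, b2))  # untrained prediction of y1..ym
```
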
    • What can a neural network do for me?
      • Analyze data with a large number of variables with complex relationships.
      • Develop formulations or multi-step processes.
      • Compare performance characteristics of multiple formulations or processes.
      • Analyze experimental data even when data points are missing or not in a balanced design.
    • Advantages
      • No need to propose a model prior to data analysis.
      • Can handle variables with very complex interactions.
      • No assumption that inputs and outputs are normally distributed.
      • More robust to noise.
      • No need to pre-determine important variables and interactions with a Design of Experiments.
    • Disadvantages
      • Need a lot of data: (Number of Training Cases) ≈ 10 × (Number of Synapses). (A worked example follows below.)
      • Output variables are not expressed as analytic functions of the input variables.
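
As a worked example of the rule of thumb above (a rough guideline, not an exact requirement), a fully connected network with p inputs, h hidden neurons, and m outputs has roughly p·h + h·m synapses:

```python
def recommended_cases(p, h, m, factor=10):
    """Rule of thumb: training cases ~ 10 x number of synapses."""
    synapses = p * h + h * m   # fully connected, ignoring bias terms
    return factor * synapses

# e.g. 3 inputs, 5 hidden neurons, 2 outputs -> 25 synapses -> ~250 cases
print(recommended_cases(3, 5, 2))
```
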
    • Training Kohonen Neural Networks and Multilayer Perceptron Neural Networks
      • A portion of the cases are randomly selected to be training cases – typically about 70%.
      • A portion of the cases are randomly selected to be verification cases – typically about 20%.
      • The remainder are test cases – typically about 10%. (A sketch of this split follows below.)
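
A minimal sketch of the random 70/20/10 split described on this slide (the percentages follow the slide; the shuffling approach is an assumption):

```python
import random

def split_cases(cases, seed=0):
    """Randomly split cases into ~70% training, ~20% verification, ~10% test."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)
    n = len(cases)
    n_train = int(0.7 * n)
    n_verify = int(0.2 * n)
    train = cases[:n_train]
    verify = cases[n_train:n_train + n_verify]
    test = cases[n_train + n_verify:]
    return train, verify, test

train, verify, test = split_cases(range(100))
print(len(train), len(verify), len(test))  # 70 20 10
```
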
    • Teaching the neural network with just the training cases will result in “over-fitting” the data.
    • So, the verification cases are added.
    • Then the network is “retrained” with the verification cases, and the final model is the result.
    • Finally, the test cases are used to determine how well the “black box” model predicts the outputs. The outputs of a Kohonen Neural Network will be the different “classes” into which the cases have been classified. The outputs of a Multilayer Perceptron Neural Network will be continuous variables representing the performance characteristics of all the formulations or all the multi-step processes. (Remember, each formulation or process is a “case”.)
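
The train/verify/test workflow on the last few slides corresponds to what is commonly called early stopping followed by a hold-out evaluation. A sketch using scikit-learn's MLPRegressor as a stand-in multilayer perceptron is shown below; the presentation does not name a specific tool, and the synthetic data and parameter values are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))                                  # placeholder inputs
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=500)  # placeholder outputs

# Hold out ~10% as test cases; the rest is used for training + verification.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

# early_stopping=True sets aside a verification ("validation") subset and stops
# training when its error stops improving, which limits over-fitting.
model = MLPRegressor(hidden_layer_sizes=(10,), early_stopping=True,
                     validation_fraction=0.22, max_iter=2000, random_state=0)
model.fit(X_rest, y_rest)

# Finally, the test cases measure how well the "black box" predicts new outputs.
print("test R^2:", model.score(X_test, y_test))
```
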