                                                       Artificial Neural Network
                         INTRODUCTION

BACKGROUND:

        Many tasks that seem simple to us, such as reading a handwritten note or recognizing a face, are difficult tasks for even the most advanced computer. In an effort to increase the computer's ability to perform such tasks, programmers began designing software to act more like the human brain, with its neurons and synaptic connections. Thus the field of "artificial neural networks" was born. Rather than employing the traditional method of one central processor (such as a Pentium) carrying out many instructions one at a time, artificial neural network software analyzes data by passing it through several simulated processors which are interconnected with synaptic-like "weights".
        Once we have collected several records of the data we wish to analyze, the network runs through them and "learns" how the inputs of each record may be related to the result. After training on a few dozen cases, the network begins to organize and refine its own architecture to fit the data, much like the human brain: it learns from example.
        This "reverse engineering" technology was once regarded as the best-kept secret of large corporate, government and academic researchers.
        The field of neural networks was pioneered by Bernard Widrow of Stanford University in the 1950s.
Why would anyone want a `new' sort of computer?

What are (everyday) computer systems good at ... and not so good at?

Good at:
   • Fast arithmetic
   • Doing precisely what the programmer programs them to do

Not so good at:
   • Interacting with noisy data or data from the environment
   • Massive parallelism
   • Fault tolerance
   • Adapting to circumstances

Where can neural network systems help?

   •   where we can't formulate an algorithmic solution.
   •   where we can get lots of examples of the behaviour we require.
   •   where we need to pick out the structure from existing data.

What is a neural network?
Neural Networks are a different paradigm for computing:

   •   Von Neumann machines are based on the processing/memory
       abstraction of human information processing.


   •   Neural networks are based on the parallel architecture of animal
       brains.

Neural networks are a form of multiprocessor computer system, with




   •   Simple processing elements
   •   A high degree of interconnection
   •   Simple scalar messages
   •   Adaptive interaction between elements




       Artificial neural networks (ANNs) are programs designed to solve problems by trying to mimic the structure and function of our nervous system. Neural networks are based on simulated neurons, which are joined together in a variety of ways to form networks. A neural network resembles the human brain in the following two ways:
   • A neural network acquires knowledge through learning.
   • A neural network's knowledge is stored within the interconnection strengths, known as synaptic weights.
             Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates with one or more 'hidden layers', where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer', where the answer is output, as shown in the figure below.




[Figure: Basic structure of a neural network - patterns enter at the input layer, pass through weighted connections to the hidden layers, and emerge at the output layer]
Each layer of neurons makes independent computations on the data it receives and passes the results to the next layer(s). The next layer may in turn make its own computations and pass the data further, or it may end the computation and give the output of the overall computation. The first layer is the input layer and the last one the output layer; the layers placed between these two are the middle or hidden layers.
              A neural network is a system that emulates the cognitive abilities of the brain by establishing recognition of particular inputs and producing the appropriate outputs. Neural networks are not 'hard-wired' in a particular way; they are trained on presented inputs to establish their own internal weights and relationships, guided by feedback. Neural networks are thus free to form their own internal workings and to adapt on their own.
      Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output.
[Figure: Adjusting a neural network - the input is fed to the network (whose connections are called weights), the network's output is compared with the target, and the weights are adjusted according to the comparison]



Here, the network is adjusted based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are used to train the network.
Once a neural network is 'trained' to a satisfactory level, it may be used as an analytical tool on other data. To do this, the user no longer specifies any training runs and instead allows the network to work in forward-propagation mode only. New inputs are presented to the input layer, where they filter into and are processed by the middle layers just as though training were taking place; however, at this point the output is retained and no back-propagation occurs.
The Structure of the Nervous System
The nervous system of the human brain consists of neurons, which are interconnected to each other in a rather complex way. Each neuron can be thought of as a node, and each interconnection between neurons as an edge that has a weight associated with it, representing how much the two neurons it connects can interact.

[Figure: Nodes (neurons) connected by edges (interconnections)]

Functioning of a Nervous System
The nature of the interconnection between two neurons can be such that one neuron either stimulates or inhibits the other. An interaction can take place only if there is an edge between the two neurons. If neuron A is connected to neuron B, as below, with a weight w, then if A is stimulated sufficiently, it sends a signal to B. The signal depends on


                           A ----w----> B

the weight w and on the nature of the signal, i.e. whether it is stimulating or inhibiting; this depends on whether w is positive or negative. A neuron sends a signal only if its stimulation is more than its threshold, and if it does send a signal, it sends it to all nodes to which it is connected. The threshold for different neurons may be different. If many neurons send signals to A, the combined stimulus may be more than the threshold.
                    Next, if B is stimulated sufficiently, it may in turn trigger a signal to all the neurons to which it is connected.
                      Depending on the complexity of the structure, the overall functioning may be very complex, but the functioning of individual neurons is as simple as this. Because of this, we may dare to try to simulate it using software or even special-purpose hardware.

Major Components of an Artificial Neuron
             This section describes the seven major components that make up an artificial neuron. These components are valid whether the neuron is used for input, for output, or in one of the hidden layers.
Component 1: Weighting Factors:
           A neuron usually receives many simultaneous inputs. Each input has its own relative weight, which gives the input the impact that it needs on the processing element's summation function. These weights perform the same type of function as the varying synaptic strengths of biological neurons: in both cases, some inputs are made more important than others, so that they have a greater effect on the processing element as they combine to produce a neural response.
       Weights are adaptive coefficients within the network that determine the intensity of the input signal as registered by the artificial neuron. They are a measure of an input's connection strength. These strengths can be

modified in response to various training sets and according to a network’s
specific topology or through its learning rules.
Component 2: Summation Function:
        The first step in a processing element's operation is to compute the weighted sum of all of the inputs. Mathematically, the inputs and the corresponding weights are vectors, which can be represented as (i1, i2, ..., in) and (w1, w2, ..., wn). The total input signal is the dot, or inner, product of these two vectors. This simplistic summation function is found by multiplying each component of the i vector by the corresponding component of the w vector and then adding up all the products: input1 = i1*w1, input2 = i2*w2, etc., and the sum input1 + input2 + ... + inputn is taken. The result is a single number, not a multi-element vector.
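
As a concrete illustration (a minimal sketch in Python, with made-up input and weight values), the summation function is just this inner product:

    # Summation function as the dot (inner) product of the input vector i
    # and the weight vector w. The values here are hypothetical.
    inputs  = [0.5, 0.9, -0.3]                      # (i1, i2, i3)
    weights = [0.8, -0.1, 0.4]                      # (w1, w2, w3)

    # input1 = i1*w1, input2 = i2*w2, ..., summed into a single number
    total_input = sum(i * w for i, w in zip(inputs, weights))
    print(total_input)                              # prints 0.19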



  Geometrically, the inner product of two vectors can be considered a measure of their similarity: if the vectors point in the same direction, the inner product is at its maximum; if the vectors point in opposite directions (180 degrees out of phase), the inner product is at its minimum.
   The summation function can be more complex than just the simple sum of input and weight products. The inputs and weighting coefficients can be combined in many different ways before being passed on to the transfer function. In addition to simple product summing, the summation function can select the minimum, maximum, majority, product, or one of several normalizing algorithms. The specific algorithm for combining neural inputs is determined by the chosen network architecture and paradigm.
Component 3: Transfer Function:
      The result of the summation function, almost always the weighted sum, is transformed into a working output through an algorithmic process known as the transfer function. In the transfer function, the summation total can be compared with some threshold to determine the neural output. If the sum is greater than the threshold value, the processing element generates a signal; if the sum of the input and weight products is less than the threshold, no signal (or an inhibitory signal) is generated. Both types of response are significant.
      Consider a simple binary neuron with three inputs (X[1], X[2] and X[3]) and two outputs (O[1] and O[2]). Every neuron has a particular threshold T, and a neuron fires only if the weighted sum of its inputs exceeds that threshold:

    Sum = Σ W[i] * X[i]
    O[1] = O[2] = 1   if Sum >= T
                = 0   otherwise

[Figure: McCulloch-Pitts neuron model - inputs X[1], X[2], X[3] arrive over synaptic connections with weights W[1], W[2], W[3] at a neuron processing node that produces the output O[1]]




We can use different kinds of threshold functions, namely:
   Sigmoid function : f(x) = 1/(1 + exp(-x))
   Step function    : f(x) = 0 if x < T; f(x) = k if x >= T
   Ramp function    : f(x) = ax + b

      The transfer function could be something as simple as a choice depending upon whether the result of the summation function is positive or negative. The network could output zero and one, one and minus one, or other numeric combinations. The transfer function would then be a 'hard limiter' or step function.
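
The three threshold functions listed above can be sketched directly in code (Python; the threshold T, step height k and ramp coefficients a, b are hypothetical choices):

    import math

    T, k = 0.0, 1.0          # hypothetical threshold and step height
    a, b = 1.0, 0.0          # hypothetical ramp coefficients

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))    # f(x) = 1/(1 + exp(-x))

    def step(x):
        return k if x >= T else 0.0          # hard limiter / step function

    def ramp(x):
        return a * x + b                     # f(x) = ax + b

    # Apply each transfer function to the same weighted sum.
    x = [1.0, 0.0, 1.0]
    w = [0.5, -0.4, 0.2]
    s = sum(xi * wi for xi, wi in zip(x, w))
    print(sigmoid(s), step(s), ramp(s))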
                   stu


[Figure: Sample transfer functions -
   Hard limiter:     y = -1 for x < 0;  y = 1 for x >= 0
   Ramping function: y = 0 for x < 0;  y = x for 0 <= x <= 1;  y = 1 for x > 1
   Sigmoid:          y = 1/(1 + e^-x), or the bipolar form
                     y = 1 - 1/(1 + x) for x >= 0 and y = -1 + 1/(1 - x) for x < 0]

Component 4: Scaling and Limiting:
          After the processing element's transfer function, the result can pass through additional processes which scale and limit it. Scaling simply multiplies the transfer value by a scale factor and then adds an offset. Limiting is the mechanism which ensures that the scaled result does not exceed an upper or lower bound. This limiting is in addition to the hard limits that the original transfer function may have performed.
Component 5: Output Function (Competition):
         Each processing element is allowed one output signal, which it may send to hundreds of other neurons. This is just like the biological neuron, where there are many inputs but only one output action. Normally, the output is directly equivalent to the transfer function's result. Some network topologies, however, modify the transfer result to incorporate competition among neighboring processing elements. Neurons are allowed to compete with each other, inhibiting other processing elements unless they have great strength. Competition can occur at one or both of two levels. First, competition determines which artificial neurons will be active, or provide an output. Second, competitive inputs help determine which processing elements will participate in the learning or adaptation process.
Component 6: Error Function and Back-Propagated Value:
        In most learning networks the difference between the current output and the desired output is calculated. This raw error is then transformed by the error function to match the particular network architecture. The most basic architectures use this error directly; some square the error while retaining its sign, some cube the error, and other paradigms modify the raw error to fit their specific purposes. The artificial neuron's error is then typically propagated into the learning function of another processing element. This error term is sometimes called the current error.
    The current error is typically propagated backwards to a previous layer. This back-propagated value can be either the current error, the current error scaled in some manner (often by the derivative of the transfer function), or some desired output, depending on the network type. Normally, this back-propagated value, after being scaled by the learning function, is multiplied against each of the incoming connection weights to modify them before the next learning cycle.
Component 7: Learning Function:
     The purpose of the learning function is to modify the variable connection weights on the inputs of each processing element according to some neural-based algorithm. This process of changing the weights of the input connections to achieve some desired result could also be called the adaptation function, as well as the learning mode.

Paradigms of Learning:
    There are three broad paradigms of learning:
   • Supervised
   • Unsupervised (or self-organised)
   • Reinforcement learning (a special case of supervised learning)




   Supervised learning:
          The vast majority of artificial neural network solutions have been trained with supervision. In this mode, the actual output of a neural network is compared to the desired output. The network then adjusts its weights, which are usually set randomly to begin with, so that the next iteration, or cycle, will produce a closer match between the desired and the actual output. This learning method tries to minimize the current errors of all processing elements. This global error reduction is created over time by continuously modifying the input weights until an acceptable network accuracy is reached.
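
   A minimal sketch of this supervised cycle (Python; the linear neuron, the two training pairs and the constants are illustrative assumptions, not a prescribed algorithm):

    import random

    # Hypothetical (input, desired output) training pairs.
    data = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]
    w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]  # random start
    c = 0.1                                   # learning constant

    def actual_output(x, w):
        return sum(xi * wi for xi, wi in zip(x, w))   # a simple linear neuron

    # Each cycle nudges the weights so the actual output moves
    # toward the desired output, reducing the global error over time.
    for cycle in range(100):
        for x, d in data:
            error = d - actual_output(x, w)
            w = [wi + c * error * xi for wi, xi in zip(w, x)]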

[Figure: Block diagram of supervised learning - the input X feeds an adaptive network with weights W, producing output O; a distance generator compares O with the desired (teacher) signal d, and the distance measure ρ[d, O] supplies the learning signal]
   Unsupervised learning:
      In supervised learning the system directly compares the network output with a known correct or desired answer, whereas in unsupervised learning the desired output is not known. Unsupervised training allows the neurons to compete with each other until winners emerge; the resulting values of the winner neurons determine the class to which a particular data set belongs. Unsupervised learning is the great promise of the future: it suggests that computers could some day learn on their own in a true robotic sense. Currently, this learning method is limited to networks known as self-organizing maps, which are not in widespread use.

[Figure: Block diagram of unsupervised learning - the input x feeds an adaptive network with weights W, producing output o; no teacher signal is present]
   Reinforcement learning:
           Reinforcement learning is a form of supervised learning in which the adapting neuron receives feedback from the environment that directly influences learning.

   Learning Law:
 The following general learning rule is adopted in neural network studies: the weight vector Wi = [Wi1, Wi2, ..., Win]^t increases in proportion to the product of the input X and a learning signal r. The learning signal r is a function of Wi and X, and sometimes of the teacher's signal di. Hence we have r = r(Wi, X, di), and the increment in the weight vector produced by the learning step at time t is

    ∆Wi(t) = c r[Wi(t), X(t), di(t)] X(t)

where c is the learning constant. Thus

    Wi(t+1) = Wi(t) + c r[Wi(t), X(t), di(t)] X(t)
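
One step of this general law can be sketched as follows (Python; the learning signal r is passed in as a function, to be supplied by whichever specific rule is in use):

    # General learning law: Wi(t+1) = Wi(t) + c * r(Wi, X, di) * X.
    # 'r' is the rule-specific learning signal, a scalar.
    def learning_step(Wi, X, di, c, r):
        signal = r(Wi, X, di)
        return [wij + c * signal * xj for wij, xj in zip(Wi, X)]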
Various learning rules exist to assist the learning process. They are:
   1. Hebbian learning rule:
            This rule represents purely feed-forward, unsupervised learning.



[Figure: Hebbian learning rule - inputs X1 ... Xn with weights Wi1 ... Win feed the i-th neuron, producing output Oi; a learning signal generator forms r, and the weight increment ∆Wi is scaled by the learning constant c]

 According to this rule, we have

    r = f(Wi^t X)

 and the increment of the weight becomes

    ∆Wi = c f(Wi^t X) X

 In this learning, the initial weights of the neuron are set to small random values around zero.
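
 In terms of the general learning_step sketched above, the Hebbian learning signal is simply the neuron's own output f(Wi^t X); the teacher signal di is ignored since the rule is unsupervised (the tanh activation below is an assumed choice of continuous bipolar f):

    import math

    def hebbian_r(Wi, X, di=None):
        # r = f(Wi^t X); di is unused because the learning is unsupervised.
        net = sum(w * x for w, x in zip(Wi, X))
        return math.tanh(net)        # an assumed continuous bipolar activation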
 2. Perceptron learning rule:
            This learning is supervised, and the learning signal is

    r = di - Oi

 where Oi = sgn(Wi^t X) and di is the desired response. The weight adjustment in this method is

    ∆Wi = c [di - sgn(Wi^t X)] X

 In this method of learning, the initial weights can have any value, and the neuron must be binary bipolar or binary unipolar.
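
 A sketch of one perceptron weight adjustment, with sgn as the binary bipolar activation:

    def sgn(x):
        return 1 if x >= 0 else -1            # binary bipolar activation

    def perceptron_step(Wi, X, di, c):
        # delta_Wi = c * (di - sgn(Wi^t X)) * X
        Oi = sgn(sum(w * x for w, x in zip(Wi, X)))
        return [w + c * (di - Oi) * x for w, x in zip(Wi, X)]

 Note that when Oi already equals di the increment is zero, so only misclassified inputs move the weights.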

[Figure: Perceptron learning rule - inputs X1 ... Xn with weights Wi1 ... Win feed a threshold logic unit (TLU) producing output oi; the error di - oi, scaled by c, drives the weight adjustment ∆Wi]
3. Delta learning rule:
   This rule is valid for continuous activation functions and in the supervised training mode. The learning signal, called delta, is given as:

    r = [di - f(Wi^t X)] f'(Wi^t X)

   The adjustment for a single weight in this rule is given as:

    ∆Wi = c (di - Oi) f'(neti) X

   In this method of learning, the initial weights can have any value, and the neuron's activation function must be continuous.
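
   A sketch of the delta rule using the sigmoid activation introduced earlier, whose derivative takes the convenient form f'(net) = f(net)(1 - f(net)):

    import math

    def delta_step(Wi, X, di, c):
        net = sum(w * x for w, x in zip(Wi, X))
        Oi = 1.0 / (1.0 + math.exp(-net))     # continuous activation f(net)
        f_prime = Oi * (1.0 - Oi)             # f'(net) for the sigmoid
        # delta_Wi = c * (di - Oi) * f'(net) * X
        return [w + c * (di - Oi) * f_prime * x for w, x in zip(Wi, X)]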




[Figure: Delta learning rule - inputs X1 ... Xn feed a continuous perceptron f(neti) producing output Oi; the error di - oi is multiplied by f'(neti) and by c to form the weight adjustment ∆Wi]
  4. Widrow-Hoff learning rule:
   This rule is applicable to the supervised training of neural networks and is independent of the activation function. The learning signal is given as:

    r = di - Wi^t X

   The weight vector increment under this learning rule is:

    ∆Wi = c (di - Wi^t X) X
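
   Because this rule is independent of the activation function, a code sketch needs only the raw net value:

    def widrow_hoff_step(Wi, X, di, c):
        # delta_Wi = c * (di - Wi^t X) * X; no activation function appears.
        net = sum(w * x for w, x in zip(Wi, X))
        return [w + c * (di - net) * x for w, x in zip(Wi, X)]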
  5. Correlation learning rule:
   Substituting r = di into the general learning rule, we obtain the correlation learning rule. The adjustment for the weight vector is given by:

    ∆Wi = c di X
  6. Winner-take-all learning rule:
   This rule is applicable to an ensemble of neurons, say arranged in a layer of p units. The learning is based on the premise that one of the neurons in the layer, say the m-th, has the maximum response to the input x, as shown in the figure. This neuron is declared the winner. As a result of this winning event, the weight vector Wm containing the weights,
[Figure: Winner-take-all learning rule - inputs x1 ... xn are fully connected, through weights, to a layer of p neurons with outputs O1 ... Op; the m-th neuron is the 'winning neuron']
 highlighted in the figure by a double-headed arrow, is the only one adjusted in this otherwise unsupervised learning. Its increment is computed as follows:

    ∆Wm = α (X - Wm)
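
 A sketch of one winner-take-all step for a layer of p neurons (the winner is taken here to be the neuron with the largest net response, an assumption consistent with the description above):

    def winner_take_all_step(W, X, alpha):
        # W is a list of p weight vectors, one per neuron in the layer.
        responses = [sum(w * x for w, x in zip(Wm, X)) for Wm in W]
        m = responses.index(max(responses))        # the winning neuron
        # Only the winner's weight vector moves: delta_Wm = alpha * (X - Wm).
        W[m] = [w + alpha * (x - w) for w, x in zip(W[m], X)]
        return W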



  7. Outstar learning rule:
   This is another learning rule that is best explained when the neurons are arranged in layers. The rule is designed to produce the desired response d of the layer of p neurons, as shown in the figure. It is concerned with supervised learning, and the weight adjustment is computed as:

    ∆Wj = β (d - Wj)
                          de

[Figure: Outstar learning rule - a layer of p neurons produces outputs o1 ... op; each weight adjustment ∆Wj is driven by the difference between the desired responses d1 ... dp and the weights, scaled by β]
   Training a Neural Network:
             Since the output of the neural network may not be what is expected, the network needs to be trained. Training involves altering the interconnection weights between the neurons. A criterion is needed to specify when to change the weights and how to change them. Training is an external process, while learning is the process that takes place internally in the network. The following guidelines serve as a step-by-step methodology for training a network.
      • Choosing the number of neurons




            The number of hidden neurons affects how well the network is able to separate the data. A large number of hidden neurons will ensure correct learning, and the network will be able to correctly predict the data it has been trained on, but its performance on new data, its ability to generalize, is compromised. With too few hidden neurons, the network may be unable to learn the relationships amongst the data, and the error will fail to fall below an acceptable level. Selection of the number of hidden neurons is thus a crucial decision. Often a trial-and-error approach is taken, starting with a modest number of hidden neurons and gradually increasing the number if the network fails to reduce its error. A much-used approximation for the number of hidden neurons in a three-layered network is N = (J + K)/2 + √P, where J and K are the numbers of input and output neurons and P is the number of patterns in the training set.
       • Choosing the initial weights
    The learning algorithm uses a steepest-descent technique, which rolls straight downhill in weight space until the first valley is reached. This valley may not correspond to zero error for the resulting network, which makes the choice of the initial starting point in the multidimensional weight space critical. However, there are no recommended rules for this selection, except trying several different initial weight values to see whether the network results improve.
       • Choosing the learning rate
            The learning rate effectively controls the size of the step that is taken in multidimensional weight space when each weight is modified. If the selected learning rate is too large, the local minimum may be constantly overstepped, resulting in oscillations and slow convergence to a lower error state. If the learning rate is too low, the number of iterations required may be too large, resulting in slow performance. Usually the default values of most commercial neural network packages are in the range 0.1-0.3, providing a satisfactory balance between the two; reducing the learning rate may help improve convergence to a local minimum of the error surface.
       • Choosing the activation function
           The learning signal is a function of the error multiplied by the gradient of the activation function, df/d(net). The larger the value of the gradient, the more weight the learning will receive. For training, the activation function must be monotonically increasing from left to right, differentiable and smooth.




   Models of Artificial Neural Networks
      There are different kinds of neural network models that can be used. Some of the common ones are:
   1. Perceptron model:
           This is a very simple model and consists of a single 'trainable' neuron. Trainable means that its threshold and input weights are modifiable. Each input pattern has a desired output (determined by us); if the neuron does not give the desired output, then it has made a mistake, and to rectify this, its threshold and/or input weights must be changed. How this change is to be calculated is determined by the learning algorithm.
        The output of the perceptron is constrained to Boolean values - (true, false), (1, 0), (1, -1) or whatever. This is not a limitation, because if the output of the perceptron were to be the input for something else, the output edge could be given a weight, and the output would then depend on this weight.
    The perceptron looks like this:

[Figure: Perceptron - inputs X1, X2, ..., Xn with edge weights w1, w2, ..., wn feed a single threshold unit with output y]

   X1, X2, ..., Xn are the inputs. These could be real numbers or Boolean values, depending on the problem.
   y is the output and is Boolean.
   w1, w2, ..., wn are the weights of the edges and are real-valued.
   T is the threshold and is real-valued.
   The output y is 1 if the net input

    w1 x1 + w2 x2 + ... + wn xn

   is greater than the threshold T; otherwise the output is zero.
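
   As a worked example (the weights and threshold below are assumed values, not taken from the text), a perceptron with w1 = w2 = 1 and T = 1.5 computes the logical AND of two Boolean inputs:

    # Perceptron computing logical AND: y = 1 iff w1*x1 + w2*x2 > T.
    w, T = [1.0, 1.0], 1.5
    for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
        net = sum(wi * xi for wi, xi in zip(w, x))
        y = 1 if net > T else 0
        print(x, "->", y)          # only [1, 1] exceeds T and outputs 1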
  2. Feed-Forward Model:
             An elementary feed-forward architecture of m neurons receiving n inputs is shown in the figure. Its output and input vectors are, respectively,

    O = [O1 O2 ... Om]
    X = [x1 x2 ... xn]

  Weight Wij connects the i-th neuron with the j-th input, and so the activation value neti for the i-th neuron is

    neti = Σj=1..n Wij xj        for i = 1, 2, ..., m

  Hence, the output is given as

    Oi = f(Wi^t X)               for i = 1, 2, ..., m

  where Wi is the weight vector containing the weights leading towards the i-th output node, defined as

    Wi = [Wi1 Wi2 ... Win]

  If Γ is the nonlinear matrix operator, the mapping of input space X to output space O implemented by the network can be expressed as

    O = Γ[W X]

  where W is the weight matrix, also called the connection matrix.
             The generic feed-forward network is characterized by the lack of feedback. This type of network can be connected in cascade to create a multilayer network.
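
  The single-layer mapping O = Γ[WX] can be sketched as follows (Python; the weight matrix, input vector and the bipolar sign activation are illustrative assumptions):

    # O = Gamma[W x]: each output is f applied to one row of W times x.
    def f(net):
        return 1 if net >= 0 else -1       # assumed bipolar activation

    W = [[0.5, -0.2, 0.1],                 # m = 2 neurons, n = 3 inputs
         [-0.3, 0.8, 0.4]]
    x = [1.0, 0.5, -1.0]

    O = [f(sum(wij * xj for wij, xj in zip(Wi, x))) for Wi in W]
    print(O)                               # prints [1, -1]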

[Figure: Single-layer feed-forward network - interconnection scheme and block diagram; inputs x1 ... xn connect through weights Wij to m neurons producing outputs O1 ... Om, i.e. X(t) → Γ[Wx] → O(t)]


The weight matrix W and the nonlinear diagonal operator Γ[.] are

        | W11  W12  ...  W1n |
        | W21  W22  ...  W2n |
    W = |  .    .   ...   .  |
        | Wm1  Wm2  ...  Wmn |

           | f(.)  0    ...  0    |
           | 0     f(.) ...  0    |
    Γ[.] = |  .     .   ...   .   |
           | 0     0    ...  f(.) |
  3. Feed-Back Model:
            A feedback network can be obtained from the feed-forward network by connecting the neurons' outputs to their inputs, as shown in the figure. The essence of closing the feedback loop is to enable control of output Oi through the outputs Oj, for j = 1, 2, ..., m - that is, controlling the output at instant t + ∆ by the output at instant t. The delay ∆ is introduced by the delay element in the feedback loop. The mapping of O(t) into O(t + ∆) can now be written as

    O(t + ∆) = Γ[W O(t)]
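
  In code, this recurrence amounts to applying the feed-forward mapping repeatedly, each pass standing for one delay interval ∆ (a sketch reusing the activation f of the previous example and assuming a square weight matrix, so that the m outputs can be fed back as the m inputs):

    # O(t + delta) = Gamma[W O(t)], starting from O(0) = Gamma[W x(0)].
    def feedback_run(W, x0, steps):
        O = [f(sum(wij * xj for wij, xj in zip(Wi, x0))) for Wi in W]
        for _ in range(steps):             # each pass is one delay interval
            O = [f(sum(wij * oj for wij, oj in zip(Wi, O))) for Wi in W]
        return O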

[Figure: Single-layer feedback network - interconnection scheme and block diagram; the outputs O1(t) ... Om(t) of the instantaneous network Γ[W O(t)] are fed back through delay elements ∆ to its inputs, yielding O(t + ∆)]
Notations Used:
        M1 is a 2-D matrix where M1[i][j] represents the weight on the connection from the i-th input neuron to the j-th neuron in the hidden layer.
        M2 is a 2-D matrix where M2[i][j] denotes the weight on the connection from the i-th hidden layer neuron to the j-th output layer neuron.
        x, y and z denote the outputs of neurons from the input layer, hidden layer and output layer respectively.
        If there are m input data, the input is (x1, x2, ..., xm). P denotes the desired output pattern, with components (p1, p2, ..., pr) for r outputs.
        Let the number of hidden layer neurons be n.
βo = learning parameter for the output layer
βh = learning parameter for the hidden layer
α = momentum term
θj = threshold value (bias) for the j-th hidden layer neuron
τj = threshold value for the j-th output layer neuron
ej = error at the j-th output layer neuron
tj = error at the j-th hidden layer neuron
Threshold function = sigmoid function: f(x) = 1/(1 + exp(-x))
Mathematical expressions:
        Output of the j-th hidden layer neuron:  yj = f((Σi xi M1[i][j]) + θj)
        Output of the j-th output layer neuron:  zj = f((Σi yi M2[i][j]) + τj)
        j-th component of the vector of output differences:  desired value - computed value = pj - zj
        j-th component of the output error at the output layer:  ej = pj - zj
        i-th component of the output error at the hidden layer:  ti = yi (1 - yi) (Σj M2[i][j] ej)
        Adjustment of the weight between the i-th hidden layer neuron and the j-th output neuron:  ∆M2[i][j](t) = βo yi ej + α ∆M2[i][j](t - 1)
        Adjustment of the weight between the i-th input neuron and the j-th hidden layer neuron:  ∆M1[i][j](t) = βh xi tj + α ∆M1[i][j](t - 1)
        Adjustment of the threshold value for the j-th output neuron:  ∆τj = βo ej
        Adjustment of the threshold value for the j-th hidden layer neuron:  ∆θj = βh tj
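
Putting these expressions together, one complete training step might be sketched as follows (Python; the momentum term α is omitted for brevity, and all sizes are taken from the lengths of the arguments):

    import math

    def f(x):
        return 1.0 / (1.0 + math.exp(-x))          # sigmoid threshold function

    def train_step(x, p, M1, M2, theta, tau, beta_h, beta_o):
        # Forward pass: hidden outputs y, then network outputs z.
        y = [f(sum(x[i] * M1[i][j] for i in range(len(x))) + theta[j])
             for j in range(len(theta))]
        z = [f(sum(y[i] * M2[i][j] for i in range(len(y))) + tau[j])
             for j in range(len(tau))]
        # Errors: e at the output layer, t back-propagated to the hidden layer.
        e = [p[j] - z[j] for j in range(len(z))]
        t = [y[i] * (1 - y[i]) * sum(M2[i][j] * e[j] for j in range(len(e)))
             for i in range(len(y))]
        # Weight and threshold adjustments (momentum omitted).
        for i in range(len(y)):
            for j in range(len(z)):
                M2[i][j] += beta_o * y[i] * e[j]
        for i in range(len(x)):
            for j in range(len(y)):
                M1[i][j] += beta_h * x[i] * t[j]
        for j in range(len(z)):
            tau[j] += beta_o * e[j]
        for j in range(len(y)):
            theta[j] += beta_h * t[j]
        return M1, M2, theta, tau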

Neural Network Applications:

Aerospace
     • High-performance aircraft autopilots, flight path simulation, aircraft
        control systems, autopilot enhancements, aircraft component
        simulation, aircraft component fault detection.
Automobile
     • Automobile automatic guidance systems, warranty activity analysis.
Banking
     • Check and other document reading, credit application evaluation.
Credit card activity checking
     • Neural networks are used to spot unusual credit card
        activity that might possibly be associated with the loss of a
        credit card.
Defense
     • Weapon steering, target tracking, object discrimination,
        facial recognition, new kinds of sensors, sonar, radar and
        image signal processing including data compression,
        feature extraction and noise suppression, signal/image
        identification.
Electronics
     • Code sequence prediction, integrated circuit chip layout,
        process control, chip failure analysis, machine vision,
        voice synthesis, nonlinear modeling.
Entertainment
     • Animation, special effects, market forecasting.
Financial
     • Real estate appraisal, loan advising, mortgage screening,
        corporate bond rating, credit-line use analysis, portfolio
        trading programs, corporate financial analysis, and
        currency price prediction.
Industrial
     • Neural networks are being trained to predict the output
        gases of furnaces and other industrial processes. They then
        replace the complex and costly equipment used for this
        purpose in the past.
Insurance
     • Neural networks are used in policy application evaluation
        and product optimization.
Manufacturing
     • Neural networks are used in manufacturing process
        control, product design and analysis, process and machine
        diagnosis, real-time particle identification, visual quality
        analysis, paper quality prediction, computer-chip quality
        analysis, analysis of grinding operations, chemical product
        design analysis, machine maintenance analysis, project
        bidding, planning and management, and dynamic modeling
        of chemical process systems.
Medical
     • Neural networks are used in breast cancer cell analysis,
        EEG and ECG analysis, prosthesis design, optimization of
        transplant times, hospital expense reduction, hospital
        quality improvement, and emergency room test
        advisement.
Oil and Gas
     • Neural networks are used in the exploration of oil and gas.
Robotics
     • Neural networks are used in trajectory control, forklift
        robots, manipulator controllers, and vision systems.
Other applications
     • Artificial intelligence
     • Character recognition
     • Image understanding
     • Logistics
     • Optimization
     • Quality control
     • Visualization

Advantages and Disadvantages of ANN:
Advantages:
   1. They involve human-like thinking.
   2. They handle noisy or missing data.
   3. They create their own relationships amongst the information - no
      equations!
   4. They can work with large numbers of variables or parameters.
   5. They provide general solutions with good predictive accuracy.
   6. The system has the property of continuous learning.
   7. They deal with the non-linearity of the world in which we live.
Disadvantages:
   1. The learning process may require a very long time, possibly several
      months or even years.
   2. They are hard to implement.




Conclusion:
             In the present world of automation, neural networks and fuzzy logic systems are required to automate systems. Fuzzy logic deals with vagueness or uncertainty, while neural networks relate to human-like thinking. If we use only fuzzy systems or only neural networks, complete automation is never possible. The combination is suitable because fuzzy logic has tolerance for imprecision of data, while neural networks have tolerance for noisy data. Since each has its own advantages and disadvantages, we use a combination of neural networks and fuzzy logic in order to implement a real-world system without manual interference.
       Artificial neural networks are, in the present scenario, novel in their technological field, and we have yet to witness much of their development in the upcoming eras; speculation is not required, as they will speak for themselves.




Artificial Neural Network for hand Gesture recognitionArtificial Neural Network for hand Gesture recognition
Artificial Neural Network for hand Gesture recognition
 
Chemistry and behavior of fire
Chemistry and behavior of fireChemistry and behavior of fire
Chemistry and behavior of fire
 
Soft Computing
Soft ComputingSoft Computing
Soft Computing
 
Seminar smart machine
Seminar smart machineSeminar smart machine
Seminar smart machine
 
Applications of Artificial Neural Networks in Civil Engineering
Applications of Artificial Neural Networks in Civil EngineeringApplications of Artificial Neural Networks in Civil Engineering
Applications of Artificial Neural Networks in Civil Engineering
 
memristor
memristormemristor
memristor
 
Artificial Neural Networks: Applications In Management
Artificial Neural Networks: Applications In ManagementArtificial Neural Networks: Applications In Management
Artificial Neural Networks: Applications In Management
 

Similar to Artificial Neural Network Paper Presentation

Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Sivagowry Shathesh
 
Nature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic WebNature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic Webguestecf0af
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligencealldesign
 
Quantum neural network
Quantum neural networkQuantum neural network
Quantum neural networksurat murthy
 
Artifical neural networks
Artifical neural networksArtifical neural networks
Artifical neural networksalldesign
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
 
Neural network
Neural network Neural network
Neural network Faireen
 
Neural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfNeural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfneelamsanjeevkumar
 
Basics of Artificial Neural Network
Basics of Artificial Neural Network Basics of Artificial Neural Network
Basics of Artificial Neural Network Subham Preetam
 
Artificial Neural Network report
Artificial Neural Network reportArtificial Neural Network report
Artificial Neural Network reportAnjali Agrawal
 

Similar to Artificial Neural Network Paper Presentation (20)

ANN - UNIT 1.pptx
ANN - UNIT 1.pptxANN - UNIT 1.pptx
ANN - UNIT 1.pptx
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
Project Report -Vaibhav
Project Report -VaibhavProject Report -Vaibhav
Project Report -Vaibhav
 
Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing Unit I & II in Principles of Soft computing
Unit I & II in Principles of Soft computing
 
Nature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic WebNature Inspired Reasoning Applied in Semantic Web
Nature Inspired Reasoning Applied in Semantic Web
 
Neural network
Neural networkNeural network
Neural network
 
Neural networks of artificial intelligence
Neural networks of artificial  intelligenceNeural networks of artificial  intelligence
Neural networks of artificial intelligence
 
Quantum neural network
Quantum neural networkQuantum neural network
Quantum neural network
 
Artifical neural networks
Artifical neural networksArtifical neural networks
Artifical neural networks
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
neural networks
neural networksneural networks
neural networks
 
Neural Network
Neural NetworkNeural Network
Neural Network
 
Neural network
Neural network Neural network
Neural network
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Neural networks
Neural networksNeural networks
Neural networks
 
Neural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdfNeural networks are parallel computing devices.docx.pdf
Neural networks are parallel computing devices.docx.pdf
 
Neural networks
Neural networksNeural networks
Neural networks
 
08 neural networks(1).unlocked
08 neural networks(1).unlocked08 neural networks(1).unlocked
08 neural networks(1).unlocked
 
Basics of Artificial Neural Network
Basics of Artificial Neural Network Basics of Artificial Neural Network
Basics of Artificial Neural Network
 
Artificial Neural Network report
Artificial Neural Network reportArtificial Neural Network report
Artificial Neural Network report
 

More from guestac67362

5 I N T R O D U C T I O N T O A E R O S P A C E T R A N S P O R T A T I O...
5  I N T R O D U C T I O N  T O  A E R O S P A C E  T R A N S P O R T A T I O...5  I N T R O D U C T I O N  T O  A E R O S P A C E  T R A N S P O R T A T I O...
5 I N T R O D U C T I O N T O A E R O S P A C E T R A N S P O R T A T I O...guestac67362
 
4 G Paper Presentation
4 G  Paper  Presentation4 G  Paper  Presentation
4 G Paper Presentationguestac67362
 
Bluetooth Technology Paper Presentation
Bluetooth Technology Paper PresentationBluetooth Technology Paper Presentation
Bluetooth Technology Paper Presentationguestac67362
 
Ce052391 Environmental Studies Set1
Ce052391 Environmental Studies Set1Ce052391 Environmental Studies Set1
Ce052391 Environmental Studies Set1guestac67362
 
Bluetooth Technology In Wireless Communications
Bluetooth Technology In Wireless CommunicationsBluetooth Technology In Wireless Communications
Bluetooth Technology In Wireless Communicationsguestac67362
 
Bio Chip Paper Presentation
Bio Chip Paper PresentationBio Chip Paper Presentation
Bio Chip Paper Presentationguestac67362
 
Bluetooth Paper Presentation
Bluetooth Paper PresentationBluetooth Paper Presentation
Bluetooth Paper Presentationguestac67362
 
Bio Metrics Paper Presentation
Bio Metrics Paper PresentationBio Metrics Paper Presentation
Bio Metrics Paper Presentationguestac67362
 
Bio Medical Instrumentation
Bio Medical InstrumentationBio Medical Instrumentation
Bio Medical Instrumentationguestac67362
 
Bluetooth Abstract Paper Presentation
Bluetooth Abstract Paper PresentationBluetooth Abstract Paper Presentation
Bluetooth Abstract Paper Presentationguestac67362
 
Basic Electronics Jntu Btech 2008
Basic Electronics Jntu Btech 2008Basic Electronics Jntu Btech 2008
Basic Electronics Jntu Btech 2008guestac67362
 
Basic electronic devices and circuits
Basic electronic devices and circuitsBasic electronic devices and circuits
Basic electronic devices and circuitsguestac67362
 
Automatic Speed Control System Paper Presentation
Automatic Speed Control System Paper PresentationAutomatic Speed Control System Paper Presentation
Automatic Speed Control System Paper Presentationguestac67362
 
Artificial Intelligence Techniques In Power Systems Paper Presentation
Artificial Intelligence Techniques In Power Systems Paper PresentationArtificial Intelligence Techniques In Power Systems Paper Presentation
Artificial Intelligence Techniques In Power Systems Paper Presentationguestac67362
 
Artificial Neural Networks
Artificial Neural NetworksArtificial Neural Networks
Artificial Neural Networksguestac67362
 
Automata And Compiler Design
Automata And Compiler DesignAutomata And Compiler Design
Automata And Compiler Designguestac67362
 
Auto Configuring Artificial Neural Paper Presentation
Auto Configuring Artificial Neural Paper PresentationAuto Configuring Artificial Neural Paper Presentation
Auto Configuring Artificial Neural Paper Presentationguestac67362
 
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...guestac67362
 
Applied Physics Jntu Btech 2008
Applied Physics Jntu Btech 2008Applied Physics Jntu Btech 2008
Applied Physics Jntu Btech 2008guestac67362
 

More from guestac67362 (20)

5 I N T R O D U C T I O N T O A E R O S P A C E T R A N S P O R T A T I O...
5  I N T R O D U C T I O N  T O  A E R O S P A C E  T R A N S P O R T A T I O...5  I N T R O D U C T I O N  T O  A E R O S P A C E  T R A N S P O R T A T I O...
5 I N T R O D U C T I O N T O A E R O S P A C E T R A N S P O R T A T I O...
 
4 G Paper Presentation
4 G  Paper  Presentation4 G  Paper  Presentation
4 G Paper Presentation
 
Bluetooth Technology Paper Presentation
Bluetooth Technology Paper PresentationBluetooth Technology Paper Presentation
Bluetooth Technology Paper Presentation
 
Ce052391 Environmental Studies Set1
Ce052391 Environmental Studies Set1Ce052391 Environmental Studies Set1
Ce052391 Environmental Studies Set1
 
Bluetooth Technology In Wireless Communications
Bluetooth Technology In Wireless CommunicationsBluetooth Technology In Wireless Communications
Bluetooth Technology In Wireless Communications
 
Bio Chip Paper Presentation
Bio Chip Paper PresentationBio Chip Paper Presentation
Bio Chip Paper Presentation
 
Bluetooth Paper Presentation
Bluetooth Paper PresentationBluetooth Paper Presentation
Bluetooth Paper Presentation
 
Bio Metrics Paper Presentation
Bio Metrics Paper PresentationBio Metrics Paper Presentation
Bio Metrics Paper Presentation
 
Bio Medical Instrumentation
Bio Medical InstrumentationBio Medical Instrumentation
Bio Medical Instrumentation
 
Bluetooth Abstract Paper Presentation
Bluetooth Abstract Paper PresentationBluetooth Abstract Paper Presentation
Bluetooth Abstract Paper Presentation
 
Basic Electronics Jntu Btech 2008
Basic Electronics Jntu Btech 2008Basic Electronics Jntu Btech 2008
Basic Electronics Jntu Btech 2008
 
Basic electronic devices and circuits
Basic electronic devices and circuitsBasic electronic devices and circuits
Basic electronic devices and circuits
 
Awp
AwpAwp
Awp
 
Automatic Speed Control System Paper Presentation
Automatic Speed Control System Paper PresentationAutomatic Speed Control System Paper Presentation
Automatic Speed Control System Paper Presentation
 
Artificial Intelligence Techniques In Power Systems Paper Presentation
Artificial Intelligence Techniques In Power Systems Paper PresentationArtificial Intelligence Techniques In Power Systems Paper Presentation
Artificial Intelligence Techniques In Power Systems Paper Presentation
 
Artificial Neural Networks
Artificial Neural NetworksArtificial Neural Networks
Artificial Neural Networks
 
Automata And Compiler Design
Automata And Compiler DesignAutomata And Compiler Design
Automata And Compiler Design
 
Auto Configuring Artificial Neural Paper Presentation
Auto Configuring Artificial Neural Paper PresentationAuto Configuring Artificial Neural Paper Presentation
Auto Configuring Artificial Neural Paper Presentation
 
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...
A Paper Presentation On Artificial Intelligence And Global Risk Paper Present...
 
Applied Physics Jntu Btech 2008
Applied Physics Jntu Btech 2008Applied Physics Jntu Btech 2008
Applied Physics Jntu Btech 2008
 

Recently uploaded

#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024BookNet Canada
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxMalak Abu Hammad
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking MenDelhi Call girls
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAndikSusilo4
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraDeakin University
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machinePadma Pradeep
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonetsnaman860154
 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Hyundai Motor Group
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetEnjoy Anytime
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 

Recently uploaded (20)

#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
#StandardsGoals for 2024: What’s new for BISAC - Tech Forum 2024
 
The Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptxThe Codex of Business Writing Software for Real-World Solutions 2.pptx
The Codex of Business Writing Software for Real-World Solutions 2.pptx
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Pigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping ElbowsPigging Solutions Piggable Sweeping Elbows
Pigging Solutions Piggable Sweeping Elbows
 
Azure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & ApplicationAzure Monitor & Application Insight to monitor Infrastructure & Application
Azure Monitor & Application Insight to monitor Infrastructure & Application
 
Artificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning eraArtificial intelligence in the post-deep learning era
Artificial intelligence in the post-deep learning era
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptxVulnerability_Management_GRC_by Sohang Sengupta.pptx
Vulnerability_Management_GRC_by Sohang Sengupta.pptx
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Install Stable Diffusion in windows machine
Install Stable Diffusion in windows machineInstall Stable Diffusion in windows machine
Install Stable Diffusion in windows machine
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
How to convert PDF to text with Nanonets
How to convert PDF to text with NanonetsHow to convert PDF to text with Nanonets
How to convert PDF to text with Nanonets
 
Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2Next-generation AAM aircraft unveiled by Supernal, S-A2
Next-generation AAM aircraft unveiled by Supernal, S-A2
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your BudgetHyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
Hyderabad Call Girls Khairatabad ✨ 7001305949 ✨ Cheap Price Your Budget
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 

Artificial Neural Network Paper Presentation

Artificial neural networks (ANNs) are programs designed to solve problems by mimicking the structure and function of our nervous system. Neural networks are based on simulated neurons, which are joined together in a variety of ways to form networks. A neural network resembles the human brain in the following two ways:
A neural network acquires knowledge through learning.
A neural network's knowledge is stored within the interconnection strengths, known as synaptic weights.

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes', each of which contains an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer' where the answer is output.

[Figure: Basic structure of a neural network: an input layer connected through hidden layers to an output layer.]

Each layer of neurons makes an independent computation on the data that it receives and passes the result to the next layer(s). The next layer may in turn make an independent computation and pass the data further, or it may end the computation and give the output of the overall computation. The first layer is the input layer and the last one the output layer; the layers placed between these two are the middle or hidden layers.

A neural network is a system that emulates the cognitive abilities of the brain by establishing recognition of particular inputs and producing the appropriate output. Neural networks are not "hard-wired" in a particular way; they are trained using presented inputs to establish their own internal weights and relationships, guided by feedback. Neural networks are free to form their own internal workings and to adapt on their own. Commonly, neural networks are adjusted, or trained, so that a particular input leads to a specific target output.
[Figure: Adjustment of a neural network: the network's output is compared with the target, and the connection weights are adjusted until the two match.]

There, the network is adjusted based on a comparison of the output and the target, until the network output matches the target. Typically many such input/target pairs are used to train a network.

Once a neural network is trained to a satisfactory level, it may be used as an analytical tool on other data. To do this, the user no longer specifies any training runs and instead allows the network to work in forward-propagation mode only. New inputs are presented to the input layer, where they filter into and are processed by the middle layers as though training were taking place; however, at this point the output is retained and no back-propagation occurs.

The structure of the nervous system

The nervous system of the human brain consists of neurons, which are interconnected to each other in a rather complex way. Each neuron can be thought of as a node, and the interconnections between them as edges. Each edge has a weight associated with it, which represents how much the two neurons it connects can interact.

[Figure: A node (neuron) and an edge (interconnection).]
Functioning of a nervous system

The nature of the interconnection between two neurons can be such that one neuron can either stimulate or inhibit the other. An interaction can take place only if there is an edge between the two neurons. If neuron A is connected to neuron B with a weight w, then if A is stimulated sufficiently, it sends a signal to B. The signal depends on the weight w, and the nature of the signal, whether stimulating or inhibiting, depends on whether w is positive or negative. A neuron sends a signal only if its stimulation exceeds its threshold, and when it fires it sends the signal to all nodes to which it is connected. The threshold may differ from neuron to neuron, and if many neurons send signals to A, the combined stimulus may exceed the threshold.

Next, if B is stimulated sufficiently, it may trigger a signal to all neurons to which it is connected. Depending on the complexity of the structure, the overall functioning may be very complex, but the functioning of individual neurons is as simple as this. Because of this, we may dare to try to simulate it using software or even special-purpose hardware.

Major components of an artificial neuron

This section describes the seven major components which make up an artificial neuron. These components are valid whether the neuron is used for input, output, or is in one of the hidden layers.

Component 1. Weighting factors:
A neuron usually receives many simultaneous inputs. Each input has its own relative weight, which gives the input the impact that it needs on the processing element's summation function. These weights perform the same type of function as the varying synaptic strengths of biological neurons: in both cases, some inputs are made more important than others, so that they have a greater effect on the processing element as they combine to produce a neural response.

Weights are adaptive coefficients within the network that determine the intensity of the input signal as registered by the artificial neuron. They are a measure of an input's connection strength. These strengths can be modified in response to various training sets and according to a network's specific topology or through its learning rules.
Component 2. Summation function:
The first step in a processing element's operation is to compute the weighted sum of all of the inputs. Mathematically, the inputs and the corresponding weights are vectors, which can be represented as (i1, i2, …, in) and (w1, w2, …, wn). The total input signal is the dot, or inner, product of these two vectors: each component of the i vector is multiplied by the corresponding component of the w vector, and the products are added up, input1 = i1·w1, input2 = i2·w2, and so on, summed as input1 + input2 + … + inputn. The result is a single number, not a multi-element vector.

Geometrically, the inner product of two vectors can be considered a measure of their similarity: if the vectors point in the same direction, the inner product is maximal; if they point in opposite directions (180 degrees out of phase), the inner product is minimal.

The summation function can be more complex than a simple sum of input and weight products. The inputs and weighting coefficients can be combined in many different ways before being passed on to the transfer function. In addition to simple product summing, the summation function can select the minimum, maximum, majority, product, or one of several normalizing algorithms. The specific algorithm for combining neural inputs is determined by the chosen network architecture and paradigm.
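As a minimal sketch of the summation function (in Python; the function name is illustrative, not from any library), the weighted sum is just the inner product of the input and weight vectors:

```python
# Summation function (Component 2): the weighted sum is the inner
# product of the input vector (i1, ..., in) and weight vector (w1, ..., wn).
def weighted_sum(inputs, weights):
    return sum(i * w for i, w in zip(inputs, weights))

# Aligned vectors give a large inner product; opposed vectors a small one.
print(weighted_sum([1.0, 0.5], [1.0, 0.5]))    # 1.25
print(weighted_sum([1.0, 0.5], [-1.0, -0.5]))  # -1.25
```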
Component 3. Transfer function:
The result of the summation function, almost always the weighted sum, is transformed into a working output through an algorithmic process known as the transfer function. In the transfer function, the summation total can be compared with some threshold to determine the neural output. If the sum is greater than the threshold value, the processing element generates a signal; if the sum of the input and weight products is less than the threshold, no signal (or an inhibitory signal) is generated. Both types of response are significant.

Consider a simple binary neuron with 3 inputs (X[1], X[2], X[3]) and 2 outputs (O[1], O[2]). Every neuron has a particular threshold T, and a neuron fires only if the weighted sum of its inputs exceeds the threshold:

sum = Σi W[i] · X[i]
O[1] = O[2] = 1 if Σi X[i] · W[i] >= T
            = 0 otherwise

[Figure: McCulloch-Pitts neuron model: inputs X[1], X[2], X[3] with synaptic weights W[1], W[2], W[3] feeding a neuron processing node with output O[1].]

We can use different kinds of threshold functions, namely:

Sigmoid function: f(x) = 1 / (1 + exp(-x))
Step function: f(x) = 0 if x < T; f(x) = k if x >= T
Ramp function: f(x) = ax + b

The transfer function could be something as simple as depending upon whether the result of the summation function is positive or negative. The network could output zero and one, one and minus one, or other numeric combinations. The transfer function would then be a "hard limiter" or step function.

[Figure: Sample transfer functions: hard limiter (y = -1 for x < 0, y = 1 for x >= 0), ramping function (y = 0 for x < 0, y = x for 0 <= x <= 1, y = 1 for x > 1), and sigmoid functions y = 1/(1 + e^-x).]
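The threshold functions above, and the binary neuron built on them, can be sketched directly in Python (a minimal illustration; T, k, a and b are the free parameters named in the text):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))      # f(x) = 1/(1+exp(-x))

def step(x, T=0.0, k=1.0):
    return k if x >= T else 0.0            # fires with value k at threshold T

def ramp(x, a=1.0, b=0.0):
    return a * x + b                       # linear transfer f(x) = ax + b

# Binary (McCulloch-Pitts style) neuron: output 1 iff the weighted sum
# of the inputs reaches the threshold T, else 0.
def binary_neuron(X, W, T):
    return 1 if sum(x * w for x, w in zip(X, W)) >= T else 0
```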
Component 4. Scaling and limiting:
After the processing element's transfer function, the result can pass through additional processes which scale and limit. Scaling simply multiplies the transfer value by a scale factor and then adds an offset. Limiting is the mechanism which ensures that the scaled result does not exceed an upper or lower bound. This limiting is in addition to the hard limits that the original transfer function may have performed.

Component 5. Output function (competition):
Each processing element is allowed one output signal, which it may send to hundreds of other neurons. This is just like the biological neuron, where there are many inputs and only one output action. Normally, the output is directly equivalent to the transfer function's result. Some network topologies, however, modify the transfer result to incorporate competition among neighboring processing elements. Neurons are allowed to compete with each other, inhibiting processing elements unless they have great strength. Competition can occur at one or both of two levels: first, competition determines which artificial neurons will be active, or provide an output; second, competitive inputs help determine which processing elements will participate in the learning or adaptation process.

Component 6. Error function and back-propagated value:
In most learning networks the difference between the current output and the desired output is calculated. This raw error is then transformed by the error function to match the particular network architecture. The most basic architectures use this error directly; some square the error while retaining its sign, some cube the error, and other paradigms modify the raw error to fit their specific purposes. The artificial neuron's error is then typically propagated into the learning function of another processing element. This error term is sometimes called the current error.

The current error is typically propagated backwards to a previous layer. This back-propagated value can be either the current error, the current error scaled in some manner (often by the derivative of the transfer function), or some desired output, depending on the network type. Normally, this back-propagated value, after being scaled by the learning function, is multiplied against each of the incoming connection weights to modify them before the next learning cycle.

Component 7. Learning function:
The purpose of the learning function is to modify the variable connection weights on the inputs of each processing element according to some neural-based algorithm. This process of changing the weights of the input connections to achieve some desired result could also be called the adaptation function, as well as the learning mode.
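Before turning to learning, the computational components (2-4) can be put together in one short pipeline. A sketch, with illustrative names; the scale, offset and bounds are the free parameters described above:

```python
import math

# One processing element: weighted sum (Component 2), sigmoid transfer
# (Component 3), then scaling and limiting (Component 4).
def processing_element(inputs, weights, scale=1.0, offset=0.0, lo=0.0, hi=1.0):
    net = sum(i * w for i, w in zip(inputs, weights))  # summation
    out = 1.0 / (1.0 + math.exp(-net))                 # transfer
    out = scale * out + offset                         # scaling
    return max(lo, min(hi, out))                       # limiting to [lo, hi]
```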
Paradigms of learning

There are three broad paradigms of learning:

• Supervised
• Unsupervised (or self-organised)
• Reinforcement learning (a special case of supervised learning)

Supervised learning:
The vast majority of artificial neural network solutions have been trained with supervision. In this mode, the actual output of the neural network is compared to the desired output. The network then adjusts its weights, which are usually set randomly to begin with, so that the next iteration, or cycle, will produce a closer match between the desired and the actual output. This learning method tries to minimize the current errors of all processing elements. This global error reduction is created over time by continuously modifying the input weights until acceptable network accuracy is reached.

[Figure: Block diagram of supervised learning: an adaptive network maps input X through weights W to output O; a distance measure ρ[d, o] between the desired response d and the output generates the learning signal.]

Unsupervised learning:
In supervised learning the system directly compares the network output with a known correct or desired answer, whereas in unsupervised learning the correct output is not known. Unsupervised training allows the neurons to compete with each other until winners emerge. The resulting values of the winner neurons determine the class to which a particular data set belongs. Unsupervised learning is the great promise of the future: it suggests that computers could some day learn on their own in a true robotic sense. Currently, this learning method is limited to networks known as self-organizing maps, which are not yet in widespread use.
[Figure: Block diagram of unsupervised learning: an adaptive network maps input X through weights W to output O, with no teacher signal.]

Reinforcement learning:
Reinforcement learning is a form of supervised learning where the adapted neuron receives feedback from the environment that directly influences learning.

Learning law:
The following general learning rule is adopted in neural network studies: the weight vector Wi = [Wi1, Wi2, …, Win]ᵗ increases in proportion to the product of the input X and a learning signal r. The learning signal r is a function of Wi and X, and sometimes of the teacher's signal di. Hence we have

r = r(Wi, X, di)

and the increment in the weight vector produced by the learning step at time t is

∆Wi(t) = c · r[Wi(t), X(t), di(t)] · X(t)

where c is the learning constant. Thus

Wi(t + 1) = Wi(t) + c · r[Wi(t), X(t), di(t)] · X(t)

Various learning rules assist the learning process; they are described below.
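The general learning law translates almost directly into code. A sketch, assuming the learning signal r is supplied as a function so that the specific rules below can reuse the same update:

```python
# General learning law: Wi(t+1) = Wi(t) + c * r(Wi, X, di) * X,
# where r is the learning signal and c the learning constant.
def learning_step(W, X, d, r, c=0.1):
    signal = r(W, X, d)
    return [w + c * signal * x for w, x in zip(W, X)]
```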
1. Hebbian learning rule:
This rule represents purely feed-forward, unsupervised learning.

[Figure: Hebbian learning rule: the ith neuron receives inputs X1 … Xn through weights Wi1 … Win and produces output Oi; the learning signal generator computes r, scaled by the learning constant c, to produce ∆Wi.]

According to this rule, we have r = f(Wiᵗ X), and the increment of the weight becomes

∆Wi = c · f(Wiᵗ X) · X

In this learning, the initial weights of the neuron are set to small random values around zero.

2. Perceptron learning rule:
This learning is supervised, and the learning signal is equal to

r = di − Oi

where Oi = sgn(Wiᵗ X) and di is the desired response. The weight adjustment in this method is

∆Wi = c [di − sgn(Wiᵗ X)] X

In this method of learning, the initial weights can have any value, and the neuron must be binary bipolar or binary unipolar.

[Figure: Perceptron learning rule: a threshold logic unit (TLU) computes oi = sgn(net i); the error di − oi, scaled by c, drives the weight adjustment ∆Wi.]
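A sketch of one perceptron training step under this rule, for a bipolar (+1/-1) neuron:

```python
# Perceptron rule: r = d - sgn(W . X); the weights change only when the
# pattern is misclassified (otherwise d - o = 0).
def sgn(x):
    return 1 if x >= 0 else -1

def perceptron_step(W, X, d, c=0.1):
    o = sgn(sum(w * x for w, x in zip(W, X)))
    return [w + c * (d - o) * x for w, x in zip(W, X)]
```

Cycling such steps over the training set until every pattern is classified correctly is the usual training loop.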
3. Delta learning rule:
This rule is valid for continuous activation functions and in the supervised training mode. The learning signal is called delta and is given as

r = [di − f(Wiᵗ X)] f′(Wiᵗ X)

The adjustment for a single weight in this rule is given as

∆Wi = c (di − Oi) f′(net i) X

In this method of learning, the initial weights can have any value and the neuron must be continuous.

[Figure: Delta learning rule: a continuous perceptron computes oi = f(net i); the error di − oi is multiplied by f′(net i) and the learning constant c to produce ∆Wi.]
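For a sigmoid neuron the derivative is f′(net) = f(net)(1 − f(net)), so a delta-rule step can be sketched as:

```python
import math

# Delta rule: dWi = c * (d - o) * f'(net) * X, with o = f(net) and
# f'(net) = o * (1 - o) for the sigmoid activation.
def delta_step(W, X, d, c=0.1):
    net = sum(w * x for w, x in zip(W, X))
    o = 1.0 / (1.0 + math.exp(-net))
    return [w + c * (d - o) * o * (1.0 - o) * x for w, x in zip(W, X)]
```

Replacing f(net) by net itself (so f′ = 1) gives the Widrow-Hoff update described next.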
4. Widrow-Hoff learning rule:
This rule is applicable for the supervised training of neural networks and is independent of the activation function. The learning signal is given as

r = di − Wiᵗ X

The weight vector increment under this learning rule is

∆Wi = c (di − Wiᵗ X) X

5. Correlation learning rule:
Substituting r = di in the general learning rule, we obtain the correlation learning rule. The adjustment for the weight vector is given by

∆Wi = c di X

6. Winner-take-all learning rule:
This rule is applicable for an ensemble of neurons, say arranged in a layer of p units. It is based on the premise that one of the neurons in the layer, say the mth, has the maximum response to the input X. This neuron is declared the winner, and as a result of this winning event, its weight vector Wm is the only one adjusted in this unsupervised learning. Its increment is computed as

∆Wm = α (X − Wm)

[Figure: Winner-take-all learning rule: a layer of p neurons; only the winning neuron's weight vector Wm (highlighted by the double-headed arrow) is adjusted.]
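A winner-take-all step can be sketched as follows: find the neuron with the maximum response, then move only its weight vector toward the input (Ws is a list of weight vectors, one per neuron; names are illustrative):

```python
# Winner-take-all: the neuron with the largest activation wins, and only
# its weight vector is adjusted: dWm = alpha * (X - Wm).
def wta_step(Ws, X, alpha=0.1):
    nets = [sum(w * x for w, x in zip(W, X)) for W in Ws]
    m = nets.index(max(nets))                    # winning neuron
    Ws[m] = [w + alpha * (x - w) for w, x in zip(Ws[m], X)]
    return Ws
```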
7. Outstar learning rule:
This is another learning rule that is best explained when the neurons are arranged in layers. The rule is designed to produce a desired response d from a layer of p neurons, and it is concerned with supervised learning. The weight adjustment is computed as

∆Wj = β (d − Wj)

[Figure: Outstar learning rule: a layer of p neurons with desired responses d1 … dp; the differences dm − om, scaled by β, drive the weight adjustments ∆Wj.]

Training a neural network:

Since the output of the neural network may not be what is expected, the network needs to be trained. Training involves altering the interconnection weights between the neurons; a criterion is needed to specify when to change the weights and how to change them. Training is an external process, while learning is the process that takes place internally in the network. The following guidelines serve as a step-by-step methodology for training a network.

• Choosing the number of neurons
The number of hidden neurons affects how well the network is able to separate the data. A large number of hidden neurons will ensure correct learning, and the network will be able to correctly predict the data it has been trained on, but its performance on new data, its ability to generalize, is compromised. With too few hidden neurons, the network may be unable to learn the relationships amongst the data, and the error will fail to fall below an acceptable level. Selection of the number of hidden neurons is thus a crucial decision. Often a trial-and-error approach is taken, starting with a modest number of hidden neurons and gradually increasing the number if the network fails to reduce its error. A much-used approximation for the number of hidden neurons in a three-layer network is N = (J + K)/2 + √P, where J and K are the numbers of input and output neurons and P is the number of patterns in the training set (a sketch of this rule appears after this list).

• Choosing the initial weights
The learning algorithm uses a steepest-descent technique, which rolls straight downhill in weight space until the first valley is reached. This valley may not correspond to zero error for the resulting network, which makes the choice of the initial starting point in the multidimensional weight space critical. However, there are no recommended rules for this selection, except trying several different initial weight values to see whether the network's results improve.

• Choosing the learning rate
The learning rate effectively controls the size of the step that is taken in multidimensional weight space when each weight is modified. If the selected learning rate is too large, the local minimum may be constantly overstepped, resulting in oscillations and slow convergence to a lower error state. If the learning rate is too low, the number of iterations required may be too large, resulting in slow performance. The default values of most commercial neural network packages are usually in the range 0.1-0.3, providing a satisfactory balance between the two; reducing the learning rate may help improve convergence to a local minimum of the error surface.

• Choosing the activation function
The learning signal is a function of the error multiplied by the gradient of the activation function, df/d(net): the larger the value of the gradient, the more weight the learning will receive. For training, the activation function must be monotonically increasing from left to right, differentiable and smooth.
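The sizing approximation is trivial to apply. A sketch, assuming the √P reading of the formula above:

```python
import math

# Rule-of-thumb hidden-layer size for a three-layer network:
# N = (J + K) / 2 + sqrt(P), rounded to an integer.
def hidden_neurons(J, K, P):
    return round((J + K) / 2 + math.sqrt(P))

print(hidden_neurons(10, 2, 100))  # 10 inputs, 2 outputs, 100 patterns -> 16
```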
Models of artificial neural networks

There are different kinds of neural network models that can be used. Some of the common ones are:

1. Perceptron model:
This is a very simple model and consists of a single 'trainable' neuron. Trainable means that its threshold and input weights are modifiable. Each input pattern has a desired output (determined by us); if the neuron doesn't give the desired output, it has made a mistake, and to rectify this its threshold and/or input weights must be changed. How this change is to be calculated is determined by the learning algorithm.

The output of the perceptron is constrained to Boolean values: (true, false), (1, 0), (1, -1) or whatever. This is not a limitation, because if the output of the perceptron were to be the input for something else, the output edge could be given a weight, and the output would then depend on this weight.

[Figure: The perceptron: inputs X1, X2, …, Xn with weights w1, w2, …, wn feeding a single neuron with output y.]

X1, X2, …, Xn are the inputs; these could be real numbers or Boolean values depending on the problem. y is the output, and is Boolean. w1, w2, …, wn are the weights of the edges and are real-valued. T is the threshold and is real-valued. The output y is 1 if the net input

w1·x1 + w2·x2 + … + wn·xn

is greater than the threshold T; otherwise the output is zero.
2. Feed-forward model:
An elementary feed-forward architecture of m neurons receiving n inputs is shown in the figure below. Its output and input vectors are, respectively,

O = [O1 O2 … Om]ᵗ
X = [x1 x2 … xn]ᵗ

Weight Wij connects the ith neuron with the jth input. Hence the activation value net i for the ith neuron is

net i = Σj=1..n Wij xj,  for i = 1, 2, …, m

and the output is given as

Oi = f(Wiᵗ X),  for i = 1, 2, …, m

where Wi is the weight vector containing the weights leading toward the ith output node, defined as

Wi = [Wi1 Wi2 … Win]

If Γ is the nonlinear matrix operator, the mapping of input space X to output space O implemented by the network can be expressed as

O = Γ[W X]

where W is the weight matrix, also called the connection matrix.

The generic feed-forward network is characterized by the lack of feedback. This type of network can be connected in cascade to create a multilayer network.

[Figure: Single-layer feed-forward network: interconnection scheme and block diagram.]
      | W11  W12  …  W1n |
  W = | W21  W22  …  W2n |
      |  ⋮    ⋮        ⋮  |
      | Wm1  Wm2  …  Wmn |

and Γ[·] is the m×m diagonal nonlinear operator

         | f(·)   0   …   0   |
  Γ[·] = |  0   f(·)  …   0   |
         |  ⋮     ⋮        ⋮   |
         |  0     0   …  f(·) |
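The single-layer mapping O = Γ[W X] can be sketched in a few lines (W as a list of row vectors Wi; f defaults to the sigmoid):

```python
import math

# Feed-forward mapping O = GAMMA[W X]: each output is Oi = f(Wi . X).
def feedforward(W, X, f=lambda net: 1.0 / (1.0 + math.exp(-net))):
    return [f(sum(w * x for w, x in zip(row, X))) for row in W]
```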
3. Feedback model:
A feedback network can be obtained from the feed-forward network by connecting the neurons' outputs to their inputs, as shown in the figure below. The essence of closing the feedback loop is to enable control of output Oi through the outputs Oj, for j = 1, 2, …, m, or to control the output at instant t + ∆ by the output at instant t. The delay ∆ is introduced by the delay element in the feedback loop. The mapping of O(t) into O(t + ∆) can now be written as

O(t + ∆) = Γ[W O(t)]

[Figure: Single-layer feedback network: interconnection scheme and block diagram, with lag-free neurons and delay elements ∆ closing the loop from outputs back to inputs.]
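The feedback mapping applies the same step to the network's own delayed output. A sketch, assuming a square weight matrix so that the outputs can be fed back as inputs:

```python
import math

def f_sig(net):
    return 1.0 / (1.0 + math.exp(-net))

# Feedback operation: O(t + delta) = GAMMA[W O(t)], starting from the
# initial input X at t = 0. Assumes W is square (m == n).
def feedback(W, X, steps, f=f_sig):
    O = [f(sum(w * x for w, x in zip(row, X))) for row in W]   # response to X
    for _ in range(steps):
        O = [f(sum(w * o for w, o in zip(row, O))) for row in W]  # recirculate
    return O
```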
Notations used (for a three-layer back-propagation network):

M1 is a 2-D matrix where M1[i][j] represents the weight on the connection from the ith input neuron to the jth neuron in the hidden layer.
M2 is a 2-D matrix where M2[i][j] denotes the weight on the connection from the ith hidden-layer neuron to the jth output-layer neuron.
x, y and z denote the outputs of neurons in the input layer, hidden layer and output layer respectively. If there are m input data, they are (x1, x2, …, xm).
P denotes the desired output pattern, with components (p1, p2, …, pr) for r outputs.
n is the number of hidden-layer neurons.
βo = learning parameter for the output layer.
βh = learning parameter for the hidden layer.
α = momentum term.
θj = threshold value (bias) for the jth hidden-layer neuron.
τj = threshold value for the jth output-layer neuron.
ej = error at the jth output-layer neuron.
tj = error at the jth hidden-layer neuron.
Threshold function = sigmoid function: F(x) = 1/(1 + exp(-x)).

Mathematical expressions:

Output of the jth hidden-layer neuron: yj = f((Σi xi M1[i][j]) + θj)
Output of the jth output-layer neuron: zj = f((Σi yi M2[i][j]) + τj)
jth component of the vector of output differences: desired value − computed value = pj − zj
jth component of the output error at the output layer: ej = pj − zj
ith component of the output error at the hidden layer: ti = yi (1 − yi) (Σj M2[i][j] ej)
Adjustment of the weight between the ith neuron in the hidden layer and the jth output neuron: ∆M2[i][j](t) = βo yi ej + α ∆M2[i][j](t − 1)
Adjustment of the weight between the ith input neuron and the jth neuron in the hidden layer: ∆M1[i][j](t) = βh xi tj + α ∆M1[i][j](t − 1)
Adjustment of the threshold value for the jth output neuron: ∆τj = βo ej
Adjustment of the threshold value for the jth hidden-layer neuron: ∆θj = βh tj
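Taken together, the expressions above amount to one training step of back-propagation. A compact sketch in the same notation (M1, M2, theta, tau, βo, βh, α), assuming the previous deltas are kept for the momentum term and start as zero matrices:

```python
import math

def f(x):
    return 1.0 / (1.0 + math.exp(-x))   # sigmoid threshold function

# One back-propagation step. x: inputs, p: desired outputs, M1/M2: weight
# matrices, theta/tau: hidden and output biases, dM1/dM2: previous weight
# deltas (for the momentum term alpha). All updates are made in place.
def backprop_step(x, p, M1, M2, theta, tau, dM1, dM2,
                  beta_o=0.1, beta_h=0.1, alpha=0.5):
    n, r = len(theta), len(tau)
    y = [f(sum(x[i] * M1[i][j] for i in range(len(x))) + theta[j])
         for j in range(n)]                                  # hidden outputs
    z = [f(sum(y[i] * M2[i][j] for i in range(n)) + tau[j])
         for j in range(r)]                                  # network outputs
    e = [p[j] - z[j] for j in range(r)]                      # output error
    t = [y[i] * (1 - y[i]) * sum(M2[i][j] * e[j] for j in range(r))
         for i in range(n)]                                  # hidden error
    for i in range(n):
        for j in range(r):                                   # hidden -> output
            dM2[i][j] = beta_o * y[i] * e[j] + alpha * dM2[i][j]
            M2[i][j] += dM2[i][j]
    for i in range(len(x)):
        for j in range(n):                                   # input -> hidden
            dM1[i][j] = beta_h * x[i] * t[j] + alpha * dM1[i][j]
            M1[i][j] += dM1[i][j]
    for j in range(r):
        tau[j] += beta_o * e[j]                              # output biases
    for j in range(n):
        theta[j] += beta_h * t[j]                            # hidden biases
```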
Neural network applications:

Aerospace
• High-performance aircraft autopilots, flight path simulation, aircraft control systems, autopilot enhancements, aircraft component simulation, aircraft component fault detection.

Automobile control
• Automobile automatic guidance systems, warranty activity analysis.

Banking
• Check and other document reading, credit application evaluation.

Credit card activity checking
• Neural networks are used to spot unusual credit card activity that might possibly be associated with the loss of a credit card.

Defense
• Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification.

Electronics
• Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling.

Entertainment
• Animation, special effects, market forecasting.

Financial
• Real estate appraisal, loan advising, mortgage screening, corporate bond rating, credit-line use analysis, portfolio trading programs, corporate financial analysis, and currency price prediction.

Industrial
• Neural networks are being trained to predict the output gases of furnaces and other industrial processes, replacing the complex and costly equipment used for this purpose in the past.

Insurance
• Policy application evaluation, product optimization.

Manufacturing
• Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality analysis, paper quality prediction, computer-chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, and dynamic modeling of chemical process systems.
Medical
• Breast cancer cell analysis, EEG and ECG analysis, prosthesis design, optimization of transplant times, hospital expense reduction, hospital quality improvement, and emergency room test advisement.

Oil and gas
• Exploration of oil and gas.

Robotics
• Trajectory control, forklift robots, manipulator controllers, vision systems.

Other applications
• Artificial intelligence
• Character recognition
• Image understanding
• Logistics
• Optimization
• Quality control
• Visualization

Advantages and disadvantages of ANNs:

Advantages:
1. They involve human-like thinking.
2. They handle noisy or missing data.
3. They create their own relationships amongst the information; no equations are needed.
4. They can work with large numbers of variables or parameters.
5. They provide general solutions with good predictive accuracy.
6. The system has the property of continuous learning.
7. They deal with the non-linearity of the world in which we live.

Disadvantages:
1. The learning process may take several months or years.
2. They are hard to implement.
Conclusion

In the present world of automation, neural networks and fuzzy logic systems are required to automate systems. Fuzzy logic deals with vagueness or uncertainty, while neural networks relate to human-like thinking. If we use only fuzzy systems or only neural networks, complete automation is never possible. The combination is suitable because fuzzy logic has tolerance for imprecise data, while neural networks have tolerance for noisy data. Given their complementary advantages and disadvantages, a combination of neural networks and fuzzy logic is used to implement real-world systems without manual interference.

Artificial neural networks are, in the present scenario, novel in their technological field, and we have yet to witness much of their development in the coming eras; speculation is not required, as the results will speak for themselves.