Soft Computing (173101)
UNIT-1
Introduction to Soft Computing
 The idea of Soft Computing was initiated by Lotfi A. Zadeh.
Definition: “Soft Computing is an emerging approach to computing which parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision.”
Zadeh describes SC as one multidisciplinary system: the fusion (combination) of the fields of Fuzzy Logic, Neuro-Computing, Genetic Computing and Probabilistic Computing.
It is a fusion of methodologies designed to model and enable solutions to real-world problems which cannot be modeled, or are too difficult to model, mathematically.
These methodologies share two features: “adaptivity” and “knowledge”.
Introduction to Soft Computing
 SC consists of: Neural Networks, Fuzzy Systems, and Genetic Algorithms.
 Neural Networks: for learning and adaptation.
 Fuzzy Systems: for knowledge representation via fuzzy if-then rules.
 Genetic Algorithms: for evolutionary computation.
Soft Computing is still growing and developing.
Goal of Soft Computing
 It is a new multidisciplinary field that aims to construct a new generation of Artificial Intelligence, known as Computational Intelligence.
 The main goal is to develop intelligent machines that provide solutions to real-world problems which cannot be modeled, or are too difficult to model, mathematically.
 Its aim is to exploit the tolerance for approximation, uncertainty, imprecision, and partial truth in order to achieve a close resemblance to human-like decision making.
Neural Networks (NN)
 NN are simplified models of the biological neuron system.
 Neural network: an information processing paradigm (model) inspired by biological nervous systems, such as the brain.
 Structure: a large number of highly interconnected processing elements (neurons) working together.
 Like people, they learn from experience (by example); they are therefore trained with known examples of a problem to acquire knowledge.
 NN adopt various learning mechanisms (supervised and unsupervised learning are the most popular).
Neural Networks (NN)
 Characteristics, such as:
Mapping capabilities / pattern recognition.
Data classification.
Generalization.
High-speed information processing.
Parallel distributed processing.
 In a biological system,
learning involves adjustments to the synaptic connections
between neurons.
Architecture:
 Feed Forward (Single layer and Multi layer)
 Recurrent.
Neural Networks (NN)
Where can neural network systems help?
 When we can't formulate an algorithmic solution.
 When we can get lots of examples of the behavior we require (‘learning from experience’).
 When we need to pick out the structure from existing data.
Neural Networks (NN)
 Biological Neuron.
 The brain contains about 10^10 basic units called neurons (small cells).
 Each neuron is connected to about 10^4 other neurons.
 A neuron receives electro-chemical signals from its various sources and transmits electrical impulses to other neurons.
 The average brain weighs about 1.5 kg, and an average neuron weighs about 1.5 × 10^-9 grams.
Neural Networks (NN)
 Biological Neuron.
 While some of the neurons perform input and output operations, the rest form part of an interconnected network responsible for signal transformation and storage of information.
 A neuron is composed of:
 Cell body, known as the soma (behaves as the processing unit).
 Dendrites (behave as input channels).
 Axon (behaves as the output channel).
Key Elements of NN
 Neural computing requires a number of neurons to be connected together into a neural network. Neurons are arranged in layers.
 Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output. At each neuron, every input has an associated weight which modifies the strength of that input. The neuron simply adds together all the weighted inputs and calculates an output to be passed on.
[Figure: a single neuron with inputs p1, p2, p3, weights w1, w2, w3, a bias b on a constant input of 1, and output a.]
a = f(w1·p1 + w2·p2 + w3·p3 + b) = f(Σ wi·pi + b)
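A minimal Python sketch of such a neuron (not part of the original slides); the function name, the example values, and the choice of a logistic activation for f are illustrative assumptions:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, passed through a logistic activation."""
    net = sum(w * p for w, p in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Three inputs p1, p2, p3 with weights w1, w2, w3 and bias b (all values illustrative)
a = neuron_output(inputs=[0.5, 0.2, 0.1], weights=[0.4, -0.3, 0.9], bias=0.1)
print(a)
```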
Artificial Neural Network
 What is an Artificial Neuron?
 Definition: a non-linear, parameterized function with a restricted output range.
 The neuron calculates a weighted sum of its inputs and compares it to a threshold Θ. If the sum is higher than the threshold Θ, the output is set to 1; otherwise it is set to 0.
y = f( w0 + Σ wi·xi ), with the sum taken over i = 1, …, n−1
[Figure: a neuron with inputs x1, x2, x3, bias weight w0, threshold Θ, and output y.]
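As an illustrative sketch, the same threshold unit in Python, using the ≥ 0 convention adopted in the Perceptron slide below; the function name and example values are assumptions:

```python
def threshold_neuron(x, w, w0):
    """Return 1 if the net input w0 + sum(wi*xi) reaches the threshold (0), else 0."""
    net = w0 + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net >= 0 else 0

print(threshold_neuron(x=[1, 0, 1], w=[0.5, -0.2, 0.3], w0=-0.6))  # net = 0.2 -> output 1
```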
Activation Function
 Performs a mathematical operation on the signal output.
Linear: y = x
Logistic: y = 1 / (1 + exp(−x))
Hyperbolic tangent: y = (exp(x) − exp(−x)) / (exp(x) + exp(−x))
[Plots of the activation functions over x from −10 to 10.]
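A short sketch of these three activation functions in Python (the function names are illustrative):

```python
import math

def linear(x):
    return x                                   # y = x

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))          # y = 1 / (1 + e^-x), output in (0, 1)

def hyperbolic_tangent(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))  # output in (-1, 1)

for f in (linear, logistic, hyperbolic_tangent):
    print(f.__name__, round(f(0.5), 4))
```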
Architecture of ANN
 Feed Forward Neural Network.
Single Layer Feed Forward Neural Network.
Multi Layer Feed Forward Neural Network.
 Recurrent Neural Network.
Feed Forward Neural Networks
 The information is propagated
from the inputs to the outputs
[Figure: a feed-forward network with inputs x1, x2, …, xn, a 1st hidden layer, a 2nd hidden layer, and an output layer.]
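A minimal sketch of such a forward pass in Python, assuming a logistic activation and illustrative layer sizes and weights (none of these values come from the slides):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: every neuron sums all weighted inputs, adds a bias,
    and applies the logistic activation."""
    return [logistic(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x  = [0.5, 0.9]                                          # inputs x1, x2
h1 = layer(x,  [[0.2, -0.4], [0.7, 0.1]], [0.0, 0.1])    # 1st hidden layer (2 neurons)
h2 = layer(h1, [[0.5, 0.5], [-0.3, 0.8]], [0.0, 0.0])    # 2nd hidden layer (2 neurons)
y  = layer(h2, [[1.0, -1.0]], [0.2])                     # output layer (1 neuron)
print(y)
```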
Recurrent Neural Networks
 There is at least one feedback loop.
[Figure: the same layered structure as the feed-forward network, with additional feedback connections.]
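A minimal sketch of a feedback loop, assuming a single recurrent neuron whose previous output is fed back as an extra input (weights and input sequence are illustrative):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

w_in, w_back, bias = 0.8, 0.5, -0.2   # input weight, feedback weight, bias (illustrative)
h = 0.0                               # fed-back output, initially zero

for x in [1.0, 0.0, 1.0, 1.0]:        # inputs presented one step at a time
    h = logistic(w_in * x + w_back * h + bias)  # current output depends on the previous output
    print(round(h, 4))
```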
Learning Methods
Supervised Learning:
 A teacher is assumed to be present during the learning
process.
 Each input pattern used to train the network is associated with an output pattern (the target pattern).
 To determine the error, the network’s calculated output is compared with the expected target output.
 The error can be used to change the network parameters, which results in an improvement in performance.
Learning Methods
Unsupervised Learning:
 No teacher is assumed to be present during the learning process.
 The target output is not presented to the network.
 The network therefore learns by itself.
Reinforced learning:
 A teacher is available but does not present the expected answer.
 It only indicates whether the computed output is correct or incorrect.
Perceptron
 The perceptron neuron produces a 1 if the net input into the
transfer function is equal to or greater than 0, otherwise it
produces a 0.
• It’s a single-unit network
• Change the weight by an
amount proportional to
the difference between
the desired output and
the actual output.
Δ Wi = η · (D − Y) · Ii
where D = desired output, Y = actual output, Ii = input i, and η = learning rate.
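A minimal Python sketch of this learning rule (function names, the learning rate, and the treatment of the bias as a weight on a constant input of 1 are illustrative assumptions):

```python
def perceptron_output(weights, bias, inputs):
    """1 if the net input is greater than or equal to 0, otherwise 0."""
    net = sum(w * i for w, i in zip(weights, inputs)) + bias
    return 1 if net >= 0 else 0

def update(weights, bias, inputs, desired, eta=0.1):
    """Delta W_i = eta * (D - Y) * I_i; the bias is treated as a weight on a constant input 1."""
    actual = perceptron_output(weights, bias, inputs)
    error = desired - actual                     # (D - Y)
    new_weights = [w + eta * error * i for w, i in zip(weights, inputs)]
    new_bias = bias + eta * error
    return new_weights, new_bias
```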
Example: A simple single unit adaptive
network
• The network has 2 inputs,
and one output. All are
binary. The output is
– 1 if W0I0 + W1I1 + Wb > 0
– 0 if W0I0 + W1I1 + Wb ≤ 0
• We want it to learn simple
OR: output a 1 if either I0
or I1 is 1.
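A sketch of training this unit on the OR truth table with the rule above, under the assumption of zero initial weights and a learning rate of 0.1:

```python
# OR truth table: (I0, I1) -> desired output D
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w0, w1, wb = 0.0, 0.0, 0.0   # weights and bias weight, starting from zero
eta = 0.1                    # learning rate (illustrative)

for epoch in range(20):
    for (i0, i1), desired in data:
        actual = 1 if w0 * i0 + w1 * i1 + wb > 0 else 0   # output rule from the slide
        error = desired - actual
        w0 += eta * error * i0
        w1 += eta * error * i1
        wb += eta * error                                  # bias input treated as constant 1

for (i0, i1), desired in data:
    print((i0, i1), 1 if w0 * i0 + w1 * i1 + wb > 0 else 0)
```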