Artificial Neural Networks: An Introduction
Fundamental concept
• Neural networks are constructed and implemented to model the human brain.
• They perform tasks such as pattern recognition, image detection, face detection, object detection, and speech processing, with applications in the automotive, aerospace, military, and medical fields.
• These tasks are difficult for traditional computers.
ANN
• An ANN possesses a large number of processing elements, called nodes or neurons, which operate in parallel.
• Neurons are connected to one another by connection links.
• Each link is associated with a weight, which carries information about the input signal.
• Each neuron has an internal state of its own, which is a function of the inputs the neuron receives; this function is called the activation function.
Artificial Neural Networks
A neuron Y receives inputs x1 and x2 from input units X1 and X2 over links with weights w1 and w2:

    y_in = x1·w1 + x2·w2
    y = f(y_in)

Neural net of a pure linear equation
A single input X connected to output Y through weight m computes y = mx.
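The two-input neuron above can be sketched directly in Python; the weights, inputs, and the choice of a step activation below are illustrative, not taken from the slides:

```python
def neuron(x1, x2, w1, w2, f):
    """Compute the net input y_in = x1*w1 + x2*w2, then apply activation f."""
    y_in = x1 * w1 + x2 * w2
    return f(y_in)

# Illustrative binary step activation with threshold 0
def step(net):
    return 1 if net >= 0 else 0

y = neuron(1, 0, 0.5, -0.3, step)  # y_in = 1*0.5 + 0*(-0.3) = 0.5, so y = 1
```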
Comparison between the brain and a computer (ANN)
• Speed: the brain takes a few milliseconds per operation; an ANN takes a few nanoseconds and performs massively parallel processing.
• Size and complexity: the brain has about 10^11 neurons and 10^15 interconnections; the size of an ANN depends on the designer.
• Storage capacity: the brain stores information in its interconnections (synapses), with no loss of memory; a computer stores information in contiguous memory locations, where loss of memory may sometimes happen.
• Tolerance: the brain has fault tolerance; an ANN has no fault tolerance, and information gets disrupted when interconnections are disconnected.
• Control mechanism: complicated in the brain, involving chemicals in the biological neuron; simpler in an ANN.
Basic Models of ANN
• Interconnections
• Learning rules
• Activation function

Classification based on interconnections
• Feedforward networks
  – Single layer
  – Multilayer
• Feedback (recurrent) networks
  – Single layer
  – Multilayer
Single-Layer Feedforward Network

Multilayer Feedforward Network
A multilayer feedforward network can be used to solve complicated problems.
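A minimal sketch of a forward pass through a multilayer feedforward network; the layer sizes, weights, and sigmoid activation below are illustrative choices, not from the slides:

```python
import numpy as np

def forward(x, weights, f):
    """Propagate input x through successive layers; f is the activation function."""
    a = x
    for W in weights:
        a = f(W @ a)  # each layer: activation of the weighted sums
    return a

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative 2-input, 3-hidden, 1-output network
W1 = np.array([[ 0.2, -0.1],
               [ 0.4,  0.3],
               [-0.5,  0.6]])
W2 = np.array([[0.1, -0.2, 0.3]])
out = forward(np.array([1.0, 0.5]), [W1, W2], sigmoid)  # a single value in (0, 1)
```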
Feedback Network
When outputs are directed back as inputs to nodes in the same or a preceding layer, the result is a feedback network.
Recurrent Networks
• Single-layer recurrent networks
• Multilayer recurrent networks
Feedback networks with closed loops are called recurrent networks. The response at the (k+1)-th instant depends on the entire history of the network starting at k = 0.

Multilayer Recurrent Networks
Learning
• The process of modifying the weights in
the connections between network layers
with the objective of achieving the
expected output is called learning or
training a network.
• This is achieved through
– Supervised learning
– Unsupervised learning
– Reinforcement learning
Classification of learning
• Supervised learning
• Unsupervised learning
• Reinforcement learning
Supervised Learning
• Like a child learning from a teacher
• Each input vector requires a corresponding target vector
• Learning pair = [input vector, target vector]
[Figure: the input X feeds the neural network with weights W, producing the actual output Y; an error signal generator compares Y with the desired output D and sends the error (D - Y) back as error signals to adjust W]
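One concrete way the error signal (D - Y) can drive the weight update is the delta rule, delta_w_i = alpha * (D - Y) * x_i. The slides do not fix a specific rule, so the rule, learning rate, and data below are illustrative:

```python
def delta_update(w, x, d, alpha):
    """One supervised step: actual output, error (D - Y), weight adjustment."""
    y = sum(wi * xi for wi, xi in zip(w, x))   # actual output of a linear unit
    error = d - y                              # error signal (D - Y)
    return [wi + alpha * error * xi for wi, xi in zip(w, x)]

w = delta_update([0.0, 0.0], [1.0, 1.0], d=1.0, alpha=0.5)
# error = 1.0, so each weight moves by alpha * error * x_i = 0.5
```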
Unsupervised Learning
• Like the way a fish or tadpole learns
• All similar input patterns are grouped together as clusters
• If a matching input pattern is not found, a new cluster is formed
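A minimal sketch of this grouping behaviour: an input joins an existing cluster if one matches within a tolerance, otherwise it starts a new cluster. The one-dimensional data, distance measure, and tolerance are illustrative choices:

```python
def assign(centres, x, tol):
    """Group x with an existing cluster centre within tol,
    otherwise form a new cluster around x."""
    for c in centres:
        if abs(c - x) <= tol:   # a matching pattern was found
            return centres
    centres.append(x)           # no match: a new cluster is formed
    return centres

centres = []
for x in [1.0, 1.1, 5.0]:
    assign(centres, x, tol=0.5)
# 1.0 and 1.1 fall into one cluster; 5.0 starts a second
```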
Reinforcement Learning
[Figure: as in the supervised setup, the input X feeds the network (NN) with weights W, producing the actual output Y; here the error signal generator receives only a reinforcement signal R instead of the desired output, and sends error signals back to adjust W]
When is Reinforcement Learning used?
• When only limited information is available about the target output values (critic information)
• Learning based on this critic information is called reinforcement learning, and the feedback sent is called the reinforcement signal
• Feedback in this case is only evaluative, not instructive
Activation Function
1. Identity function: f(x) = x for all x
2. Binary step function:
       f(x) = 1 if x ≥ θ
              0 if x < θ
3. Bipolar step function:
       f(x) = 1 if x ≥ θ
             -1 if x < θ
4. Ramp function:
       f(x) = 1 if x > 1
              x if 0 ≤ x ≤ 1
              0 if x < 0
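The four activation functions above, sketched in Python with the threshold θ as a parameter:

```python
def identity(x):
    return x                        # f(x) = x for all x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0   # 1 at or above the threshold, else 0

def bipolar_step(x, theta=0.0):
    return 1 if x >= theta else -1  # 1 at or above the threshold, else -1

def ramp(x):
    if x > 1:
        return 1                    # saturates at 1
    if x < 0:
        return 0                    # saturates at 0
    return x                        # linear between 0 and 1
```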
Some learning algorithms
• Supervised:
  – Adaline, Madaline
  – Perceptron
  – Backpropagation
  – Multilayer perceptrons
  – Radial basis function networks
• Unsupervised:
  – Competitive learning
  – Kohonen self-organizing map
  – Learning vector quantization
  – Hebbian learning
Important terminology of ANNs
• Weights
• Bias
• Threshold
• Learning rate
Weights
• Each neuron is connected to every other neuron by means of directed links
• Links are associated with weights
• Weights contain information about the input signal and are represented as a matrix
• The weight matrix is also called the connection matrix
Weight matrix

        | w1^T |     | w11 w12 w13 ... w1m |
W   =   | w2^T |  =  | w21 w22 w23 ... w2m |
        |  ...  |    | ... ... ... ... ... |
        | wn^T |     | wn1 wn2 wn3 ... wnm |

where wi^T is the transpose of the weight vector of the i-th neuron.
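Stacking each neuron's weight vector wi^T as a row gives the connection matrix; a small NumPy sketch (the sizes and values are illustrative):

```python
import numpy as np

# Two neurons, each with three incoming weights; rows are w1^T and w2^T
w1 = np.array([0.1, 0.2, 0.3])
w2 = np.array([0.4, 0.5, 0.6])
W = np.vstack([w1, w2])              # shape (n, m): n neurons, m weights each

net = W @ np.array([1.0, 0.0, 1.0])  # net input of every neuron at once
```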
Bias
• Bias acts like another weight. It is included by adding a component x0 = 1 to the input vector X.
• X = (1, X1, X2, …, Xi, …, Xn)
• Bias is of two types:
  – Positive bias: increases the net input
  – Negative bias: decreases the net input
Why is bias required?
• The relationship between input and output is given by the equation of a straight line, y = mx + c
[Figure: the input X feeds unit Y through weight m, with bias C added, giving y = mx + C]
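Treating the bias as a weight on a fixed input x0 = 1 turns y = mx + c into a single weighted sum; a minimal sketch (the values of m and c are illustrative):

```python
def net_input(weights, x):
    """weights[0] is the bias c, paired with the fixed component x0 = 1."""
    augmented = [1.0] + list(x)
    return sum(w * xi for w, xi in zip(weights, augmented))

# y = m*x + c with m = 2 and c = 0.5
y = net_input([0.5, 2.0], [3.0])   # 0.5*1 + 2.0*3.0 = 6.5
```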
Threshold
• A set value based upon which the final output of the network is calculated
• Used in the activation function
• The activation function using a threshold θ can be defined as

    f(net) = 1 if net ≥ θ
            -1 if net < θ
Learning rate
• Denoted by α
• Used to control the amount of weight adjustment at each step of training
• A learning rate ranging from 0 to 1 determines the rate of learning at each time step
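The learning rate α simply scales each weight adjustment; a minimal sketch showing the same raw change producing a cautious or an aggressive step (the values are illustrative):

```python
def adjust(w, delta, alpha):
    """Scale the raw weight change delta by the learning rate alpha."""
    return w + alpha * delta

small_step = adjust(0.0, 1.0, alpha=0.1)   # cautious update
large_step = adjust(0.0, 1.0, alpha=0.9)   # aggressive update
```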
Hebb Network Learning
• The Hebb learning rule is the simplest one
• Learning in the brain is performed by changes in the synaptic gap
• When an axon of cell A is near enough to excite cell B and repeatedly takes part in firing it, some growth process takes place in one or both cells
• According to the Hebb rule, the weight vector increases proportionately to the product of the input and the learning signal:

    wi(new) = wi(old) + xi·y
Flow chart of the Hebb learning algorithm
1. Start and initialize the weights (and bias).
2. For each training pair:
   a. Activate the input units.
   b. Activate the output unit.
   c. Update the weights: wi(new) = wi(old) + xi·y
   d. Update the bias: b(new) = b(old) + y
3. When no training pairs remain, stop.
• The Hebb rule can be used for pattern association, pattern categorization, pattern classification, and a range of other areas.
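Putting the weight and bias updates together, the whole flow chart can be run over a training set; below is a sketch on the AND function with bipolar inputs and targets, a standard exercise (the data set is not from the slides):

```python
def hebb_train(samples):
    """One pass of Hebb learning: w_i(new) = w_i(old) + x_i*y, b(new) = b(old) + y."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for x, y in samples:
        for i in range(n):
            w[i] += x[i] * y    # weight update
        b += y                  # bias update
    return w, b

# AND function with bipolar inputs and targets
and_samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = hebb_train(and_samples)   # w = [2.0, 2.0], b = -2.0
```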
Applications of Neural Networks:
They can perform tasks that are easy for a human but difficult for a machine:
• Aerospace − Aircraft autopilots, aircraft fault detection.
• Automotive − Automobile guidance systems.
• Military − Weapon orientation and steering, target tracking,
object discrimination, facial recognition, signal/image
identification.
• Electronics − Code sequence prediction, IC chip layout, chip
failure analysis, machine vision, voice synthesis.
• Financial − Real estate appraisal, loan advisor, mortgage
screening, corporate bond rating, portfolio trading program,
corporate financial analysis, currency value prediction,
document readers, credit application evaluators.
• Industrial − Manufacturing process control, product design and analysis, quality
inspection systems, welding quality analysis, paper quality prediction, chemical
product design analysis, dynamic modeling of chemical process systems,
machine maintenance analysis, project bidding, planning, and management.
• Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design,
transplant time optimizer.
• Speech − Speech recognition, speech classification, text to speech conversion.
• Telecommunications − Image and data compression, automated information
services, real-time spoken language translation.
• Transportation − Truck brake system diagnosis, vehicle scheduling, routing
systems.
• Software − Pattern Recognition in facial recognition, optical character
recognition, etc.
• Time Series Prediction − ANNs are used to make predictions on stocks and
natural calamities.
• Signal Processing − Neural networks can be trained to process an audio signal
and filter it appropriately in the hearing aids.
• Control − ANNs are often used to make steering decisions of physical vehicles.