Neural Networks
REFERENCES: DATA MINING TECHNIQUES BY ARUN K. PUJARI
MRS. SOWMYA JYOTHI
SDMCBM
MANGALORE
Neural networks are a different paradigm for computing, one that draws its inspiration from neuroscience.
The human brain consists of a network of neurons, each of which is made up of a number of nerve fibres called dendrites, connected to the cell body where the cell nucleus is located.
The axon is a long, single fibre that originates from the cell body and branches near its end into a number of strands.
A single axon typically makes synapses with a number of other neurons.
Transmission across a synapse is a complex chemical process which effectively increases or decreases the electrical potential within the cell body of the receiving neuron.
Neuron vs. Node
Artificial neurons are highly simplified models of biological neurons.
Artificial neural networks are densely interconnected networks of processing elements (PEs), together with a rule to adjust the strength of the connections between the units in response to externally supplied data.
The simplest such network has two binary inputs, I0 and I1, and one binary output Y.
W0 and W1 are the connection strengths of I0 and I1 respectively.
Thus the total input received at the processing unit is given by
W0I0 + W1I1 - Wb
where Wb is the threshold.
The output Y takes the value 1 if W0I0 + W1I1 - Wb > 0, and 0 if W0I0 + W1I1 - Wb ≤ 0.
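To make this concrete, here is a minimal Python sketch (not from the original slides) of the two-input threshold unit just described. The weight and threshold values are illustrative assumptions, chosen so that the unit computes logical AND.

```python
# A two-input binary threshold unit. The values w0=1, w1=1, wb=1.5
# are illustrative choices that make the unit compute logical AND.

def perceptron_output(i0, i1, w0=1.0, w1=1.0, wb=1.5):
    """Fires (returns 1) when the weighted input exceeds the
    threshold Wb, otherwise returns 0."""
    total = w0 * i0 + w1 * i1 - wb
    return 1 if total > 0 else 0

for i0 in (0, 1):
    for i1 in (0, 1):
        print(i0, i1, "->", perceptron_output(i0, i1))  # AND truth table
```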
But this model, known as the perceptron, is far from a true model of a biological neuron: for a start, a biological neuron's output is a continuous function rather than a step function.
The model also has limited computational capability, as it can represent only a linear separation of its inputs.
There have been many improvements on this simple model, and many architectures have been presented in recent years.
The threshold function, or step function, is replaced by smoother, continuous functions called activation functions.
For a particular node, the weighted inputs (denoted Wi, i = 1, ..., n) are combined via a combination function that consists of a simple summation.
A transfer function then calculates a corresponding value from the result, yielding a single output, usually between 0 and 1. Together, the combination function and the transfer function make up the activation function of the node.
Three common transfer functions are the sigmoid, linear and hyperbolic tangent functions. The sigmoid function is very widely used; it produces values between 0 and 1 for any input from the combination function.
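The following short Python sketch (an illustration, not from the slides) shows the three transfer functions named above applied to the output of a simple summation; the weight and input values are arbitrary.

```python
# Combination function (summation) followed by three common
# transfer functions: sigmoid, linear, and hyperbolic tangent.
import math

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x):
    """Identity transfer: passes the combined input through unchanged."""
    return x

def tanh(x):
    """Hyperbolic tangent: squashes input into the range (-1, 1)."""
    return math.tanh(x)

weights = [0.4, -0.2, 0.7]   # illustrative values
inputs = [1.0, 0.5, -1.0]
combined = sum(w * x for w, x in zip(weights, inputs))
print(sigmoid(combined), linear(combined), tanh(combined))
```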
The Neuron
[Figure: a single artificial neuron. Inputs x1, x2, ..., xm are weighted by w1, w2, ..., wm, combined by a summing function, and passed through an activation function φ(·) to give the output y = φ(Σi wi xi).]
Individual nodes are linked together in different ways to create neural networks.
In a feed-forward network, the connections between layers are unidirectional, from input to output.
Two different architectures of the feed-forward network are the multi-layer perceptron and the radial basis function network.
MULTI-LAYER PERCEPTRON (MLP)
The MLP is a development of the simple perceptron in which extra hidden layers are added.
More than one hidden layer can be used.
The network topology is constrained to be feed-forward, i.e., loop-free.
Connections are allowed from the input layer to the first hidden layer, from the first hidden layer to the second, and so on, until the last hidden layer connects to the output layer.
Multi-layer feed-forward network
[Figure: a 3-4-2 network — an input layer of 3 nodes, one hidden layer of 4 nodes, and an output layer of 2 nodes, with unidirectional connections between successive layers.]
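As an illustration (not from the slides), the Python sketch below computes a forward pass through the 3-4-2 network of the figure, using sigmoid units; all weights are random placeholders rather than trained values.

```python
# Forward pass through a 3-4-2 feed-forward network: 3 inputs, one
# hidden layer of 4 sigmoid units, 2 sigmoid output units.
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One feed-forward layer: weighted sum plus bias, then sigmoid."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

random.seed(0)  # placeholder weights, not trained values
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b_hidden = [random.uniform(-1, 1) for _ in range(4)]
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b_out = [random.uniform(-1, 1) for _ in range(2)]

x = [0.5, -0.2, 0.9]                  # example input vector
hidden = layer(x, w_hidden, b_hidden)
output = layer(hidden, w_out, b_out)
print(output)                          # two values in (0, 1)
```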
Single-layer feed-forward network
[Figure: an input layer of source nodes connected directly to an output layer of neurons.]
RADIAL BASIS FUNCTION NETWORKS
Radial basis function (RBF) networks are feed-forward, but have only one hidden layer.
Like the MLP, RBF networks can learn arbitrary mappings; the primary difference is in the hidden layer.
RBF hidden layer units have a receptive field; that is, a particular input value at which they produce a maximal output.
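The following minimal Python sketch (an illustration, not from the slides) shows one Gaussian RBF hidden unit: its output is maximal when the input equals the unit's centre (its receptive field) and falls off with distance. The centre and width values are assumptions.

```python
# One Gaussian RBF hidden unit: output peaks at the centre and
# decays with squared distance; centre and width are illustrative.
import math

def rbf_unit(x, centre, width):
    """Gaussian radial basis function of the distance to the centre."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-dist_sq / (2.0 * width ** 2))

centre = [0.0, 1.0]
print(rbf_unit([0.0, 1.0], centre, width=0.5))  # 1.0, maximal at the centre
print(rbf_unit([1.0, 0.0], centre, width=0.5))  # much smaller far away
```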
LEARNING IN NN
In order to fit a particular ANN to a particular problem, it must be trained to generate the correct response for a given set of inputs.
1. Unsupervised training may be used when a clear link between the input data sets and the target output values does not exist.
2. Supervised training involves providing an ANN with specified input and output values and allowing it to iteratively reach a solution.
MLP and RBF networks employ supervised learning.
PERCEPTRON LEARNING RULE
This is the first learning scheme of neural computing.
The weights are changed by an amount proportional to the difference between the desired output and the actual output.
If W is the weight vector and ΔWi is the change in the ith weight, a learning rate parameter decides the magnitude of the change.
If the learning rate is high, the change in the weight is bigger at every step. The rule is given by
ΔWi = ξ(D - Y)·Ii
where ξ is the learning rate, D is the desired output and Y is the actual output.
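A minimal Python sketch of this rule follows (not from the slides). It trains a two-input perceptron on the AND function; the training set, initial weights and learning rate are illustrative, and the threshold is held fixed since the rule given adjusts only the weights.

```python
# Perceptron learning rule dWi = rate * (D - Y) * Ii, trained on AND.
def output(inputs, weights, threshold):
    total = sum(w * i for w, i in zip(weights, inputs)) - threshold
    return 1 if total > 0 else 0

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, threshold, rate = [0.0, 0.0], 0.5, 0.1  # illustrative start

for epoch in range(25):
    for inputs, desired in data:
        y = output(inputs, weights, threshold)
        # Change each weight in proportion to the error (D - Y).
        for i in range(len(weights)):
            weights[i] += rate * (desired - y) * inputs[i]

print(weights, [output(i, weights, threshold) for i, _ in data])
```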
TRAINING IN MLP
The multi-layer perceptron overcomes the above shortcoming of the single-layer perceptron, but learning in an MLP is not trivial.
The idea is to carry out the computation layer-wise, moving in the forward direction.
The weight adjustment is then done layer-wise, moving in the backward direction.
For the nodes in the output layer it is easy to compute the error, as we know both the actual outcome and the desired result.
For the nodes in the hidden layers, since we do not know the desired result, we propagate the error computed at the last layer backward.
This standard method used in training MLPs is called the backpropagation algorithm.
The learning steps consist of:
1. Forward pass: the output and the error at the output units are calculated.
2. Backward pass: the output-unit error is used to alter the weights on the output units. Then the error at the hidden nodes is calculated, and the weights on the hidden nodes are altered using these values.
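The Python sketch below (an illustration, not from the slides) shows one such forward and backward pass for a small 2-2-1 sigmoid network on a single, arbitrarily chosen training pattern; the initial weights are random and biases are omitted for brevity.

```python
# One backpropagation step for a 2-2-1 sigmoid network.
import math, random

def sig(x): return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
w_o = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
rate = 0.5
x, target = [1.0, 0.0], 1.0    # one illustrative training pattern

# Forward pass: hidden activations, network output, output-unit error.
h = [sig(sum(w * xi for w, xi in zip(ws, x))) for ws in w_h]
y = sig(sum(w * hi for w, hi in zip(w_o, h)))
delta_o = (target - y) * y * (1 - y)      # error term at the output unit

# Backward pass: propagate the error to the hidden units, then
# alter the output-layer and hidden-layer weights in turn.
delta_h = [h[j] * (1 - h[j]) * w_o[j] * delta_o for j in range(2)]
for j in range(2):
    w_o[j] += rate * delta_o * h[j]
    for i in range(2):
        w_h[j][i] += rate * delta_h[j] * x[i]
print(y, delta_o)
```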
TRAINING RBF NETWORKS
The RBF design involves deciding on the centres of the units and the sharpness of their Gaussians.
The centres and SDs (standard deviations) are decided first, by examining the vectors in the training data.
RBF networks are then trained in a similar way to MLPs: the output layer weights are trained using the delta rule.
The MLP is the most widely applied neural network technique.
RBF networks have the advantage that extra units can be added with their centres near parts of the input space that are difficult to classify.
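To illustrate (not from the slides), the following Python sketch trains only the output weights of a small RBF network with the delta rule, assuming the centres and width have already been picked from the data; all numeric values are illustrative.

```python
# Delta-rule training of RBF output weights, with fixed centres/width.
import math

def rbf(x, centre, width):
    return math.exp(-((x - centre) ** 2) / (2 * width ** 2))

centres, width = [0.0, 0.5, 1.0], 0.3        # chosen by examining the data
weights, rate = [0.0, 0.0, 0.0], 0.2
data = [(0.1, 0.0), (0.5, 1.0), (0.9, 0.0)]  # 1-D toy training set

for epoch in range(100):
    for x, desired in data:
        hidden = [rbf(x, c, width) for c in centres]
        y = sum(w * h for w, h in zip(weights, hidden))
        # Delta rule applied to the (linear) output layer only.
        for j in range(len(weights)):
            weights[j] += rate * (desired - y) * hidden[j]

print(weights)
```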
UNSUPERVISED LEARNING
The simple perceptron, MLP and RBF networks are supervised networks.
In unsupervised mode, by contrast, the network adapts purely in response to its inputs.
Such networks can learn to pick out structure in their input.
One of the most popular models in the unsupervised framework is the self-organizing map (SOM).
COMPETITIVE LEARNING
Competitive learning, or winner-takes-all, may be regarded as the basis of a number of unsupervised learning strategies.
A competitive learning network consists of k units with weight vectors of the same dimension as the input data. The unit whose weight vector is closest to the input vector is termed the winner of the selection process. The learning strategy is generally implemented by gradually reducing the difference between the winner's weight vector and the input vector.
The actual amount of reduction at each learning step is guided by means of the so-called learning rate.
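The following minimal Python sketch (an illustration, not from the slides) performs one winner-takes-all step: it selects the closest unit and moves its weight vector a fraction (the learning rate) of the way towards the input. The number of units and all values are assumptions.

```python
# One winner-takes-all learning step over k competitive units.
def competitive_step(units, x, rate=0.1):
    """units: list of weight vectors; x: input vector (same dimension)."""
    dists = [sum((w - xi) ** 2 for w, xi in zip(u, x)) for u in units]
    winner = dists.index(min(dists))
    # Reduce the difference between the winner's weights and the input.
    units[winner] = [w + rate * (xi - w) for w, xi in zip(units[winner], x)]
    return winner

units = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]]  # k = 3 illustrative units
print(competitive_step(units, [0.9, 0.8]), units)
```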
Supervised vs. Unsupervised Learning
Supervised learning (classification)
◦ Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations.
◦ New data is classified based on the training set.
Unsupervised learning (clustering)
◦ The class labels of the training data are unknown.
◦ Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data.
KOHONEN’S SOM
The self-organizing map (SOM) is a neural network model developed by Teuvo Kohonen during 1979-82. SOM is one of the most widely used unsupervised NN models and employs competitive learning steps.
It consists of input units, each of which is fully connected to a set of output units. The input units, after receiving an input pattern X, propagate it unchanged onto the output units. Each output unit k is assigned a weight vector wk.
During a learning step, the unit c with the highest activity level with respect to a randomly selected input pattern X is adapted in such a way that it exhibits an even higher activity level at a future presentation of X. In addition, a set of units around the winner is tuned towards the currently presented input pattern, enabling a spatial arrangement of the input patterns such that similar inputs are mapped onto regions close to each other in the grid of output units.
Thus, the training process of the SOM results in a topological organization of the input patterns. It is, in some sense, related to k-means clustering.
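As an illustration (not from the slides), the Python sketch below trains a tiny SOM whose output units lie on a one-dimensional grid: the winner and its grid neighbours are pulled towards each presented input. Grid size, learning rate and neighbourhood radius are arbitrary choices.

```python
# Minimal SOM training: winner plus grid neighbours move towards input.
import random

random.seed(2)
grid = [[random.random(), random.random()] for _ in range(5)]  # 5 units, 2-D weights

def train_som(grid, data, epochs=50, rate=0.3, radius=1):
    for _ in range(epochs):
        x = random.choice(data)                      # randomly selected pattern
        dists = [sum((w - xi) ** 2 for w, xi in zip(u, x)) for u in grid]
        c = dists.index(min(dists))                  # winning unit
        for k in range(len(grid)):
            if abs(k - c) <= radius:                 # neighbourhood on the grid
                grid[k] = [w + rate * (xi - w) for w, xi in zip(grid[k], x)]

data = [[0.1, 0.1], [0.15, 0.2], [0.9, 0.8], [0.85, 0.9]]  # two clusters
train_som(grid, data)
print(grid)  # nearby grid units end up near similar inputs
```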
Self-Organizing Maps (Kohonen Maps)
Topology-conserving mapping can be achieved by SOMs:
• Two layers: an input layer and an output (map) layer.
• The input and output layers are completely connected.
• Output neurons are interconnected within a defined neighborhood.
• A topology (neighborhood relation) is defined on the output layer.
APPLICATIONS OF NEURAL NETWORKS
Neural networks are used in a very large number of applications. For example, neural networks are being used in:
Investment analysis: to predict the movement of stocks, currencies, etc., from previous data. Here they are replacing earlier, simpler linear models.
Monitoring: networks have been used to monitor the state of aircraft engines. By monitoring vibration levels and sound, an early warning of engine problems can be given.
Marketing: neural networks have been used to improve marketing mailshots. One technique is to run a test mailshot and look at the pattern of returns from it. The idea is to find a predictive mapping from the data known about clients to how they have responded. This mapping is then used to direct further mailshots.
Additional Points: A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space.
A self-organizing map consists of components called nodes or neurons. Associated with each node is a weight vector of the same dimension as the input data vectors, and a position in the map space. The usual arrangement of nodes is a regular two-dimensional spacing in a hexagonal or rectangular grid. The self-organizing map thus describes a mapping from a higher-dimensional input space to a lower-dimensional map space.
The Kohonen Self-Organizing Feature Map (SOFM or SOM) is a clustering and data visualization technique based on a neural network viewpoint. As with other types of centroid-based clustering, the goal of SOM is to find a set of centroids (reference or codebook vectors in SOM terminology) and to assign each object in the data set to the centroid that provides the best approximation of that object.
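The centroid-assignment view above can be made concrete with a short Python sketch (an illustration, not from the slides): each data object is assigned to the codebook vector that best approximates it. The codebook and objects below are arbitrary example values.

```python
# Assign each object to its nearest codebook vector (centroid).
def assign(objects, codebook):
    """Return, for each object, the index of its closest codebook vector."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(range(len(codebook)), key=lambda k: sq_dist(o, codebook[k]))
            for o in objects]

codebook = [[0.0, 0.0], [1.0, 1.0]]            # illustrative centroids
objects = [[0.1, 0.2], [0.9, 0.8], [0.4, 0.7]]
print(assign(objects, codebook))                # -> [0, 1, 1]
```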