3. Originally conceived as computational models
of the way in which the human brain works
Learn relationships between sets of variables; generalization
Graceful degradation- knowledge is learnt as a set of weights; if any of these weights is removed, the network can still function, but with reduced performance
4. Consists of interconnected units, arranged in layers
Perceptrons- 1 input, 1 output layer; model linear relationships between variables
Multi-layer perceptron- a number of hidden layers of units; the 2 sets of weights increase the power of the network; models non-linear relationships
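A minimal NumPy sketch of the two architectures above (the layer sizes and variable names are illustrative, not from the slides): a perceptron applies one set of weights directly to the input, while a multi-layer perceptron passes the input through a hidden layer, so two weight matrices and a non-linearity are involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 3 inputs, 4 hidden units, 1 output.
W1 = rng.normal(size=(3, 4))   # first set of weights (input -> hidden)
W2 = rng.normal(size=(4, 1))   # second set of weights (hidden -> output)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w):
    # Single-layer perceptron: one weighted combination of the inputs.
    return sigmoid(x @ w)

def mlp(x):
    # Multi-layer perceptron: the hidden layer's non-linearity is what
    # lets the network capture non-linear relationships.
    h = sigmoid(x @ W1)        # hidden-layer activations
    return sigmoid(h @ W2)     # output layer

x = np.array([0.5, -1.0, 2.0])
print(mlp(x))
```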
5. Training- repeatedly presents examples to the network
Transducers- units convert one form of input into another form of output, via transfer functions or activation functions
Supervised learning- once trained, the network predicts the output for new data; e.g. tree identification
Unsupervised learning- the required response is not known, so training is based solely on the input data; such networks have no input, output and hidden layer distinctions; used for clustering tasks
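The transfer (activation) functions mentioned above are simple mappings from a unit's weighted input sum to its output. A few common examples, sketched in NumPy:

```python
import numpy as np

# Common transfer (activation) functions that turn a unit's weighted
# input sum into its output signal.
def step(z):      # classic threshold unit: fires (1) once z reaches 0
    return np.where(z >= 0, 1.0, 0.0)

def sigmoid(z):   # smooth, output bounded in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):      # smooth, output bounded in (-1, 1)
    return np.tanh(z)

z = np.linspace(-2.0, 2.0, 5)
print(step(z), sigmoid(0.0), tanh(0.0))
```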
6. Supervised learning
Feed-forward- refers to the direction of data flow through the network;
backpropagation- the errors incurred are propagated back through the network
Used for classification, simulation
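The feed-forward/backpropagation pair above can be sketched as a toy 2-4-1 network trained on XOR (a non-linear problem a single perceptron cannot solve); all sizes, seeds and names here are illustrative assumptions, not from the slides:

```python
import numpy as np

# Toy sketch: 2 inputs, 4 hidden units, 1 output, trained on XOR.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)        # feed-forward: input -> hidden
    return h, sigmoid(h @ W2 + b2)  # hidden -> output

initial_loss = np.mean((forward(X)[1] - y) ** 2)

lr = 1.0
for epoch in range(5000):
    h, out = forward(X)
    err = out - y
    # Backpropagation: push the output error back through the network
    # to obtain a gradient for each set of weights.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = np.mean((forward(X)[1] - y) ** 2)
print(initial_loss, final_loss)   # the error shrinks as training proceeds
```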
8. Kohonen Self-Organizing Map
Unsupervised learning
All the input nodes are connected to every map node; no distinct output layer
Used for clustering, pattern recognition
10. The response of the network is compared to the desired response.
The discrepancy between the two is calculated, and the network makes changes to its internal weights to reduce the error the next time the input is presented. This is repeated for all the input data; one complete pass constitutes one epoch.
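The compare-and-adjust cycle above can be sketched with a single threshold unit trained by the perceptron (delta) rule; one pass over all four examples is one epoch. The task (AND) and all constants are illustrative:

```python
import numpy as np

# Delta-rule training of one threshold unit on AND; one pass over
# the four examples below is one epoch.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 0.0, 0.0, 1.0])

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(50):
    for xi, ti in zip(X, y):
        response = 1.0 if xi @ w + b >= 0.5 else 0.0  # network's response
        error = ti - response                          # discrepancy vs desired
        w += lr * error * xi                           # adjust internal weights
        b += lr * error                                # ... to reduce the error

preds = [1.0 if xi @ w + b >= 0.5 else 0.0 for xi in X]
print(preds)  # [0.0, 0.0, 0.0, 1.0]
```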
16. All the input nodes are interconnected with all the output nodes, without any hidden layer
Winning node- the output node with a higher activation than all other output nodes
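A minimal winner-take-all sketch of this idea (sizes, data and update rule are illustrative; it selects the winner by smallest distance, which plays the role of highest activation, and updates only the winner rather than a full SOM neighbourhood):

```python
import numpy as np

rng = np.random.default_rng(2)

# 4 output (map) nodes, each fully connected to a 2-D input;
# no hidden layer, so each node keeps one weight vector.
weights = rng.random((4, 2))

def winning_node(x):
    # The winner is the node whose weight vector best matches the
    # input: smallest Euclidean distance here, which corresponds to
    # the highest activation among the output nodes.
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def train(data, epochs=20, lr=0.5):
    for _ in range(epochs):
        for x in data:
            win = winning_node(x)
            # Pull the winner toward the input (a full SOM would also
            # update the winner's neighbours on the map).
            weights[win] += lr * (x - weights[win])

data = np.array([[0.1, 0.1], [0.9, 0.9], [0.1, 0.9], [0.9, 0.1]])
train(data)
print([winning_node(x) for x in data])
```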
18. Coding region recognition and gene
identification
Recognition of transcription and translational
signals
Sequence feature analysis and classification
Protein structure prediction
Prediction of signal peptides
Biometrics
Data mining
Editor's Notes
Generalization- once trained, the network can be shown new examples and asked to predict the outcome of the new data, based on the examples it has previously learnt