The document provides an overview of neural networks and the backpropagation algorithm. It defines the basics of neural networks including neurons, layers, weights, and biases. It explains that multi-layer networks are needed when problems are not linearly separable. The backpropagation algorithm is described as adjusting weights to minimize error between the network's classification and actual classifications for each training sample in an iterative process. Weights are updated based on calculating error signals that propagate backwards through the network.
neural.ppt
1. CSE 634
Data Mining Techniques
Presentation on Neural Network
Jalal Mahmud ( 105241140)
Hyung-Yeon, Gu(104985928)
Course Teacher : Prof. Anita Wasilewska
State University of New York at Stony Brook
3. Overview
Basics of Neural Network
Advanced Features of Neural Network
Applications I-II
Summary
4. Basics of Neural Network
What is a Neural Network
Neural Network Classifier
Data Normalization
Neuron and bias of a neuron
Single Layer Feed Forward
Limitation
Multi Layer Feed Forward
Back propagation
5. Neural Networks
What is a Neural Network?
Similarity with biological network
The fundamental processing element of a neural network
is the neuron, which:
1. Receives inputs from other sources
2. Combines them in some way
3. Performs a generally nonlinear operation on the result
4. Outputs the final result
•Biologically motivated approach to
machine learning
6. Similarity with Biological Network
• Fundamental processing element of a
neural network is a neuron
• A human brain has 100 billion neurons
• An ant brain has 250,000 neurons
8. Neural Network
Neural Network is a set of connected
INPUT/OUTPUT UNITS, where each
connection has a WEIGHT associated with it.
Neural Network learning is also called
CONNECTIONIST learning due to the connections
between units.
It is a case of SUPERVISED, INDUCTIVE or
CLASSIFICATION learning.
9. Neural Network
Neural Network learns by adjusting the
weights so as to be able to correctly classify
the training data and hence, after testing
phase, to classify unknown data.
Neural Network needs long time for training.
Neural Network has a high tolerance to noisy
and incomplete data
10. Neural Network Classifier
Input: Classification data
It contains classification attribute
Data is divided, as in any classification problem.
[Training data and Testing data]
All data must be normalized,
(i.e. all values of attributes in the database are changed to
contain values in the interval [0,1] or [-1,1])
A neural network can work with data in the range of (0,1) or (-1,1)
Two basic normalization techniques
[1] Max-Min normalization
[2] Decimal Scaling normalization
12. Example of Max-Min Normalization
Max-Min normalization formula:

v' = (v − min_A) / (max_A − min_A) × (new_max_A − new_min_A) + new_min_A
Example: We want to normalize data to the range of the interval [0,1].
We put: new_max_A = 1, new_min_A = 0.
Say, max_A was 100 and min_A was 20 (that means the maximum and minimum
values for the attribute).
Now, if v = 40 (if for this particular pattern the attribute value is 40), v'
will be calculated as: v' = (40 − 20) × (1 − 0) / (100 − 20) + 0
=> v' = 20 × 1/80
=> v' = 0.25
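The slide's Max-Min formula can be sketched directly in Python; the function name and defaults are illustrative, and the example reproduces the slide's numbers, where (40 − 20) × 1/80 works out to 0.25:

```python
def max_min_normalize(v, min_a, max_a, new_min=0.0, new_max=1.0):
    """Rescale a raw attribute value v from [min_a, max_a] to [new_min, new_max]."""
    return (v - min_a) * (new_max - new_min) / (max_a - min_a) + new_min

# The slide's example: min_A = 20, max_A = 100, v = 40 maps to 0.25.
print(max_min_normalize(40, 20, 100))  # 0.25
```

Note that the attribute's minimum maps to new_min and its maximum to new_max, so every normalized value lands inside the target interval.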
13. Decimal Scaling Normalization
[2] Decimal Scaling Normalization
Normalization by decimal scaling normalizes by moving the
decimal point of values of attribute A:

v' = v / 10^j

Here j is the smallest integer such that max |v'| < 1.
Example:
A's values range from -986 to 917. Max |v| = 986, so j = 3.
v = -986 normalizes to v' = -986/1000 = -0.986
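Decimal scaling can be sketched the same way; this hypothetical helper searches for the smallest j such that max |v'| < 1 and then divides every value by 10^j:

```python
def decimal_scale(values):
    """Normalize by moving the decimal point: v' = v / 10^j,
    where j is the smallest integer such that max |v'| < 1."""
    j = 0
    max_abs = max(abs(v) for v in values)
    while max_abs / (10 ** j) >= 1:
        j += 1
    return [v / (10 ** j) for v in values], j

# The slide's example: values range from -986 to 917, so j = 3.
scaled, j = decimal_scale([-986, 917])
print(j)  # 3
```

With j = 3, -986 becomes -0.986 and 917 becomes 0.917, matching the slide.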
14. One Neuron as a Network
Here x1 and x2 are normalized attribute values of the data.
y is the output of the neuron, i.e. the class label.
The values x1 and x2, multiplied by the weight values w1 and w2, are the input
to the neuron:
the value of x1 is multiplied by weight w1 and the value of x2 is multiplied by
weight w2.
Given that
• w1 = 0.5 and w2 = 0.5
• Say the value of x1 is 0.3 and the value of x2 is 0.8,
• So, the weighted sum is:
• sum = w1 × x1 + w2 × x2 = 0.5 × 0.3 + 0.5 × 0.8 = 0.55
15. One Neuron as a Network
• The neuron receives the weighted sum as input and calculates
the output as a function of input as follows :
• y = f(x) , where f(x) is defined as
• f(x) = 0 { when x< 0.5 }
• f(x) = 1 { when x >= 0.5 }
• For our example, x (the weighted sum) is 0.55, so y = 1;
• that means the corresponding input attribute values are classified in
class 1.
• If for another set of input values x = 0.45, then f(x) = 0,
• so we would conclude that those input values are classified to
class 0.
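This threshold neuron is small enough to sketch directly; the function name and the default threshold of 0.5 follow the slide's f(x):

```python
def neuron_classify(x, w, threshold=0.5):
    """One neuron as a network: weighted sum of inputs, then a hard
    threshold f(x) = 1 if x >= threshold, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= threshold else 0

# Slide example: w = (0.5, 0.5), x = (0.3, 0.8) -> weighted sum 0.55 -> class 1
print(neuron_classify([0.3, 0.8], [0.5, 0.5]))  # 1
```

An input pair whose weighted sum falls below 0.5 (say x = (0.3, 0.6), giving 0.45) is assigned class 0, as in the slide's second case.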
16. Bias of a Neuron
We need the bias value to be added to the weighted
sum ∑wixi so that we can shift the decision boundary away from the origin:
v = ∑wixi + b, here b is the bias

(The slide's figure shows the decision lines x1 − x2 = −1, x1 − x2 = 0 and
x1 − x2 = 1 in the (x1, x2) plane.)
17. Bias as extra input
The bias can be treated as one more input: a fixed input x0 = +1 whose
weight is w0 = b. The input attribute values x1, x2, …, xm with weights
w1, w2, …, wm pass through the summing function and then the activation
function to produce v and the output class y:

v = ∑_{j=0}^{m} w_j x_j, with x0 = +1 and w0 = b
18. Neuron with Activation
The neuron is the basic information processing unit of a
NN. It consists of:
1. A set of links, describing the neuron inputs, with
weights w1, w2, …, wm
2. An adder function (linear combiner) for computing the
weighted sum of the inputs (real numbers):
u = ∑_{j=1}^{m} w_j x_j
3. An activation function φ for limiting the amplitude of the
neuron output:
y = φ(u + b)
19. Why We Need Multi Layer?
Linearly separable: a single line can split the classes
(the slide's first plots).
Linearly inseparable: no single line can, as in the slide's last plot.
Solution? A multilayer network.
20. A Multilayer Feed-Forward Neural Network

(Diagram: an input record x_i feeds the input nodes; weights w_ij connect
them to the hidden nodes with outputs O_j; weights w_jk connect those to the
output nodes with outputs O_k, which give the output class. The network is
fully connected.)
21. Neural Network Learning
The inputs are fed simultaneously into the
input layer.
The weighted outputs of these units are fed
into hidden layer.
The weighted outputs of the last hidden layer
are inputs to units making up the output layer.
22. A Multilayer Feed Forward Network
The units in the hidden layers and output layer are
sometimes referred to as neurodes, due to their
symbolic biological basis, or as output units.
A network containing two hidden layers is called a
three-layer neural network, and so on.
The network is feed-forward in that none of the
weights cycles back to an input unit or to an output
unit of a previous layer.
23. A Multilayered Feed – Forward Network
INPUT: records without class attribute with
normalized attributes values.
INPUT VECTOR: X = { x1, x2, …. xn}
where n is the number of (non class) attributes.
INPUT LAYER – there are as many nodes as non-class
attributes, i.e. as the length of the input vector.
HIDDEN LAYER – the number of nodes in the
hidden layer and the number of hidden layers
depends on implementation.
24. A Multilayered Feed–Forward
Network
OUTPUT LAYER – corresponds to the
class attribute.
There are as many nodes as classes
(values of the class attribute):
O_k, k = 1, 2, …, #classes
• The network is fully connected, i.e. each unit provides input
to each unit in the next forward layer.
25. Classification by Back propagation
Back Propagation learns by iteratively
processing a set of training data (samples).
For each sample, weights are modified to
minimize the error between network’s
classification and actual classification.
26. Steps in Back propagation
Algorithm
STEP ONE: initialize the weights and biases.
The weights in the network are initialized to
random numbers from the interval [-1,1].
Each unit has a BIAS associated with it
The biases are similarly initialized to random
numbers from the interval [-1,1].
STEP TWO: feed the training sample.
27. Steps in Back propagation Algorithm
( cont..)
STEP THREE: Propagate the inputs forward;
we compute the net input and output of each
unit in the hidden and output layers.
STEP FOUR: back propagate the error.
STEP FIVE: update weights and biases to
reflect the propagated errors.
STEP SIX: terminating conditions.
28. Propagation through Hidden Layer (One Node)
The inputs to unit j are outputs from the previous layer. These are
multiplied by their corresponding weights in order to form a
weighted sum, which is added to the bias associated with unit j.
A nonlinear activation function f is applied to the net input.

(Diagram: input vector x = (x0, x1, …, xn), weight vector
w = (w0j, w1j, …, wnj), bias of unit j, the weighted sum, the activation
function f, and the output y.)
29. Propagate the inputs forward
For unit j in the input layer, its output is
equal to its input, that is,
O_j = I_j for input unit j.
• The net input to each unit in the hidden and output
layers is computed as follows.
• Given a unit j in a hidden or output layer, the net input is
I_j = ∑_i w_ij O_i + θ_j
where w_ij is the weight of the connection from unit i in the previous layer to
unit j; O_i is the output of unit i from the previous layer;
θ_j is the bias of the unit.
30. Propagate the inputs forward
Each unit in the hidden and output layers takes its
net input and then applies an activation function.
The function symbolizes the activation of the
neuron represented by the unit. It is also called a
logistic, sigmoid, or squashing function.
Given a net input I_j to unit j, then
O_j = f(I_j),
the output of unit j, is computed as
O_j = 1 / (1 + e^{−I_j})
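The two forward-pass formulas can be sketched as a pair of small helpers (the function names are illustrative); the sample values come from the worked example later in the deck:

```python
import math

def net_input(prev_outputs, weights, bias):
    """I_j = sum_i w_ij * O_i + theta_j (weighted sum plus bias)."""
    return sum(w * o for w, o in zip(weights, prev_outputs)) + bias

def sigmoid(i_j):
    """O_j = 1 / (1 + e^(-I_j)), the logistic squashing function."""
    return 1.0 / (1.0 + math.exp(-i_j))

# Hidden unit with inputs (1, 0, 1), weights (0.2, 0.4, -0.5), bias -0.4:
i4 = net_input([1, 0, 1], [0.2, 0.4, -0.5], -0.4)  # -0.7
print(round(sigmoid(i4), 3))  # 0.332
```

The squashing keeps every unit's output in (0, 1), which is why the input data is normalized to a comparable range beforehand.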
31. Back propagate the error
When reaching the output layer, the error is
computed and propagated backwards.
For a unit k in the output layer the error is
computed by the formula:
Err_k = O_k (1 − O_k)(T_k − O_k)
where
O_k – actual output of unit k (computed by the activation
function O_k = 1 / (1 + e^{−I_k})),
T_k – true output based on the known class label; classification of the
training sample,
O_k(1 − O_k) – the derivative (rate of change) of the activation function.
32. Back propagate the error
The error is propagated backwards by updating
weights and biases to reflect the error of the
network classification.
For a unit j in the hidden layer the error is
computed by the formula:
Err_j = O_j (1 − O_j) ∑_k Err_k w_jk
where w_jk is the weight of the connection from unit j to unit
k in the next higher layer, and Err_k is the error of unit k.
33. Update weights and biases
Weights are updated by the following equations,
where l is a constant between 0.0 and 1.0
reflecting the learning rate; this learning rate is
fixed for the implementation:
Δw_ij = (l) Err_j O_i
w_ij = w_ij + Δw_ij
• Biases are updated by the following equations:
Δθ_j = (l) Err_j
θ_j = θ_j + Δθ_j
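The update rules translate directly; in this sketch the learning rate l = 0.9 used in the usage line is purely illustrative (the surviving slides do not state a value), while Err_6 = 0.1311 and O_4 = 0.332 come from the worked example later in the deck:

```python
def update_weight(w_ij, l, err_j, o_i):
    """w_ij <- w_ij + l * Err_j * O_i (case updating with fixed learning rate l)."""
    return w_ij + l * err_j * o_i

def update_bias(theta_j, l, err_j):
    """theta_j <- theta_j + l * Err_j."""
    return theta_j + l * err_j

# Illustrative step: w46 = -0.3 moves toward reducing the output error.
new_w46 = update_weight(-0.3, 0.9, 0.1311, 0.332)
print(round(new_w46, 3))  # -0.261
```

Because Err_6 is positive (the output was too low for a true label of 1), the update raises w46 toward zero, nudging unit 6's net input upward.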
34. Update weights and biases
We are updating weights and biases after the
presentation of each sample.
This is called case updating.
Epoch --- one iteration through the training set is called an
epoch.
Epoch updating: alternatively, the weight and bias increments could be
accumulated in variables and the weights and biases
updated after all of the samples of the training set have
been presented.
Case updating is more accurate.
35. Terminating Conditions
Training stops when:
• all Δw_ij in the previous epoch are below some
threshold, or
• the percentage of samples misclassified in the previous
epoch is below some threshold, or
• a pre-specified number of epochs has expired.
• In practice, several hundreds of thousands of epochs may
be required before the weights will converge.
36. Backpropagation Formulas

(Diagram: input vector x_i → input nodes → weights w_ij → hidden nodes →
output nodes → output vector.)

Net input:     I_j = ∑_i w_ij O_i + θ_j
Output:        O_j = 1 / (1 + e^{−I_j})
Output error:  Err_k = O_k (1 − O_k)(T_k − O_k)
Hidden error:  Err_j = O_j (1 − O_j) ∑_k Err_k w_jk
Weight update: w_ij = w_ij + (l) Err_j O_i
Bias update:   θ_j = θ_j + (l) Err_j
37. Example of Back propagation
Initial input and weights:

x1 = 1, x2 = 0, x3 = 1
w14 = 0.2, w15 = -0.3, w24 = 0.4, w25 = 0.1,
w34 = -0.5, w35 = 0.2, w46 = -0.3, w56 = -0.2

Initialize weights: random numbers from -1.0 to 1.0.
Input neurons = 3, hidden neurons = 2, output neuron = 1.
38. Example (cont.)
Bias added to hidden + output nodes.
Initialize bias: random values from -1.0 to 1.0.

Bias (random): θ4 = -0.4, θ5 = 0.2, θ6 = 0.1
39. Net Input and Output Calculation

Unit j | Net input I_j                               | Output O_j
4      | 0.2 + 0 − 0.5 − 0.4 = −0.7                  | 1 / (1 + e^{0.7}) = 0.332
5      | −0.3 + 0 + 0.2 + 0.2 = 0.1                  | 1 / (1 + e^{−0.1}) = 0.525
6      | (−0.3)(0.332) − (0.2)(0.525) + 0.1 = −0.105 | 1 / (1 + e^{0.105}) = 0.474
40. Calculation of Error at Each Node
We assume T_6 = 1.

Unit j | Err_j
6      | 0.474 × (1 − 0.474) × (1 − 0.474) = 0.1311
5      | 0.525 × (1 − 0.525) × 0.1311 × (−0.2) = −0.0065
4      | 0.332 × (1 − 0.332) × 0.1311 × (−0.3) = −0.0087
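The whole worked example can be reproduced end to end; this sketch follows the slides' notation (units 4 and 5 hidden, unit 6 output) and computes to full precision before rounding:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Initial inputs, weights and biases from the example slides
x1, x2, x3 = 1, 0, 1
w14, w15, w24, w25 = 0.2, -0.3, 0.4, 0.1
w34, w35, w46, w56 = -0.5, 0.2, -0.3, -0.2
t4, t5, t6 = -0.4, 0.2, 0.1   # biases theta_4, theta_5, theta_6
T6 = 1                        # true class label

# Forward pass
I4 = w14 * x1 + w24 * x2 + w34 * x3 + t4   # -0.7
I5 = w15 * x1 + w25 * x2 + w35 * x3 + t5   #  0.1
O4, O5 = sigmoid(I4), sigmoid(I5)          # ~0.332, ~0.525
I6 = w46 * O4 + w56 * O5 + t6              # ~-0.105
O6 = sigmoid(I6)                           # ~0.474

# Backpropagated errors
Err6 = O6 * (1 - O6) * (T6 - O6)           # ~ 0.131
Err5 = O5 * (1 - O5) * Err6 * w56          # ~ -0.0065
Err4 = O4 * (1 - O4) * Err6 * w46          # ~ -0.0087

print(round(O4, 3), round(O5, 3), round(O6, 3))  # 0.332 0.525 0.474
```

Note that Err5 and Err4 are both negative here: the hidden-to-output weights w56 and w46 are negative, so the sign flips when the output error is pushed back through them.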
42. Advanced Features of Neural
Network
Training with Subsets
Modular Neural Network
Evolution of Neural Network
43. Variants of Neural Networks
Learning
Supervised learning/Classification
• Control
• Function approximation
• Associative memory
Unsupervised learning or Clustering
44. Training with Subsets
Select subsets of data
Build new classifier on subset
Aggregate with previous classifiers
Compare error after adding classifier
Repeat as long as error decreases
45. Training with subsets

(Diagram: the whole dataset is split into subsets 1, 2, 3, …, n that can fit
into memory; a neural network NN 1, NN 2, NN 3, …, NN n is trained on each
subset, and the networks are combined into a single neural network model.)
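The subset-training loop on the two slides above can be sketched as follows. The per-subset learner (a toy threshold rule) and the majority-vote aggregation are illustrative choices, since the slides do not pin down either; the helper names are hypothetical:

```python
def split_into_subsets(data, n_subsets):
    """Split the dataset into n roughly equal subsets that fit in memory."""
    return [data[i::n_subsets] for i in range(n_subsets)]

def train_threshold(subset):
    """Toy per-subset learner: threshold halfway between the class means.
    Assumes every subset contains both classes."""
    pos = [x for x, y in subset if y == 1]
    neg = [x for x, y in subset if y == 0]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: int(x >= t)

def majority_vote(classifiers, x):
    """Aggregate the per-subset models into one classifier."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy dataset: label is 1 when x >= 5
data = [(x, int(x >= 5)) for x in range(10)]
classifiers = [train_threshold(s) for s in split_into_subsets(data, 3)]
print(majority_vote(classifiers, 7), majority_vote(classifiers, 2))  # 1 0
```

In the deck's scheme each per-subset model would itself be a neural network; the toy learner here just keeps the sketch self-contained.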
46. Modular Neural Network
Modular Neural Network
• Made up of a combination of several neural
networks.
The idea is to reduce the load for each neural
network as opposed to trying to solve the
problem on a single neural network.
47. Evolving Network Architectures
Small networks without a hidden layer can’t
solve problems such as XOR, that are not
linearly separable.
•Large networks can easily overfit a problem
to match the training data, limiting their
ability to generalize a problem set.
48. Constructive vs Destructive Algorithms
Constructive algorithms take a minimal
network and build up new layers, nodes, and
connections during training.
Destructive algorithms take a maximal
network and prune unnecessary layers,
nodes, and connections during training.
49. Training Process of the MLP
Training will be continued until the RMS error is
minimized.

(Figure: the error surface plotted against the N-dimensional weight vector W,
showing a global minimum and several local minima.)
50. Faster Convergence
Backprop requires many epochs to converge.
Some ideas to overcome this:
• Stochastic learning
• Update weights after each training example
• Momentum
• Add a fraction of the previous update to the current update
• Faster convergence
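The momentum idea, adding a fraction of the previous update to the current one, can be sketched in the deck's notation as follows; the learning rate l and momentum coefficient mu are illustrative values, not taken from the slides:

```python
def momentum_step(w, grad_term, prev_delta, l=0.1, mu=0.9):
    """One momentum update: current step = l * grad_term + mu * previous step.
    grad_term plays the role of Err_j * O_i in the deck's weight-update rule."""
    delta = l * grad_term + mu * prev_delta
    return w + delta, delta

# Two steps with the same gradient term: the second step is larger,
# because it carries 90% of the first step along with it.
w, d = momentum_step(0.5, 1.0, 0.0)  # delta = 0.1
w, d = momentum_step(w, 1.0, d)      # delta = 0.1 + 0.9 * 0.1 = 0.19
```

When successive gradients point the same way, the accumulated step grows, which is exactly the faster convergence the slide is after; when they alternate, the momentum term damps the oscillation.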
51. Applications-I
Handwritten Digit Recognition
Face recognition
Time series prediction
Process identification
Process control
Optical character recognition
52. Application-II
Forecasting/Market Prediction: finance and banking
Manufacturing: quality control, fault diagnosis
Medicine: analysis of electrocardiogram data, RNA & DNA
sequencing, drug development without animal testing
Control: process, robotics
53. Summary
We presented mainly the following:
Basic building blocks of an Artificial Neural Network.
Construction, working, and limitations of a single layer neural
network.
The back propagation algorithm for multilayer feed-forward NNs.
Some advanced features: training with subsets, quicker
convergence, modular neural networks, evolution of NNs.
Applications of Neural Networks.
54. Remember…
ANNs generally perform better with a larger number of
hidden units.
More hidden units generally produce lower error.
Determining the network topology is difficult.
Choosing a single best learning rate is impossible.
It is difficult to reduce training time by altering the network
topology or learning parameters.
NN(Subset) often produces better results.