This PPT summarizes the entire content in brief. It can be read together with my book on ANN, titled "SOFT COMPUTING", published by Watson Publication, and my class notes.
Basic definitions, terminology, and the working of an ANN are explained. This PPT also shows how an ANN can be implemented in MATLAB, and it covers the feed-forward backpropagation algorithm in detail.
WHAT IS NEURAL NETWORK?
Its computation is based on the interaction of a large number of processing elements, called neurons, inspired by the biological nervous system.
It is a powerful technique for solving real-world problems.
A neural network is composed of a number of nodes, or units, connected by links. Each link has a numeric weight associated with it.
Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights.
Artificial neurons are the constitutive units in an artificial neural network.
WHY USE NEURAL NETWORKS?
It has the ability to learn from experience.
It can deal with incomplete information.
It can produce results for inputs it has not been taught to deal with.
It is used to extract useful patterns from given data, i.e. pattern recognition.
Biological Neurons
Four parts of a typical nerve cell:
• DENDRITES: accept the inputs
• SOMA: processes the inputs
• AXON: turns the processed inputs into outputs
• SYNAPSES: the electrochemical contacts between the neurons
ARTIFICIAL NEURONS MODEL
Inputs to the network are represented by the mathematical symbols x1, …, xn.
Each of these inputs is multiplied by a connection weight w1, …, wn:
sum = w1x1 + … + wnxn
These products are simply summed and fed through the transfer function f( ) to generate the output.
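The weighted-sum-and-transfer computation above can be sketched in a few lines of Python. This is a minimal illustration; the binary step function with threshold 0 is an assumed choice of transfer function, not fixed by the slide:

```python
def neuron_output(inputs, weights, transfer):
    """Weighted sum of inputs (w1*x1 + ... + wn*xn) fed through a transfer function f()."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return transfer(total)

# An assumed transfer function: binary step with threshold 0.
def step(s):
    return 1 if s >= 0 else 0

print(neuron_output([1, 0, 1], [0.5, -0.2, 0.4], step))  # weighted sum = 0.9 -> 1
```

Any other transfer function (identity, sigmoid, …) can be passed in the same way.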
NEURON MODEL
A neuron consists of:
Inputs (synapses): the input signal.
Weights (dendrites): determine the importance of the incoming value.
Output (axon): the output to other neurons or of the NN.
Artificial Neural Networks for NIU session 2016-17
2. Course Objective
To understand, successfully apply, and evaluate Neural Network structures and paradigms for problems in Science, Engineering, and Business.
3. Prerequisites
It is expected that the audience has a flair for understanding algorithms and a basic knowledge of Mathematics, logic gates, and programming.
4. Outline
Introduction
How the human brain learns
Neuron Models
Different types of Neural Networks
Network Layers and Structure
Training a Neural Network
Application of ANN
5. Introduction
Soft Computing techniques such as neural networks, genetic algorithms, and fuzzy logic are among the most powerful tools available for detecting and describing subtle relationships in massive amounts of seemingly unrelated data.
Neural networks can learn and are actually taught instead of being programmed.
The teaching mode can be supervised or unsupervised.
Neural networks learn in the presence of noise.
8. How does the brain work?
• Each neuron receives inputs from other neurons, which use spikes to communicate.
• The effect of each input line on the neuron is controlled by a synaptic weight, which can be positive or negative.
• Synaptic weights adapt so that the whole network learns to perform useful computations: recognizing objects, understanding language, making plans, controlling the body.
• There are about 10^11 neurons, each with about 10^4 weights.
9. How the Human Brain learns
In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites. The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches. At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
10. Modularity and the brain
• Different bits of the cortex do different things.
• Local damage to the brain has specific effects.
• Early brain damage makes functions relocate.
• The cortex gives rapid parallel computation plus flexibility.
• Conventional computers require very fast central processors for long sequential computations.
12. Fundamental concept
• NNs are constructed and implemented to model the human brain.
• They perform various tasks such as pattern matching, classification, optimization, function approximation, vector quantization, and data clustering.
• These tasks are difficult for traditional computers.
13. ANN
• An ANN possesses a large number of processing elements, called nodes/neurons, which operate in parallel.
• Neurons are connected to one another by connection links.
• Each link is associated with weights, which contain information about the input signal.
• Each neuron has an internal state of its own, which is a function of the inputs that the neuron receives: its activation level.
14. Comparison between brain and computer
Speed: the brain takes a few ms; an ANN takes a few ns, with massive parallel processing.
Size and complexity: the brain has 10^11 neurons and 10^15 interconnections; an ANN's size depends on the designer.
Storage capacity: the brain stores information in its interconnections (synapses), with no loss of memory; an ANN uses contiguous memory locations, and loss of memory may sometimes happen.
Tolerance: the brain has fault tolerance; an ANN has no fault tolerance, and information gets disrupted when interconnections are disconnected.
Control mechanism: in the brain it is complicated and involves chemicals in the biological neuron; it is simpler in an ANN.
15. Types of Problems ANN can handle
Mathematical Modeling (Function Approximation)
Classification
Clustering
Forecasting
Vector Quantization
Pattern Association
Control
Optimization
16. A Neuron Model
When a neuron receives excitatory input that is sufficiently large compared with its inhibitory input, it sends a spike of electrical activity down its axon. Learning occurs by changing the effectiveness of the synapses so that the influence of one neuron on another changes.
We construct these neural networks by first trying to deduce the essential features of neurons and their interconnections, and then typically programming a computer to simulate these features.
17. A Simple Neuron
An artificial neuron is a device with many inputs and one output.
The neuron has two modes of operation: the training mode and the using mode.
18. Important terminologies of ANNs
• Weights
• Bias
• Threshold
• Learning rate
• Momentum factor
• Vigilance parameter
• Notations used in ANN
19. Weights
• Each neuron is connected to every other neuron by means of directed links.
• Links are associated with weights.
• Weights contain information about the input signal and are represented as a matrix.
• The weight matrix is also called the connection matrix.
21. Weights contd.
• wij is the weight from processing element "i" (source node) to processing element "j" (destination node).
[Figure: inputs X1, …, Xi, …, Xn feed neuron Yj through weights w1j, …, wij, …, wnj, together with bias bj.]
22. Activation Functions
• Used to calculate the output response of a neuron.
• The sum of the weighted input signals is passed through an activation function to obtain the response.
• Activation functions can be linear or non-linear.
• Already dealt with: the identity function, the single/binary step function, and the discrete/continuous sigmoidal function.
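The three functions listed above can be sketched directly; the step threshold of 0 is an assumed default, not fixed by the slide:

```python
import math

def identity(net):
    return net                       # linear: output equals the net input

def binary_step(net, theta=0.0):
    return 1 if net >= theta else 0  # hard limiter at threshold theta

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))  # continuous, bounded in (0, 1)

for f in (identity, binary_step, sigmoid):
    print(f.__name__, f(0.5))
```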
23. Bias
• Bias is like another weight. It is included by adding a component x0 = 1 to the input vector X.
• X = (1, X1, X2, …, Xi, …, Xn)
• Bias is of two types:
– Positive bias: increases the net input
– Negative bias: decreases the net input
24. Why is Bias required?
The relationship between input and output is given by the equation of a straight line, y = mx + c; the bias plays the role of the intercept c.
[Figure: input X passes through a node with bias c to give output y = mx + c.]
25. Threshold
• A set value based upon which the final output of the network is calculated.
• Used in the activation function.
• The activation function using a threshold θ can be defined as: output 1 if the net input is greater than or equal to θ, and 0 otherwise.
26. Learning rate
• Denoted by α.
• Used to control the amount of weight adjustment at each step of training.
• A learning rate ranging from 0 to 1 determines the rate of learning at each time step.
27. Other terminologies
• Momentum factor: added to the weight-update process to aid convergence.
• Vigilance parameter: denoted by ρ; used to control the degree of similarity required for patterns to be assigned to the same cluster.
28. The McCulloch-Pitts model
Neurons work by processing information. They receive and provide information in the form of spikes.
[Figure: inputs x1, x2, x3, …, xn-1, xn with weights w1, w2, w3, …, wn feed a single neuron whose output is y.]
32. Features of the McCulloch-Pitts model
• Allows binary (0, 1) states only.
• Operates under a discrete-time assumption.
• Weights and the neurons' thresholds are fixed in the model, and there is no interaction among network neurons.
• Just a primitive model.
33. Properties of the McCulloch-Pitts model
• Input is 0 or 1.
• Weights are -1, 0, or +1.
• The threshold is an integer.
• Output is 0 or 1.
• Output is 1 if the weighted sum of the inputs meets or exceeds the threshold; otherwise the output is 0.
Represent the NOT gate with this model, using a signal flow graph and a flowchart.
NOT gate: input x with weight w = -1 and threshold L = 0.
Truth table:
x | y
0 | 1
1 | 0
34. McCulloch-Pitts model: OR gate and AND gate
OR gate: inputs x, y with weights wx = wy = 1 and threshold L >= 1 produce output z.
Truth table:
x y | z
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
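The gates above can be reproduced with a McCulloch-Pitts neuron that fires when the weighted input sum meets its threshold. The NOT and OR parameters come from the slides; the AND threshold of 2 with unit weights is the standard choice, assumed here:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts neuron: output 1 iff sum(w*x) >= threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def NOT(x):    return mp_neuron([x], [-1], 0)       # w = -1, L = 0 (as on the slide)
def OR(x, y):  return mp_neuron([x, y], [1, 1], 1)  # fires when x + y >= 1
def AND(x, y): return mp_neuron([x, y], [1, 1], 2)  # assumed: fires only when x + y >= 2

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, OR(x, y), AND(x, y))
print(NOT(0), NOT(1))  # 1 0
```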
37. Advantages and disadvantages of the McCulloch-Pitts model
Advantages:
• Simplistic
• Substantial computing power
Disadvantages:
• Weights and thresholds are fixed
• Not very flexible
38. Quiz
Which of the following tasks are neural networks good at?
• Recognizing fragments of words in a pre-processed sound wave.
• Recognizing badly written characters.
• Storing lists of names and birth dates.
• Logical reasoning.
Neural networks are good at finding statistical regularities that allow them to recognize patterns. They are not good at flawlessly applying symbolic rules or storing exact numbers.
40. Perceptron Learning Rule
• The learning signal is the difference between the desired and the actual response of the neuron: the weight update is Δw = α (t − y) x.
• Learning is supervised.
41. General symbol of a neuron, consisting of a processing node and synaptic connections.
42. Neuron Modeling for ANN
• f is referred to as the activation function; its domain is the set of activation values net.
• net is the scalar product of the weight and input vectors: net = w1x1 + … + wnxn.
• The neuron, as a processing node, performs the operation of summation of its weighted inputs.
43. Sigmoid neurons
• These give a real-valued output that is a smooth and bounded function of their total input.
– Typically they use the logistic function.
– They have nice derivatives, which make learning easy.
[Figure: logistic curve rising from 0 to 1, passing through 0.5 at input 0.]
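The "nice derivative" of the logistic function is σ'(net) = σ(net)(1 − σ(net)), which is why backpropagation through sigmoid units is cheap. A quick numerical check (the finite-difference step 1e-6 is an arbitrary choice):

```python
import math

def logistic(net):
    return 1.0 / (1.0 + math.exp(-net))

def logistic_deriv(net):
    s = logistic(net)
    return s * (1.0 - s)  # sigma'(net) = sigma(net) * (1 - sigma(net))

# Compare the analytic derivative with a central finite difference at net = 0.3.
h = 1e-6
numeric = (logistic(0.3 + h) - logistic(0.3 - h)) / (2 * h)
print(abs(numeric - logistic_deriv(0.3)) < 1e-6)  # True
```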
44. Activation function
• Bipolar binary and unipolar binary are called hard-limiting activation functions, used in the discrete neuron model.
• Unipolar continuous and bipolar continuous are called soft-limiting activation functions, also known as sigmoidal characteristics.
50. Quiz
Suppose we have a 2D input x = (0.5, -0.5) connected to a neuron with weights w = (2, -1) and bias b = 0.5. Furthermore, the target for x is t = 0. In this case we use a binary threshold neuron for the output, so that
y = 1 if xTw + b >= 0, and 0 otherwise.
What will be the weights and bias after 1 iteration of the perceptron learning algorithm?
w = (1.5, -0.5), b = -1.5
w = (1.5, -0.5), b = -0.5
w = (2.5, -1.5), b = 0.5
w = (-1.5, 0.5), b = 1.5
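The quiz above can be checked in a few lines. With a learning rate of 1 (assumed), the perceptron rule w ← w + (t − y)·x, b ← b + (t − y) gives w = (1.5, -0.5), b = -0.5, since the neuron initially outputs y = 1 while the target is t = 0:

```python
def threshold_unit(x, w, b):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def perceptron_step(x, w, b, t, lr=1.0):
    """One perceptron update: w <- w + lr*(t - y)*x, b <- b + lr*(t - y)."""
    y = threshold_unit(x, w, b)
    w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
    b = b + lr * (t - y)
    return w, b

w, b = perceptron_step(x=[0.5, -0.5], w=[2.0, -1.0], b=0.5, t=0)
print(w, b)  # [1.5, -0.5] -0.5
```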
51. Basic models of ANN
The basic models of ANN are characterized by three entities: interconnections, learning rules, and activation functions.
54. Summary of the simple networks
• Single-layer nets have limited representational power (the linear separability problem).
• Error-driven training seems a good way to train a net.
• Multi-layer nets (or nets with non-linear hidden units) may overcome the linear inseparability problem; learning methods for such nets are needed.
• Threshold/step output functions hinder the effort to develop learning methods for multi-layered nets.
55. Training / Learning
Learning can take one of the following forms:
• Supervised learning
• Unsupervised learning
• Reinforced learning
The patterns given to the classifier may be based on:
• Parametric estimation
• Non-parametric estimation
56. Machine Learning in ANNs
Supervised Learning: It involves a teacher that is more knowledgeable than the ANN itself. For example, the teacher feeds some example data about which the teacher already knows the answers.
57. Machine Learning in ANNs
Unsupervised Learning: It is required when there is no example data set with known answers, for example, searching for a hidden pattern. In this case, clustering, i.e. dividing a set of elements into groups according to some unknown pattern, is carried out based on the existing data sets.
58. Machine Learning in ANNs
Reinforcement Learning: This strategy is built on observation. The ANN makes a decision by observing its environment. If the observation is negative, the network adjusts its weights so as to make a different, required decision the next time.
59. Unsupervised Learning: why?
• Collecting and labeling a large set of sample patterns can be costly.
• Train with large amounts of unlabeled data, and only then use supervision to label the groupings found.
• In dynamic systems, the samples can change slowly.
• To find features that will then be useful for categorization.
• To provide a form of data-dependent smart processing or smart feature extraction.
• To perform exploratory data analysis: to find the structure of the data and form proper classes for supervised analysis.
60. Measure of Dissimilarity
Define a metric or distance function d on the vector space λ as a real-valued function on the Cartesian product λ × λ such that:
• Positive definiteness: 0 <= d(x, y) < ∞ for x, y ∈ λ, and d(x, y) = 0 if and only if x = y.
• Symmetry: d(x, y) = d(y, x) for x, y ∈ λ.
• Triangle inequality: d(x, y) <= d(x, z) + d(z, y) for x, y, z ∈ λ.
• Translation invariance of the distance function: d(x + z, y + z) = d(x, y).
61. Error Computation
Minkowski metric or Lk norm
Manhattan Distance or L1 norm
Euclidean Distance or L2 norm
Ln norm
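As a sketch, the norms above can be computed in a few lines of Python (the function names are illustrative, not from the slides):

```python
def minkowski(x, y, k):
    """L_k (Minkowski) distance between two equal-length vectors."""
    return sum(abs(a - b) ** k for a, b in zip(x, y)) ** (1.0 / k)

def manhattan(x, y):
    """L1 norm: sum of absolute coordinate differences."""
    return minkowski(x, y, 1)

def euclidean(x, y):
    """L2 norm: the usual straight-line distance."""
    return minkowski(x, y, 2)

print(manhattan([0.0, 0.0], [3.0, 4.0]))  # 7.0
print(euclidean([0.0, 0.0], [3.0, 4.0]))  # 5.0
```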
62. Neural Network Applications
Neural networks have performed
successfully where other methods have
not: predicting system behavior, and
recognizing and matching complicated,
vague, or incomplete data patterns.
ANNs are applied to pattern recognition,
interpretation, prediction, diagnosis,
planning, monitoring, debugging, repair,
instruction, and control:
Biomedical Signal Processing
Biometric Identification
Pattern Recognition
System Reliability
Business
Target Tracking
63. Pattern Recognition System
Input → Sensing → Segmentation → Feature Extraction →
Classification (missing features & context) →
Post-processing (costs/errors) → Output (decision)
65. Feed-forward neural networks
• These are the commonest type of neural
network in practical applications.
– The first layer is the input and the last layer
is the output.
– If there is more than one hidden layer, we
call them “deep” neural networks.
• They compute a series of transformations that
change the similarities between cases.
– The activities of the neurons in each layer
are a non-linear function of the activities in
the layer below.
[Figure: layered network with input units at the bottom, hidden units in the middle, and output units at the top]
66. Feedforward Network
• Its output and input vectors are
o = (o1, …, om) and x = (x1, …, xn) respectively.
• Weight wij connects the i'th neuron with the
j'th input. The activation rule of the i'th neuron is
oi = f(neti), where neti = Σj wij xj
EXAMPLE
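A minimal Python sketch of this activation rule, assuming a sigmoid activation function (the function names are illustrative, not from the slides):

```python
import math

def sigmoid(net):
    """A common choice of continuous activation function f."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(weights, inputs):
    """o_i = f(net_i), where net_i = sum_j w_ij * x_j."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(net)

def forward(layer_weights, inputs):
    """One feed-forward pass: each layer's outputs feed the next layer."""
    activations = inputs
    for W in layer_weights:  # W holds one weight row per neuron in the layer
        activations = [neuron_output(row, activations) for row in W]
    return activations

# One layer, one neuron, zero weights: net = 0, so output = sigmoid(0) = 0.5
print(forward([[[0.0, 0.0]]], [1.0, 2.0]))  # [0.5]
```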
68. Feedback network
When outputs are directed back as
inputs to nodes in the same or a
preceding layer, the result is a
feedback network
69. Lateral feedback
If the output of a processing element is directed back
as input to processing elements in the same layer, it is called
lateral feedback
70. Recurrent networks
• These have directed cycles in their connection
graph.
– That means you can sometimes get back to
where you started by following the arrows.
• They can have complicated dynamics and this
can make them very difficult to train.
– There is a lot of interest at present in finding
efficient ways of training recurrent nets.
• They are more biologically realistic.
Recurrent nets with
multiple hidden layers
are just a special case
that has some of the
hidden→hidden
connections missing.
71. Recurrent neural networks for modeling sequences
• Recurrent neural networks are a very natural
way to model sequential data:
– They are equivalent to very deep nets with
one hidden layer per time slice.
– Except that they use the same weights at
every time slice and they get input at every
time slice.
• They have the ability to remember information
in their hidden state for a long time.
– But it's very hard to train them to use this
potential.
[Figure: RNN unrolled in time: an input, hidden, and output unit at each time slice, with hidden→hidden connections along the time axis]
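A toy sketch of the recurrence in Python, using scalar weights for clarity; the names `rnn_step` and `run_sequence` and the weight values are illustrative assumptions:

```python
import math

def rnn_step(h, x, w_h, w_x, b):
    """One time slice: h_t = tanh(w_h * h_{t-1} + w_x * x_t + b).
    The same weights (w_h, w_x, b) are reused at every time slice."""
    return math.tanh(w_h * h + w_x * x + b)

def run_sequence(xs, w_h=0.5, w_x=1.0, b=0.0):
    """Unroll the recurrence over a whole input sequence."""
    h = 0.0  # initial hidden state
    for x in xs:
        h = rnn_step(h, x, w_h, w_x, b)
    return h
```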
72. An example of what recurrent neural nets can now do
(to whet your interest!)
• Ilya Sutskever (2011) trained a special type of recurrent neural net to
predict the next character in a sequence.
• After training for a long time on a string of half a billion characters
from English Wikipedia, he got it to generate new text.
– It generates by predicting the probability distribution for the next
character and then sampling a character from this distribution.
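The generation step described here (predict a distribution over the next character, then sample from it) can be sketched in Python; `sample_char` and its dict-of-probabilities input are illustrative assumptions, not Sutskever's code:

```python
import random

def sample_char(probs, rng=random):
    """Sample one character from a predicted next-character distribution,
    given as a dict mapping characters to probabilities that sum to 1."""
    r = rng.random()
    cum = 0.0
    for ch, p in probs.items():
        cum += p
        if r < cum:
            return ch
    return ch  # guard against floating-point rounding at the tail

# With all mass on one character, sampling is deterministic:
print(sample_char({"a": 1.0}))  # a
```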
73. Symmetrically connected networks
• These are like recurrent networks, but the connections between units
are symmetrical (they have the same weight in both directions).
– John Hopfield (and others) realized that symmetric networks are
much easier to analyze than recurrent networks.
– They are also more restricted in what they can do, because they
obey an energy function.
• For example, they cannot model cycles.
• Symmetrically connected nets without hidden units are called
“Hopfield nets”.
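A minimal Hopfield-style sketch in Python, assuming bipolar states in {−1, +1} and a symmetric weight matrix with zero diagonal (names illustrative). Because the weights are symmetric, each asynchronous update can only keep or lower the energy:

```python
def energy(W, s):
    """E = -1/2 * sum_ij w_ij * s_i * s_j for symmetric W."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def update(W, s):
    """One asynchronous sweep: set each unit to the sign of its net input."""
    s = list(s)
    for i in range(len(s)):
        net = sum(W[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if net >= 0 else -1
    return s

W = [[0, 1], [1, 0]]          # two units that "want" to agree
s0 = [1, -1]
s1 = update(W, s0)
print(s1, energy(W, s1) <= energy(W, s0))  # [-1, -1] True
```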
74. Symmetrically connected networks
with hidden units
• These are called “Boltzmann machines”.
– They are much more powerful models than Hopfield nets.
– They are less powerful than recurrent neural networks.
– They have a beautifully simple learning algorithm.
75. Basic models of ANN
Basic Models of ANN
Interconnections Learning rules Activation function
76. Learning
• It’s a process by which a NN adapts itself
to a stimulus by making proper parameter
adjustments, resulting in the production of
the desired response
• Two kinds of learning
– Parameter learning:- connection weights are
updated
– Structure Learning:- change in network
structure
77. Training
• The process of modifying the weights in
the connections between network layers
with the objective of achieving the
expected output is called training a
network.
• This is achieved through
– Supervised learning
– Unsupervised learning
– Reinforcement learning
78. Classification of learning
• Supervised learning:-
– Learn to predict an output when given an input
vector.
• Unsupervised learning
– Discover a good internal representation of the
input.
• Reinforcement learning
– Learn to select an action to maximize payoff.
79. Supervised Learning
• Child learns from a teacher
• Each input vector requires a corresponding
target vector.
• Training pair=[input vector, target vector]
[Figure: supervised learning: the neural network with weights W maps the input X to the actual output Y; an error signal generator compares Y with the desired output D and feeds the error (D − Y) back as error signals to adjust W]
80. Two types of supervised learning
• Each training case consists of an input vector x and a
target output t.
• Regression: The target output is a real number or a whole
vector of real numbers.
– The price of a stock in 6 months' time.
– The temperature at noon tomorrow.
• Classification: The target output is a class label.
– The simplest case is a choice between 1 and 0.
– We can also have multiple alternative labels.
81. Unsupervised
Learning
• How a fish or tadpole learns
• All similar input patterns are grouped together as clusters.
• If a matching input pattern is not found, a new cluster is formed
• One major aim is to create an internal representation of the input
that is useful for subsequent supervised or reinforcement learning.
• It provides a compact, low-dimensional representation of the input.
82. Self-organizing
• In unsupervised learning there is no
feedback
• The network must discover patterns,
regularities, and features in the input data
on its own
• While doing so, the network might change
its parameters
• This process is called self-organizing
84. When is reinforcement learning used?
• When only limited information is available
about the target output values (critic information)
• Learning based on this critic information is
called reinforcement learning, and the
feedback sent is called the reinforcement
signal
• Feedback in this case is only evaluative
and not instructive
85. Basic models of ANN
Basic Models of ANN
Interconnections Learning rules Activation function
86. Activation Functions
1. Identity function: f(x) = x for all x
2. Binary step function
3. Bipolar step function
4. Sigmoidal functions: continuous functions
5. Ramp function
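These five activation functions can be sketched directly in Python; the thresholds and slope parameters below are illustrative defaults:

```python
import math

def identity(x):
    """f(x) = x for all x."""
    return x

def binary_step(x, theta=0.0):
    """1 if x reaches the threshold theta, else 0."""
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):
    """+1 if x reaches the threshold theta, else -1."""
    return 1 if x >= theta else -1

def binary_sigmoid(x, lam=1.0):
    """Continuous S-shaped function with steepness lam."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def ramp(x):
    """0 below 0, x on [0, 1], saturates at 1 above 1."""
    return 0.0 if x < 0 else (x if x <= 1 else 1.0)
```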
87. Some learning algorithms we will learn
are
• Supervised:
• Adaline, Madaline
• Perceptron
• Back-propagation (multilayer perceptrons)
• Radial Basis Function Networks
• Unsupervised
• Competitive Learning
• Kohonen self-organizing map
• Learning vector quantization
• Hebbian learning
88. Neural processing
• Recall: the processing phase of a NN, whose
objective is to retrieve information; the
process of computing output o for a given input x
• Basic forms of neural information
processing
– Auto association
– Hetero association
– Classification
89. Neural processing-Autoassociation
• Set of patterns can be
stored in the network
• If a pattern similar to a
member of the stored
set is presented, an
association with the
closest stored
pattern is made
90. Neural Processing- Heteroassociation
• Associations between
pairs of patterns are
stored
• Distorted input pattern
may cause correct
heteroassociation at
the output
91. Neural processing-Classification
• Set of input patterns is
divided into a number
of classes or
categories
• In response to an
input pattern from the
set, the classifier is
supposed to recall the
information regarding
class membership of
the input pattern.
93. Hebbian Learning Rule
• The learning signal is equal to the neuron’s
output
FEED FORWARD UNSUPERVISED LEARNING
94. Features of Hebbian Learning
• Feedforward unsupervised learning
• “When an axon of cell A is near enough
to excite cell B and repeatedly and
persistently takes part in firing it, some
growth process or change takes place in
one or both cells, increasing the efficiency”
• If oi·xj is positive the result is an increase in the
weight; otherwise the weight decreases
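A one-line sketch of the Hebbian update Δwj = η·oi·xj in Python (the function name and the learning rate η are illustrative):

```python
def hebb_update(w, x, o, eta=1.0):
    """Hebbian rule: each weight grows by eta * o * x_j, so a weight
    increases when the output o and input x_j agree in sign."""
    return [wj + eta * o * xj for wj, xj in zip(w, x)]

# Output +1 with input (+1, -1): the first weight rises, the second falls.
print(hebb_update([0.0, 0.0], [1.0, -1.0], 1.0))  # [1.0, -1.0]
```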
95.
96. Delta Learning Rule
• Only valid for continuous activation functions
• Used in supervised training mode
• Learning signal for this rule is called delta
• The aim of the delta rule is to minimize the error over all training
patterns
97. Delta Learning Rule Contd.
Learning rule is derived from the condition of least squared error.
Calculating the gradient vector with respect to wi
Minimization of error requires the weight changes to be in the negative
gradient direction
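A sketch of one delta-rule step in Python, assuming the sigmoid as the continuous activation function (so f′(net) = o(1 − o)); the names are illustrative:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def delta_update(w, x, d, eta=0.1):
    """Delta rule: Δw_j = eta * (d - o) * f'(net) * x_j, moving the
    weights along the negative gradient of the squared error."""
    net = sum(wj * xj for wj, xj in zip(w, x))
    o = sigmoid(net)
    delta = (d - o) * o * (1.0 - o)  # error times sigmoid derivative
    return [wj + eta * delta * xj for wj, xj in zip(w, x)]

# Desired output 1 with output 0.5: both weights move upward.
print(delta_update([0.0, 0.0], [1.0, 1.0], 1.0))
```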
98. Widrow-Hoff learning Rule
• Also called the least mean square (LMS) learning rule
• Introduced by Widrow (1962), used in supervised learning
• Independent of the activation function
• Special case of the delta learning rule wherein the activation function is the
identity function, i.e. f(net) = net
• Minimizes the squared error between the desired output value di
and neti
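A sketch of one Widrow-Hoff (LMS) step in Python; with f(net) = net the update reduces to Δwj = η(di − neti)xj (the function name is illustrative):

```python
def lms_update(w, x, d, eta=0.1):
    """Widrow-Hoff / LMS rule: identity activation, so the error
    is measured directly against the net input."""
    net = sum(wj * xj for wj, xj in zip(w, x))
    return [wj + eta * (d - net) * xj for wj, xj in zip(w, x)]

# Repeated updates on one pattern drive the output toward the target.
w = [0.0]
for _ in range(50):
    w = lms_update(w, [1.0], 1.0, eta=0.5)
print(w)  # close to [1.0]
```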
100. Winner-Take-All Learning rule Contd…
• Can be explained for a layer of neurons
• Example of competitive learning and used for
unsupervised network training
• Learning is based on the premise that one of the
neurons in the layer has a maximum response
due to the input x
• This neuron is declared the winner, and its weight
vector is the one that gets updated
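A sketch of one winner-take-all step in Python, assuming the standard competitive update that moves only the winner's weights toward the input x (names and the learning rate are illustrative):

```python
def winner_take_all(W, x, eta=0.5):
    """Competitive learning: the neuron with the maximum net input
    wins, and only its weight row moves toward the input x."""
    nets = [sum(wj * xj for wj, xj in zip(row, x)) for row in W]
    m = nets.index(max(nets))  # index of the winning neuron
    W = [list(row) for row in W]
    W[m] = [wj + eta * (xj - wj) for wj, xj in zip(W[m], x)]
    return m, W

# Neuron 0 points along (1, 0), so it wins for an input near that axis.
m, W2 = winner_take_all([[1.0, 0.0], [0.0, 1.0]], [0.8, 0.2])
print(m, W2[0])  # 0 [0.9, 0.1]
```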
103. Linear Separability
• Separation of the input space into regions
is based on whether the network response
is positive or negative
• The line of separation is called the linearly
separable line.
• Examples:
– The AND and OR functions are linearly
separable
– The XOR function is linearly inseparable
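This can be illustrated with a single threshold unit in Python; the weights and thresholds below are one standard textbook choice, not the only one:

```python
def threshold_unit(w1, w2, b):
    """A single threshold neuron: its decision boundary is the line
    w1*x1 + w2*x2 + b = 0, so it can only realize linearly separable
    functions."""
    return lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

AND = threshold_unit(1, 1, -1.5)  # fires only for (1, 1)
OR = threshold_unit(1, 1, -0.5)   # fires for any input containing a 1
# No choice of (w1, w2, b) reproduces XOR: its positive cases (0,1) and
# (1,0) cannot be separated from (0,0) and (1,1) by a single line.

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 0), OR(0, 1))    # 0 1
```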
104. Hebb Network
• The Hebb learning rule is the simplest one
• Learning in the brain is performed by
changes in the synaptic gap
• When an axon of cell A is near enough to excite
cell B and repeatedly keeps firing it, some growth
process takes place in one or both cells
• According to the Hebb rule, the weight vector
increases proportionately to the product of the
input and the learning signal.
105. Flow chart of Hebb training algorithm
Start
→ Initialize weights (and bias)
→ For each training pair s:t
– Activate input: xi = si
– Activate output: y = t
– Weight update: wi(new) = wi(old) + xi·y
– Bias update: b(new) = b(old) + y
→ Stop when all training pairs have been processed
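The flow chart can be sketched in Python; the bipolar AND training set below is a standard textbook example, not taken from the slides:

```python
def train_hebb(samples):
    """Hebb training, following the flow chart: for each pair (s, t),
    activate x = s and y = t, then update w_i += x_i * y and b += y."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for s, t in samples:
        x, y = s, t
        w = [wi + xi * y for wi, xi in zip(w, x)]
        b = b + y
    return w, b

# Bipolar AND function as the training set:
and_samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
print(train_hebb(and_samples))  # ([2.0, 2.0], -2.0)
```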