The document provides an overview of artificial neural networks (ANNs). It discusses how ANNs are constructed to model the human brain and can perform tasks like pattern matching and classification. The key points are:
- ANNs consist of interconnected nodes that operate in parallel; each connection between nodes carries a weight. Each node receives weighted inputs, and its activation level is calculated from them.
- Early models include the McCulloch-Pitts neuron and the Hebb network. Learning can be supervised, unsupervised, or reinforcement-based. Common activation functions and learning rules like backpropagation and Hebbian learning are described.
- Terminology includes weights, bias, thresholds, learning rates, and more. Different network architectures, such as feedforward networks, are covered.
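The node computation described above — weighted inputs summed and passed through an activation — can be sketched as a minimal Python function. The weights, bias, and sigmoid activation here are illustrative choices, not taken from the document:

```python
import math

def neuron_output(inputs, weights, bias):
    """One node's activation: the weighted sum of its inputs
    plus a bias, passed through a sigmoid activation function."""
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))

# Example: two inputs with arbitrary illustrative weights.
y = neuron_output([1.0, 0.5], [0.4, -0.2], bias=0.1)
print(round(y, 4))  # net = 0.4, so sigmoid(0.4) ≈ 0.5987
```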
This PPT summarizes the entire content in brief. It can be read together with my book on ANNs, titled "SOFT COMPUTING" (Watson Publication).
This document provides an overview of neural networks and their components. It discusses:
1. The basic structure and functioning of artificial neural networks, including neurons, connections, weights, and activation functions.
2. Different types of neural network architectures like single layer feedforward, multilayer feedforward, and recurrent networks.
3. Neural network learning methods, including supervised learning, unsupervised learning, and reinforcement learning.
4. Key concepts in neural networks like weights, bias, thresholds, learning rates, and momentum factors.
Artificial neural networks (ANNs) are computational models inspired by biological neural networks in the human brain. ANNs contain artificial neurons that are interconnected in layers and transmit signals to one another. The connections between neurons are associated with weights that are adjusted during training to produce the desired output. ANNs can learn complex patterns and relationships through a process of trial and error. They are widely used for tasks like pattern recognition, classification, prediction, and data clustering.
The document discusses artificial neural networks (ANNs). It describes ANNs as computing systems composed of interconnected processing elements that mimic the human brain. ANNs can solve complex problems in parallel and are fault tolerant. The key components of an ANN are the input, hidden, and output layers. Feedforward and feedback networks are described. Backpropagation is used to train ANNs by adjusting weights and biases based on error. Training can use supervised, unsupervised, or reinforcement learning. Pattern (per-example) and batch modes of training are also outlined.
NEURAL NETWORK IN MACHINE LEARNING FOR STUDENTS (hemasubbu08)
- Artificial neural networks are computational models inspired by the human brain that use algorithms to mimic brain functions. They are made up of simple processing units (neurons) connected in a massively parallel distributed system. Knowledge is acquired through a learning process that adjusts synaptic connection strengths.
- Neural networks can be used for pattern recognition, function approximation, and associative memory in domains like speech recognition, image classification, and financial prediction. They offer flexible inputs, resistance to errors, and fast evaluation, though their results are difficult to interpret.
The document discusses artificial neural networks and backpropagation. It provides background on neural networks, including their biological inspiration from the human brain. It describes the basic components of artificial neurons and how they are connected in networks. It explains feedforward neural networks and discusses limitations of single-layer perceptrons. The document then introduces multi-layer feedforward networks and the backpropagation algorithm, which allows training of hidden layers by propagating error backwards. It provides details on calculating error terms and updating weights in backpropagation training.
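The backpropagation procedure summarized above — compute error terms at the output, propagate them backwards to the hidden layer, then update weights — can be sketched for one training step. The 2-2-1 layer sizes, learning rate, and sigmoid activation are assumptions for illustration, not details from the document:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny 2-2-1 network with random starting weights (sizes are illustrative).
W1 = rng.normal(size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1)); b2 = np.zeros(1)

x = np.array([0.5, -0.3])   # one training input
t = np.array([1.0])         # its target output
lr = 0.1                    # learning rate

# Forward pass.
h = sigmoid(x @ W1 + b1)    # hidden-layer activations
y = sigmoid(h @ W2 + b2)    # network output
err_before = ((y - t) ** 2).item()

# Backward pass: error terms propagate from the output layer to the hidden layer.
delta_out = (y - t) * y * (1 - y)             # output-layer error term
delta_hid = (delta_out @ W2.T) * h * (1 - h)  # hidden-layer error term

# Gradient-descent updates to weights and biases.
W2 -= lr * np.outer(h, delta_out); b2 -= lr * delta_out
W1 -= lr * np.outer(x, delta_hid); b1 -= lr * delta_hid

# One update step should reduce the squared error on this example.
err_after = ((sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2) - t) ** 2).item()
print(err_after < err_before)
```

The key design point is the order of operations: both error terms are computed before any weight is changed, so the hidden-layer delta uses the same `W2` that produced the forward pass.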
The document discusses different types of machine learning paradigms including supervised learning, unsupervised learning, and reinforcement learning. It then provides details on artificial neural networks, describing them as consisting of simple processing units that communicate through weighted connections, similar to neurons in the human brain. The document outlines key aspects of artificial neural networks like processing units, connections between units, propagation rules, and learning methods.
Basic definitions, terminology, and the working of ANNs are explained. This PPT also shows how an ANN can be implemented in MATLAB. The material explains the feedforward backpropagation algorithm in detail.
Neural networks are inspired by biological neurons and are used to learn relationships in data. The document defines an artificial neural network as a large number of interconnected processing elements called neurons that learn from examples. It outlines the key components of artificial neurons including weights, inputs, summation, and activation functions. Examples of neural network architectures include single-layer perceptrons, multi-layer perceptrons, convolutional neural networks, and recurrent neural networks. Common applications of neural networks include pattern recognition, data classification, and processing sequences.
Neural networks of artificial intelligence (alldesign)
An artificial neural network (ANN) is a machine learning approach that models the human brain. It consists of artificial neurons that are connected in a network. Each neuron receives inputs, performs calculations, and outputs a value. ANNs can be trained to learn patterns from data through examples to perform tasks like classification, prediction, clustering, and association. Common ANN architectures include multilayer perceptrons, convolutional neural networks, and recurrent neural networks.
This document provides an overview of neural networks. It discusses that artificial neural networks (ANNs) are computational models inspired by the human nervous system. ANNs are composed of interconnected processing units (neurons) that learn by example. There are typically three layers in a neural network: an input layer, hidden layers that process inputs, and an output layer. Neural networks can learn complex patterns and are used for applications like pattern recognition. The document also describes how biological neurons function and the key components of artificial neurons and neural network models. It explains different learning methods for neural networks including supervised, unsupervised, and reinforcement learning.
- An artificial neural network (ANN) is a computational model inspired by biological neural networks in the brain. ANNs contain interconnected nodes that can learn relationships and patterns from data using a process similar to biological learning.
- The basic ANN architecture consists of an input layer, hidden layers, and an output layer. Information flows from the input to output layers through the hidden layers as the network learns.
- There are different types of ANNs that vary in their structure and learning methods, including multilayer perceptrons, convolutional neural networks, and recurrent neural networks. ANNs can perform tasks using supervised, unsupervised, or reinforcement learning.
- ANNs have many applications, including face recognition, ridesharing, and handwriting recognition.
- An artificial neural network (ANN) is a computational model inspired by biological neural networks in the brain. ANNs contain interconnected nodes that can learn relationships and patterns from data through a process of training.
- The basic ANN architecture includes an input layer, hidden layers, and an output layer. Information flows from the input to the output layers through the hidden layers as the network learns.
- There are different types of ANNs that vary in their structure and learning methods, including multilayer perceptrons, convolutional neural networks, and recurrent neural networks. ANNs can perform tasks like face recognition, prediction, and classification through supervised, unsupervised, or reinforcement learning.
- ANNs have advantages such as fault tolerance, though they also have limitations.
This document discusses artificial neural networks. It defines neural networks as computational models inspired by the human brain that are used for tasks like classification, clustering, and pattern recognition. The key points are:
- Neural networks contain interconnected artificial neurons that can perform complex computations. They are inspired by biological neurons in the brain.
- Common neural network types are feedforward networks, where data flows from input to output, and recurrent networks, which contain feedback loops.
- Neural networks are trained using algorithms like backpropagation that minimize error by adjusting synaptic weights between neurons.
- Neural networks have many applications including voice recognition, image recognition, robotics and more due to their ability to learn from large amounts of data.
This presentation covers the basics of neural networks along with the backpropagation training algorithm, and includes code for image classification at the end.
Introduction to Neural Networks (undergraduate course), Lecture 7 of 9 (Randa Elanwar)
This document provides an overview of neural network learning techniques including supervised, unsupervised, and reinforcement learning. It discusses the Hebbian learning rule, which updates weights based on the activation of connected neurons. Examples are provided to illustrate how the Hebbian rule can be used to train networks without error signals by detecting correlations in input-output patterns.
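The Hebbian rule described above — strengthening a weight when the connected input and output units are active together, with no error signal — can be sketched as follows. The learning rate and input pattern are illustrative assumptions:

```python
# Hebbian learning rule: delta_w = eta * x * y, so a weight grows
# whenever its input unit and the output unit fire together.
def hebbian_update(weights, x, y, eta=0.1):
    return [w + eta * xi * y for w, xi in zip(weights, x)]

w = [0.0, 0.0]
# Present a pattern where both input units and the output are active,
# repeated three times: correlated activity strengthens the weights.
for _ in range(3):
    w = hebbian_update(w, x=[1.0, 1.0], y=1.0)
print(w)  # each weight has grown by 3 * 0.1 * 1 * 1 = 0.3
```

Note that no target output or error term appears anywhere — the update detects correlation in the input-output patterns, which is exactly what distinguishes Hebbian learning from error-driven rules like backpropagation.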
- Artificial neural networks are inspired by biological neurons and are made up of artificial neurons (perceptrons).
- A perceptron receives multiple inputs, assigns weights to them, calculates the weighted sum, and passes it through an activation function to produce an output.
- Weights allow the perceptron to learn the importance of different inputs and change the orientation of the decision boundary. The bias helps shift the activation function curve.
- Common activation functions include sigmoid, tanh, ReLU, and leaky ReLU. They introduce non-linearity and help address issues like vanishing gradients.
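The four activation functions named in the bullets above can be written out directly; the leaky-ReLU slope of 0.01 is a common default, assumed here rather than taken from the source:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return math.tanh(z)                 # squashes to (-1, 1)

def relu(z):
    return max(0.0, z)                  # zero for negative inputs

def leaky_relu(z, alpha=0.01):
    # A small slope on the negative side keeps gradients from dying,
    # addressing the "dead ReLU" variant of the vanishing-gradient issue.
    return z if z > 0 else alpha * z

for f in (sigmoid, tanh, relu, leaky_relu):
    print(f.__name__, round(f(-2.0), 4), round(f(2.0), 4))
```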
Deep Learning Interview Questions And Answers | AI & Deep Learning Interview ... (Simplilearn)
- TensorFlow is a popular deep learning library that provides both C++ and Python APIs to make working with deep learning models easier. It supports both CPU and GPU computing and has a faster compilation time than other libraries like Keras and Torch.
- Tensors are multidimensional arrays that represent inputs, outputs, and parameters of deep learning models in TensorFlow. They are the fundamental data structure that flows through graphs in TensorFlow.
- The main programming elements in TensorFlow include constants, variables, placeholders, and sessions. Constants are parameters whose values do not change, variables allow adding trainable parameters, placeholders feed data from outside the graph, and sessions run the graph to evaluate nodes.
This document provides an overview of artificial neural networks. It begins with definitions of artificial neural networks and how they are analogous to biological neural networks. It then discusses the basic structure of artificial neural networks, including different types of networks like feedforward, recurrent, and convolutional networks. Key concepts in artificial neural networks like neurons, weights, forward/backward propagation, and overfitting/underfitting are also explained. The document concludes with limitations of neural networks and references.
This document provides an overview of a neural networks course, including:
- The course is divided into theory and practice parts covering topics like supervised and unsupervised learning algorithms.
- Students must register for the practicum component by email. Course materials will be available online.
- Evaluation is based on a final exam and programming assignments done in pairs using Matlab.
- An introduction to neural networks covers basic concepts like network architectures, neuron models, learning algorithms, and applications.
Neural networks are mathematical models inspired by biological neural networks. They are useful for pattern recognition and data classification through a learning process of adjusting synaptic connections between neurons. A neural network maps input nodes to output nodes through an arbitrary number of hidden nodes. It is trained by presenting examples to adjust weights using methods like backpropagation to minimize error between actual and predicted outputs. Neural networks have advantages like noise tolerance and not requiring assumptions about data distributions. They have applications in finance, marketing, and other fields, though designing optimal network topology can be challenging.
This document provides an overview of neural networks and fuzzy systems. It outlines a course on the topic, which is divided into two parts: neural networks and fuzzy systems. For neural networks, it covers fundamental concepts of artificial neural networks including single and multi-layer feedforward networks, feedback networks, and unsupervised learning. It also discusses the biological neuron, typical neural network architectures, learning techniques such as backpropagation, and applications of neural networks. Popular activation functions like sigmoid, tanh, and ReLU are also explained.
2. Learning Objectives
• Fundamentals of ANN
• Comparison between biological neuron
and artificial neuron
• Basic models of ANN
• Different types of connections of NN,
Learning and activation function
• Basic fundamental neuron model-
McCulloch-Pitts neuron and Hebb network
3. Fundamental concept
• NN are constructed and implemented to
model the human brain.
• Performs various tasks such as pattern-
matching, classification, optimization
function, approximation, vector
quantization and data clustering.
• These tasks are difficult for traditional
computers
4. ANN
• ANN possess a large number of processing
elements called nodes/neurons which operate in
parallel.
• Neurons are connected with others by
connection link.
• Each link is associated with weights which
contain information about the input signal.
• Each neuron has an internal state of its own
which is a function of the inputs that neuron
receives- Activation level
10. Biological Neuron
• Has 3 parts
– Soma or cell body: where the cell nucleus is located
– Dendrites: nerve fibres connected to the cell body
– Axon: carries impulses of the neuron
• End of axon splits into fine strands
• Each strand terminates into a bulb-like organ called synapse
• Electric impulses are passed between the synapse and dendrites
• Synapses are of two types
– Inhibitory:- impulses hinder the firing of the receiving cell
– Excitatory:- impulses cause the firing of the receiving cell
• A neuron fires when the weighted sum of received impulses
exceeds the threshold value during the latent summation period
• After carrying a pulse an axon fiber is in a state of complete
nonexcitability for a certain time called the refractory period.
12. Features of McCulloch-Pitts model
• Allows binary 0,1 states only
• Operates under a discrete-time
assumption
• Weights and the neurons’ thresholds are
fixed in the model, and there is no interaction
among network neurons
• Just a primitive model
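The behavior described above can be sketched in a few lines of Python. The AND-gate realization shown (both excitatory weights 1, threshold 2) is an assumed illustration, not taken from the slides:

```python
def mcculloch_pitts(inputs, weights, threshold):
    """McCulloch-Pitts neuron: binary 0/1 inputs, fixed weights and a
    fixed threshold; fires (outputs 1) iff the weighted sum reaches it."""
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# AND gate with assumed parameters: two excitatory weights of 1, threshold 2
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts([x1, x2], [1, 1], 2))
```

Because weights and threshold are fixed, the neuron cannot learn; it only computes the logic function its parameters encode.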
13. General symbol of neuron
consisting of processing node and
synaptic connections
14. Neuron Modeling for ANN
The mapping from net input to output is referred to as the
activation function; its domain is the set of net activation values.
The net value is the scalar product of the weight and input vectors:
the neuron, as a processing node, performs a summation of
its weighted inputs.
15. Activation function
• Bipolar binary and unipolar binary are
called hard-limiting activation functions and
are used in discrete neuron models
• Unipolar continuous and bipolar
continuous are called soft-limiting
activation functions and have sigmoidal
characteristics
18. Common models of neurons
• Binary perceptrons
• Continuous perceptrons
19. Comparison between brain and ANN
• Speed: the brain takes a few ms per operation but uses massively
parallel processing; an ANN takes a few ns
• Size and complexity: the brain has about 10^11 neurons and 10^15
interconnections; ANN size depends on the designer
• Storage capacity: the brain stores information in its interconnections
(synapses) with no loss of memory; an ANN uses contiguous memory
locations, and loss of memory may sometimes happen
• Tolerance: the brain has fault tolerance; an ANN has none, and
information gets disrupted when interconnections are disconnected
• Control mechanism: complicated in the brain, involving chemicals in
the biological neuron; simpler in an ANN
20. Basic models of ANN
Basic Models of ANN
Interconnections Learning rules Activation function
23. Feedforward Network
• Its output and input vectors are
o = [o1, o2, ..., om]T and x = [x1, x2, ..., xn]T respectively
• Weight wij connects the i’th neuron with the
j’th input. The activation rule of the i’th neuron is
oi = f(Σj wij xj)
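The activation rule above can be sketched as a minimal single-layer feedforward pass. The bipolar hard limiter and the sample weights are assumed for illustration:

```python
def feedforward(x, W, f):
    """Single-layer feedforward pass: o_i = f(sum_j w_ij * x_j),
    where row i of W holds the weights into neuron i."""
    return [f(sum(w * xj for w, xj in zip(row, x))) for row in W]

def sign(net):
    """Bipolar hard-limiting activation."""
    return 1 if net >= 0 else -1

# Two neurons, two inputs (assumed sample weights)
print(feedforward([1, -1], [[0.5, -0.5], [1.0, 1.0]], sign))  # [1, 1]
```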
25. Feedback network
When outputs are directed back as
inputs to same or preceding layer
nodes it results in the formation of
feedback networks
26. Lateral feedback
If the feedback of the output of the processing elements is directed back
as input to the processing elements in the same layer then it is called
lateral feedback
27. Recurrent networks
• Single node with own feedback
• Competitive nets
• Single-layer recurrent nets
• Multilayer recurrent networks
Feedback networks with closed loop are called Recurrent Networks. The
response at the k+1’th instant depends on the entire history of the network
starting at k=0.
Automaton: A system with discrete time inputs and a discrete data
representation is called an automaton
28. Basic models of ANN
Basic Models of ANN
Interconnections Learning rules Activation function
29. Learning
• It’s a process by which a NN adapts itself
to a stimulus by making proper parameter
adjustments, resulting in the production of
desired response
• Two kinds of learning
– Parameter learning:- connection weights are
updated
– Structure Learning:- change in network
structure
30. Training
• The process of modifying the weights in
the connections between network layers
with the objective of achieving the
expected output is called training a
network.
• This is achieved through
– Supervised learning
– Unsupervised learning
– Reinforcement learning
32. Supervised Learning
• Child learns from a teacher
• Each input vector requires a
corresponding target vector.
• Training pair=[input vector, target vector]
[Block diagram: input X enters the neural network with weights W,
producing actual output Y; an error-signal generator compares Y with
the desired output D and sends error (D - Y) signals back to adjust W]
34. Unsupervised Learning
• How a fish or tadpole learns
• All similar input patterns are grouped together as
clusters.
• If a matching input pattern is not found a new
cluster is formed
36. Self-organizing
• In unsupervised learning there is no
feedback
• The network must discover patterns,
regularities, and features in the input data
on its own
• While doing so, the network changes its
parameters
• This process is called self-organizing
38. When is reinforcement learning
used?
• If less information is available about the
target output values (critic information)
• Learning based on this critic information is
called reinforcement learning and the
feedback sent is called reinforcement
signal
• Feedback in this case is only evaluative
and not instructive
39. Basic models of ANN
Basic Models of ANN
Interconnections Learning rules Activation function
40. Activation Function
1. Identity function: f(x) = x for all x
2. Binary step function: f(x) = 1 if x >= θ, 0 if x < θ
3. Bipolar step function: f(x) = 1 if x >= θ, -1 if x < θ
4. Sigmoidal functions: continuous functions
5. Ramp function: f(x) = 1 if x > 1; x if 0 <= x <= 1; 0 if x < 0
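A minimal Python sketch of these activation functions, assuming a threshold of 0 for the step functions:

```python
import math

def identity(x):                 # f(x) = x for all x
    return x

def binary_step(x, theta=0):     # 1 if x >= theta, else 0
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0):    # 1 if x >= theta, else -1
    return 1 if x >= theta else -1

def unipolar_sigmoid(x):         # continuous, range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):          # continuous, range (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def ramp(x):                     # 0 below 0, linear on [0, 1], 1 above 1
    return 0 if x < 0 else (1 if x > 1 else x)
```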
41. Some learning algorithms we will
learn are
• Supervised:
• Adaline, Madaline
• Perceptron
• Back Propagation
• multilayer perceptrons
• Radial Basis Function Networks
• Unsupervised
• Competitive Learning
• Kohonen self-organizing map
• Learning vector quantization
• Hebbian learning
42. Neural processing
• Recall: the processing phase of a NN whose
objective is to retrieve information; the
process of computing output o for a given input x
• Basic forms of neural information
processing
– Auto association
– Hetero association
– Classification
43. Neural processing-Autoassociation
• Set of patterns can be
stored in the network
• If a pattern similar to
a member of the
stored set is
presented, an
association with the
input of closest stored
pattern is made
45. Neural processing-Classification
• Set of input patterns
is divided into a
number of classes or
categories
• In response to an
input pattern from the
set, the classifier is
supposed to recall the
information regarding
class membership of
the input pattern.
46. Important terminologies of ANNs
• Weights
• Bias
• Threshold
• Learning rate
• Momentum factor
• Vigilance parameter
• Notations used in ANN
47. Weights
• Each neuron is connected to every other
neuron by means of directed links
• Links are associated with weights
• Weights contain information about the
input signal and is represented as a matrix
• Weight matrix also called connection
matrix
48. Weight matrix
W= 1
2
3
.
.
.
.
.
T
T
T
T
n
w
w
w
w
=
11 12 13 1
21 22 23 2
1 2 3
...
...
..................
...................
...
m
m
n n n nm
w w w w
w w w w
w w w w
49. Weights contd…
• wij is the weight from processing element ”i” (source
node) to processing element “j” (destination node)
[Figure: inputs X1, ..., Xi, ..., Xn feed neuron Yj through weights
w1j, ..., wij, ..., wnj, together with bias bj]
The net input to neuron Yj is
y_inj = Σ (i=0 to n) xi wij = x0 w0j + x1 w1j + x2 w2j + .... + xn wnj
With x0 = 1 and w0j = bj this becomes
y_inj = bj + Σ (i=1 to n) xi wij
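The net-input computation y_inj = bj + Σ xi wij can be sketched directly; the sample inputs, weights, and bias are assumed values:

```python
def net_input(x, w, b):
    """Net input to neuron Yj: y_inj = bj + sum_i xi * wij."""
    return b + sum(xi * wi for xi, wi in zip(x, w))

# Assumed sample values: 0.5 + (1)(0.2) + (-1)(0.4) + (1)(-0.1) = 0.2
print(net_input([1, -1, 1], [0.2, 0.4, -0.1], 0.5))
```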
50. Activation Functions
• Used to calculate the output response of a
neuron.
• The sum of the weighted input signals is passed through
an activation function to obtain the response.
• Activation functions can be linear or nonlinear
• Already dealt
– Identity function
– Single/binary step function
– Discrete/continuous sigmoidal function.
51. Bias
• Bias is like another weight. It is included by
adding a component x0 = 1 to the input
vector X.
• X=(1,X1,X2…Xi,…Xn)
• Bias is of two types
– Positive bias: increase the net input
– Negative bias: decrease the net input
52. Why Bias is required?
• The relationship between input and output
is given by the equation of a straight line,
y = mx + c
[Figure: input X with weight m and bias C producing output Y = mX + C]
53. Threshold
• Set value based upon which the final output of
the network may be calculated
• Used in activation function
• The activation function using threshold can be
defined as
f(net) = 1 if net >= θ, and f(net) = -1 if net < θ
54. Learning rate
• Denoted by α.
• Used to control the amount of weight
adjustment at each step of training
• Learning rate ranging from 0 to 1
determines the rate of learning in each
time step
55. Other terminologies
• Momentum factor:
– used for convergence when momentum factor
is added to weight updation process.
• Vigilance parameter:
– Denoted by ρ
– Used to control the degree of similarity
required for patterns to be assigned to the
same cluster
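The momentum factor's role in the weight-updation process can be sketched as a single step; the alpha and mu values are assumed for illustration:

```python
def momentum_step(w, grad, prev_dw, alpha=0.1, mu=0.9):
    """One weight update with a momentum term:
    dw = -alpha * grad + mu * prev_dw (alpha, mu are assumed values).
    Reusing a fraction of the previous change smooths the trajectory
    and helps convergence."""
    dw = [-alpha * g + mu * p for g, p in zip(grad, prev_dw)]
    w_new = [wi + d for wi, d in zip(w, dw)]
    return w_new, dw
```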
57. Hebbian Learning Rule
• The learning signal is equal to the
neuron’s output
FEED FORWARD UNSUPERVISED LEARNING
58. Features of Hebbian Learning
• Feedforward unsupervised learning
• “When an axon of a cell A is near enough
to excite a cell B and repeatedly and
persistently takes part in firing it, some
growth process or change takes place in
one or both cells, increasing the efficiency”
• If oixj is positive, the result is an increase in
weight; otherwise the weight decreases
60. • For the same inputs for bipolar continuous
activation function the final updated weight
is given by
61. Perceptron Learning rule
• Learning signal is the difference between the
desired and actual neuron’s response
• Learning is supervised
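A one-step sketch of this rule: the learning signal is (d - o), where o is the bipolar hard-limited response; the learning rate 0.1 is an assumed value:

```python
def perceptron_update(w, x, d, alpha=0.1):
    """One perceptron-rule step: o is the bipolar hard-limited response;
    w_new = w_old + alpha * (d - o) * x."""
    o = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
    return [wi + alpha * (d - o) * xi for wi, xi in zip(w, x)]
```

When the response already matches the desired output, (d - o) is zero and the weights are left unchanged.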
63. Delta Learning Rule
• Only valid for continuous activation functions
• Used in supervised training mode
• Learning signal for this rule is called delta
• The aim of the delta rule is to minimize the error over all training
patterns
64. Delta Learning Rule Contd.
Learning rule is derived from the condition of least squared error.
Calculating the gradient vector with respect to wi
Minimization of error requires the weight changes to be in the negative
gradient direction
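A single delta-rule step can be sketched with a unipolar sigmoid, for which f'(net) = o(1 - o); the learning rate is an assumed value:

```python
import math

def delta_update(w, x, d, alpha=0.5):
    """One delta-rule step with a unipolar sigmoid activation.
    The learning signal (delta) is (d - o) * f'(net), and the weights
    move in the negative gradient direction of the squared error."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    o = 1.0 / (1.0 + math.exp(-net))
    r = (d - o) * o * (1.0 - o)          # delta: (d - o) * f'(net)
    return [wi + alpha * r * xi for wi, xi in zip(w, x)]
```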
65. Widrow-Hoff learning Rule
• Also called the least mean square (LMS) learning rule
• Introduced by Widrow (1962); used in supervised learning
• Independent of the activation function
• Special case of the delta learning rule wherein the activation function is an
identity function, i.e. f(net) = net
• Minimizes the squared error between the desired output value di
and neti
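With f(net) = net, the update simplifies as sketched below (the learning rate is an assumed value):

```python
def widrow_hoff_update(w, x, d, alpha=0.1):
    """One LMS (Widrow-Hoff) step: with the identity activation
    f(net) = net, the delta rule reduces to
    w_new = w_old + alpha * (d - net) * x."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + alpha * (d - net) * xi for wi, xi in zip(w, x)]
```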
67. Winner-Take-All Learning rule
Contd…
• Can be explained for a layer of neurons
• Example of competitive learning and used for
unsupervised network training
• Learning is based on the premise that one of the
neurons in the layer has a maximum response
due to the input x
• This neuron is declared the winner, and only its
weight vector is updated
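One competitive-learning step can be sketched as follows; the winner's weights move toward the input, and the learning rate 0.2 is an assumed value:

```python
def winner_take_all(W, x, alpha=0.2):
    """One winner-take-all step: the neuron with maximum response
    (largest w . x) wins, and only its weight row moves toward x."""
    nets = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
    m = nets.index(max(nets))                          # winning neuron
    W[m] = [wi + alpha * (xi - wi) for wi, xi in zip(W[m], x)]
    return m, W
```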
70. Linear Separability
• Separation of the input space into regions
is based on whether the network response
is positive or negative
• The line of separation is called the linearly
separable line
• Examples:
– The AND and OR functions are linearly
separable
– The XOR function is linearly inseparable
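Linear separability can be checked directly for a single threshold neuron; the weight choices for AND/OR and the coarse search grid below are illustrative assumptions:

```python
def fires(w1, w2, b, x1, x2):
    """Single threshold neuron over two binary inputs."""
    return 1 if w1 * x1 + w2 * x2 + b >= 0 else 0

# AND and OR each admit a separating line (assumed weights):
assert all(fires(1, 1, -1.5, a, c) == (a and c) for a in (0, 1) for c in (0, 1))
assert all(fires(1, 1, -0.5, a, c) == (a or c) for a in (0, 1) for c in (0, 1))

# XOR: a coarse search over weights and bias finds no separating line
grid = [i / 2 for i in range(-6, 7)]
found = any(all(fires(w1, w2, b, a, c) == (a ^ c)
                for a in (0, 1) for c in (0, 1))
            for w1 in grid for w2 in grid for b in grid)
print(found)  # False
```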
71. Hebb Network
• Hebb learning rule is the simplest one
• The learning in the brain is performed by the
change in the synaptic gap
• When an axon of cell A is near enough to excite
cell B and repeatedly helps fire it, some growth
process takes place in one or both cells
• According to Hebb rule, weight vector is found to
increase proportionately to the product of the
input and learning signal.
wi(new) = wi(old) + xi y
72. Flow chart of Hebb training
algorithm
Start
Initialize weights (and bias)
For each training pair s:t
Activate input units: xi = si
Activate output unit: y = t
Weight update: wi(new) = wi(old) + xi y
Bias update: b(new) = b(old) + y
If more training pairs remain (yes), continue the loop;
otherwise (no), Stop
73. • Hebb rule can be used for pattern
association, pattern categorization, pattern
classification and over a range of other
areas
• Problem to be solved:
Design a Hebb net to implement OR
function
74. How to solve
• Use bipolar data in place of binary data
• Initially the weights and bias are set to zero:
w1 = w2 = b = 0

X1   X2   b    y
 1    1   1    1
 1   -1   1    1
-1    1   1    1
-1   -1   1   -1
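Applying the Hebb updates wi(new) = wi(old) + xi y and b(new) = b(old) + y to the four bipolar OR pairs gives the final weights w1 = w2 = b = 2:

```python
# Bipolar OR training pairs: inputs (x1, x2) and target y; bias input is 1
samples = [(( 1,  1),  1),
           (( 1, -1),  1),
           ((-1,  1),  1),
           ((-1, -1), -1)]

w1 = w2 = b = 0
for (x1, x2), y in samples:
    w1 += x1 * y              # wi(new) = wi(old) + xi * y
    w2 += x2 * y
    b  += y                   # b(new) = b(old) + y

print(w1, w2, b)              # 2 2 2
```

The trained net classifies all four OR patterns correctly: the sign of b + w1 x1 + w2 x2 matches the target y for every training pair.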