ECCDO601
Machine Learning
Semester 6/III year/2024/4
What you vibrate, you Become
Dr. Kiruthika. B M.E.,Ph.D.,
Assistant Professor-ECS-ACE
Hebbian Learning rule
• The simplest neural network (the threshold neuron) lacks the capability to learn, which is its major drawback
• In his book “The Organization of Behavior”, Donald O. Hebb proposed a mechanism to update the weights
between neurons in a neural network
• This method of weight updating enabled neurons to learn and came to be known as Hebbian Learning.
• Three major points were stated as part of this learning mechanism :
 Information is stored in the connections between neurons in a neural network, in the form of weights.
 Weight change between neurons is proportional to the product of the activation values of those neurons.
 As learning takes place, simultaneous or repeated activation of weakly connected neurons incrementally
changes the strength and pattern of the weights, leading to stronger connections
Cont…
Neuron Assembly Theory
• Repeated stimulation of weak connections between neurons leads to their incremental strengthening.
• The new weights are calculated by the equation : wi(new) = wi(old) + xi·y
Inhibitory Connections
This is another kind of connection, with the opposite response to a stimulus: here, the connection strength
decreases with repeated or simultaneous stimuli.
Cont…
• Implementation of Hebbian Learning in a Perceptron
• In the 1950s, Frank Rosenblatt concluded that the threshold neuron cannot be used for modeling cognition, as it cannot learn or
adapt from the environment, or develop capabilities for classification, recognition, or similar tasks.
A perceptron draws inspiration from a biological visual neural model with three layers, illustrated as follows :
 The Input Layer is analogous to the sensory cells in the retina, with random connections to the neurons of the succeeding
layer.
 The Association layer has threshold neurons with bi-directional connections to the response layer.
 The Response layer has threshold neurons that are interconnected with each other for competitive inhibitory signaling.
Cont…
Cont…
• Response layer neurons compete with each other by sending inhibitory signals to produce output.
• Threshold functions are set at the origin for the association and response layers. This forms the basis of
learning between these layers.
• The goal of the perceptron is to activate the correct response neurons for each input pattern.
• Hebbian Learning is inspired by the biological mechanism of neural weight adjustment. It describes how a
neuron that is initially unable to learn can be enabled to develop cognition in response to external stimuli.
These concepts are still the basis of neural learning today.
• The Hebbian Learning Rule, also known as the Hebb Learning Rule, was proposed by Donald O. Hebb. It is one of
the first and simplest learning rules for neural networks, and it is used for pattern classification. It applies to a single-
layer neural network, i.e. one with one input layer and one output layer. The input layer can have many units,
say n; the output layer has only one unit. The Hebbian rule works by updating the weights between neurons in
the network for each training sample.
Cont…
Hebbian Learning Rule Algorithm :
1. Set all weights to zero, wi = 0 for i = 1 to n, and set the bias to zero.
2. For each training pair S : t (input vector : target output), repeat steps 3-5.
3. Set the activations of the input units to the input vector, xi = si for i = 1 to n.
4. Set the output value of the output neuron, i.e. y = t.
5. Update the weights and bias by applying the Hebb rule for all i = 1 to n:
wi(new) = wi(old) + xi·y, b(new) = b(old) + y
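Steps 1-5 above can be sketched in Python as follows (a minimal illustrative sketch; the function and variable names are my own):

```python
# Minimal sketch of the Hebbian Learning Rule algorithm above.
def hebb_train(samples):
    """samples: list of (input_vector, target) pairs with bipolar values."""
    n = len(samples[0][0])
    w = [0.0] * n              # Step 1: all weights start at zero
    b = 0.0                    # ... and so does the bias
    for s, t in samples:       # Step 2: iterate over the training pairs
        x, y = s, t            # Steps 3-4: set activations and output
        for i in range(n):     # Step 5: Hebb rule, wi(new) = wi(old) + xi*y
            w[i] += x[i] * y
        b += y                 # bias update, b(new) = b(old) + y
    return w, b

# Training on the bipolar AND gate samples used in the following slides:
and_samples = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = hebb_train(and_samples)
# reproduces the worked example: w = [2, 2], bias = -2
```

Note that there is no error term: the weights change by the same amount for every pass over the data, which is why the AND example below runs exactly one pass.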
Cont…
Implementing AND Gate :
Truth Table of the AND gate using bipolar values (inputs x1, x2, bias input b, target t):
x1 x2 b t
-1 -1 1 -1
-1 1 1 -1
1 -1 1 -1
1 1 1 1
There are 4 training samples, so there will be 4 iterations. Also, the activation function used here is the Bipolar
Sigmoidal Function, so the range is [-1, 1].
Cont…
• Step 1 :
Set weight and bias to zero, w = [ 0 0 0 ]T and b = 0.
• Step 2 :
Set input vector Xi = Si for i = 1 to 4.
X1 = [ -1 -1 1 ]T
X2 = [ -1 1 1 ]T
X3 = [ 1 -1 1 ]T
X4 = [ 1 1 1 ]T
• Step 3 :
Output value is set to y = t.
Cont…
• Step 4 : Modifying weights using Hebbian Rule:
First iteration – w(new) = w(old) + x1y1 = [ 0 0 0 ]T + [ -1 -1 1 ]T . [ -1 ] = [ 1 1 -1 ]T
For the second iteration, the final weight of the first one will be used and so on.
Second iteration –w(new) = [ 1 1 -1 ]T + [ -1 1 1 ]T . [ -1 ] = [ 2 0 -2 ]T
Third iteration – w(new) = [ 2 0 -2]T + [ 1 -1 1 ]T . [ -1 ] = [ 1 1 -3 ]T
Fourth iteration – w(new) = [ 1 1 -3]T + [ 1 1 1 ]T . [ 1 ] = [ 2 2 -2 ]T
So, the final weight matrix is [ 2 2 -2 ]T
Cont…
• Testing the network :
The network with the final weights
Cont…
• For x1 = -1, x2 = -1, b = 1, Y = (-1)(2) + (-1)(2) + (1)(-2) = -6
• For x1 = -1, x2 = 1, b = 1, Y = (-1)(2) + (1)(2) + (1)(-2) = -2
• For x1 = 1, x2 = -1, b = 1, Y = (1)(2) + (-1)(2) + (1)(-2) = -2
• For x1 = 1, x2 = 1, b = 1, Y = (1)(2) + (1)(2) + (1)(-2) = 2
• The results are all consistent with the original truth table.
• Decision Boundary :
• 2x1 + 2x2 – 2b = y
• Replacing y with 0: 2x1 + 2x2 – 2b = 0
• Since the bias input b = 1: 2x1 + 2x2 – 2(1) = 0
• 2( x1 + x2 ) = 2, so the final equation is x2 = -x1 + 1
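The testing step and the decision boundary can be checked with a few lines of Python (a sketch using the final weights [ 2 2 -2 ]T from the worked example; the helper name is mine):

```python
# Net input of the trained network: Y = 2*x1 + 2*x2 + (-2)*b, with bias input b = 1.
def net(x1, x2, w=(2, 2, -2)):
    return w[0] * x1 + w[1] * x2 + w[2] * 1

# The sign of Y matches the bipolar AND target for every input pattern:
# net(-1,-1) = -6, net(-1,1) = -2, net(1,-1) = -2, net(1,1) = 2.
for x1, x2, t in [(-1, -1, -1), (-1, 1, -1), (1, -1, -1), (1, 1, 1)]:
    assert (net(x1, x2) > 0) == (t > 0)
```

Setting Y = 0 gives exactly the decision line x2 = -x1 + 1, which separates the target +1 point (1, 1) from the three target -1 points.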
Cont…
EM Algorithm in Machine Learning
Expectation-Maximization algorithm for clustering
• The EM algorithm is a latent-variable method for finding local maximum-likelihood parameters of a
statistical model; it was proposed by Arthur Dempster, Nan Laird, and Donald Rubin in 1977
• The EM (Expectation-Maximization) algorithm is one of the most commonly used methods in machine learning
for obtaining maximum likelihood estimates when some variables are observable and some are not
• It is also applicable to unobserved data, sometimes called latent data
• It has various real-world applications in statistics, including obtaining the mode of the posterior marginal
distribution of parameters in machine learning and data mining applications.
• In most real-life applications of machine learning, many relevant features are available, but only a few of
them are observable; the rest are unobservable. If a variable is observable, its value can be predicted
directly from the instances.
• For variables that are latent, i.e. not directly observable, the Expectation-Maximization (EM) algorithm
plays a vital role in predicting their values, on the condition that the general form of the probability
distribution governing those latent variables is known to us
Cont…
What is an EM algorithm?
• The Expectation-Maximization (EM) algorithm is defined as a combination of unsupervised
machine learning steps, used to determine local maximum likelihood estimates
(MLE) or maximum a posteriori (MAP) estimates for unobservable variables in statistical models
• It is a technique for finding maximum likelihood estimates when latent variables are present. It is also
referred to as a latent variable model.
• A latent variable model consists of both observable and unobservable variables, where the observable ones can be
predicted while the unobserved ones are inferred from the observed variables. These unobservable variables are known
as latent variables.
Key Points:
o It is known as a latent variable model used to determine MLE and MAP parameters for latent variables.
o It is used to predict parameter values in instances where data is missing or unobservable, and
this is repeated until the values converge.
Cont…
• EM Algorithm
• The EM algorithm is used in combination with various unsupervised ML algorithms, such as the k-means
clustering algorithm. Being an iterative approach, it alternates between two steps:
o Expectation step (E-step): estimate (guess) all missing values in the dataset, so that
after completing this step there are no missing values left.
o Maximization step (M-step): use the data estimated in the E-step to update the
parameters.
o Repeat the E-step and M-step until the values converge.
• The primary goal of the EM algorithm is to use the available observed data of the dataset to estimate the
missing data of the latent variables and then use that data to update the values of the parameters in the M-step.
Cont…
What is Convergence in the EM algorithm?
Convergence here is the intuitive notion from probability: if two random variables have only a very small
difference in their probability, they are said to have converged. In other words, whenever the values of the
given variables match each other, convergence has occurred.
Cont…
• Steps in EM Algorithm
• Initialization Step, Expectation Step, Maximization Step, and convergence Step
Cont…
o 1st Step: The very first step is to initialize the parameter values. The system is then provided with
incomplete observed data, with the assumption that the data comes from a specific model.
o 2nd Step: This step, known as the Expectation or E-step, estimates (guesses) the values of the
missing or incomplete data using the observed data. The E-step primarily updates the variables.
o 3rd Step: This step, known as the Maximization or M-step, uses the complete data obtained from the
2nd step to update the parameter values. The M-step primarily updates the hypothesis.
o 4th Step: The last step checks whether the values of the latent variables are converging. If yes,
stop the process; otherwise, repeat from step 2 until convergence occurs.
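As a toy illustration of the four steps (my own example, not from the slides), EM can estimate the mean of a dataset in which some entries are missing: the E-step fills the missing entries with the current guess, and the M-step recomputes the mean:

```python
# Toy EM: estimate the mean of data with missing entries.
def em_mean(observed, n_missing, iters=100, tol=1e-9):
    mu = 0.0                                    # 1st step: initialize the parameter
    for _ in range(iters):
        filled = observed + [mu] * n_missing    # 2nd step (E): guess the missing data
        new_mu = sum(filled) / len(filled)      # 3rd step (M): update the parameter
        if abs(new_mu - mu) < tol:              # 4th step: check convergence
            return new_mu
        mu = new_mu
    return mu

# With observed [1, 2, 3] and one missing value, the iterates converge to 2.0,
# the mean of the observed data.
```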
Cont…
Gaussian Mixture Model (GMM)
• The Gaussian Mixture Model (GMM) is defined as a mixture model that combines several probability
distributions whose parameters are unspecified
• GMM requires estimated statistics such as the mean and standard deviation, i.e. its parameters. It is used to
estimate the parameters of the probability distributions that best fit the density of a given training dataset
• Although there are plenty of techniques available to estimate the parameters of a Gaussian Mixture Model
(GMM), Maximum Likelihood Estimation is one of the most popular among them.
• E.g., it is often very difficult to discriminate which distribution a given point belongs to.
• In such cases, the Expectation-Maximization algorithm is one of the best techniques for estimating the
parameters of the Gaussian distributions. In the EM algorithm, the E-step estimates the expected value of
each latent variable, whereas the M-step optimizes the parameters using Maximum Likelihood
Estimation (MLE). This process is repeated until a good set of latent values and a maximum
likelihood fit to the data are achieved.
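The E-step/M-step alternation for a GMM can be sketched for a one-dimensional mixture of two Gaussians (illustrative only; the initialization, iteration count, and tolerance are my assumptions, and a practical implementation would work in log space for numerical stability):

```python
import math

# Minimal EM sketch for a 1D two-component Gaussian mixture.
def em_gmm_1d(data, iters=200, tol=1e-6):
    # Initialization: crude starting guesses for means, variances, mixing weights.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]

    def pdf(x, k):  # Gaussian density of component k at x
        return math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) / math.sqrt(2 * math.pi * var[k])

    prev_ll = -math.inf
    for _ in range(iters):
        # E-step: responsibility of each component for each data point.
        resp = []
        for x in data:
            p = [pi[k] * pdf(x, k) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate the parameters from the responsibilities (weighted MLE).
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk, 1e-6)
            pi[k] = nk / len(data)
        # Convergence: stop when the log-likelihood no longer improves.
        ll = sum(math.log(sum(pi[k] * pdf(x, k) for k in range(2))) for x in data)
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return mu, var, pi
```

On data drawn from two well-separated clusters the means converge to the cluster centres; production implementations (e.g. scikit-learn's GaussianMixture) additionally use log-space arithmetic and multiple random restarts to cope with local optima.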
Cont…
Applications of EM algorithm
o The EM algorithm is applicable in data clustering in machine learning.
o It is often used in computer vision and NLP (Natural language processing).
o It is used to estimate the values of parameters in mixture models such as the Gaussian Mixture Model, and in
quantitative genetics.
o It is also used in psychometrics for estimating item parameters and latent abilities of item response theory models.
o It is also applicable in the medical and healthcare industry, such as in image reconstruction and structural
engineering.
o It is used to determine the Gaussian density of a function.
Cont…
Advantages of EM algorithm
o The first two basic steps of the EM algorithm, the E-step and the M-step, are very easy to implement
for various machine learning problems.
o The likelihood is, in most cases, guaranteed to increase after each iteration.
o The M-step often has a closed-form solution.
Disadvantages of EM algorithm
o The convergence of the EM algorithm can be very slow.
o It converges only to a local optimum.
o It takes both forward and backward probabilities into account, in contrast to numerical optimization,
which uses only forward probabilities.
Thank You
More light, sound and love to you all

More Related Content

Similar to machine learning for engineering students

Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesMohammed Bennamoun
 
Electricity Demand Forecasting Using ANN
Electricity Demand Forecasting Using ANNElectricity Demand Forecasting Using ANN
Electricity Demand Forecasting Using ANNNaren Chandra Kattla
 
CI_GA_module2_ABC_updatedG.ppt .
CI_GA_module2_ABC_updatedG.ppt           .CI_GA_module2_ABC_updatedG.ppt           .
CI_GA_module2_ABC_updatedG.ppt .Athar739197
 
Introduction to Perceptron and Neural Network.pptx
Introduction to Perceptron and Neural Network.pptxIntroduction to Perceptron and Neural Network.pptx
Introduction to Perceptron and Neural Network.pptxPoonam60376
 
Unit 3 – AIML.pptx
Unit 3 – AIML.pptxUnit 3 – AIML.pptx
Unit 3 – AIML.pptxhiblooms
 
Neural Networks Lec3.pptx
Neural Networks Lec3.pptxNeural Networks Lec3.pptx
Neural Networks Lec3.pptxmoah92926
 
Neural-Networks.ppt
Neural-Networks.pptNeural-Networks.ppt
Neural-Networks.pptRINUSATHYAN
 
Deep Learning Sample Class (Jon Lederman)
Deep Learning Sample Class (Jon Lederman)Deep Learning Sample Class (Jon Lederman)
Deep Learning Sample Class (Jon Lederman)Jon Lederman
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Randa Elanwar
 
Associative_Memory_Neural_Networks_pptx.pptx
Associative_Memory_Neural_Networks_pptx.pptxAssociative_Memory_Neural_Networks_pptx.pptx
Associative_Memory_Neural_Networks_pptx.pptxdgfsdf1
 
20 bayes learning
20 bayes learning20 bayes learning
20 bayes learningTianlu Wang
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyayabhishek upadhyay
 
Cognitive Science Unit 4
Cognitive Science Unit 4Cognitive Science Unit 4
Cognitive Science Unit 4CSITSansar
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3Nandhini S
 
International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)irjes
 
EssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfEssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfAnkita Tiwari
 

Similar to machine learning for engineering students (20)

Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rules
 
Electricity Demand Forecasting Using ANN
Electricity Demand Forecasting Using ANNElectricity Demand Forecasting Using ANN
Electricity Demand Forecasting Using ANN
 
Lec 6-bp
Lec 6-bpLec 6-bp
Lec 6-bp
 
CI_GA_module2_ABC_updatedG.ppt .
CI_GA_module2_ABC_updatedG.ppt           .CI_GA_module2_ABC_updatedG.ppt           .
CI_GA_module2_ABC_updatedG.ppt .
 
Introduction to Perceptron and Neural Network.pptx
Introduction to Perceptron and Neural Network.pptxIntroduction to Perceptron and Neural Network.pptx
Introduction to Perceptron and Neural Network.pptx
 
Unit 3 – AIML.pptx
Unit 3 – AIML.pptxUnit 3 – AIML.pptx
Unit 3 – AIML.pptx
 
Neural Networks Lec3.pptx
Neural Networks Lec3.pptxNeural Networks Lec3.pptx
Neural Networks Lec3.pptx
 
Neural-Networks.ppt
Neural-Networks.pptNeural-Networks.ppt
Neural-Networks.ppt
 
Perceptron in ANN
Perceptron in ANNPerceptron in ANN
Perceptron in ANN
 
Deep Learning Sample Class (Jon Lederman)
Deep Learning Sample Class (Jon Lederman)Deep Learning Sample Class (Jon Lederman)
Deep Learning Sample Class (Jon Lederman)
 
Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9Introduction to Neural networks (under graduate course) Lecture 9 of 9
Introduction to Neural networks (under graduate course) Lecture 9 of 9
 
Unit iii update
Unit iii updateUnit iii update
Unit iii update
 
Associative_Memory_Neural_Networks_pptx.pptx
Associative_Memory_Neural_Networks_pptx.pptxAssociative_Memory_Neural_Networks_pptx.pptx
Associative_Memory_Neural_Networks_pptx.pptx
 
20 bayes learning
20 bayes learning20 bayes learning
20 bayes learning
 
nural network ER. Abhishek k. upadhyay
nural network ER. Abhishek  k. upadhyaynural network ER. Abhishek  k. upadhyay
nural network ER. Abhishek k. upadhyay
 
Cognitive Science Unit 4
Cognitive Science Unit 4Cognitive Science Unit 4
Cognitive Science Unit 4
 
CSA 3702 machine learning module 3
CSA 3702 machine learning module 3CSA 3702 machine learning module 3
CSA 3702 machine learning module 3
 
SoftComputing6
SoftComputing6SoftComputing6
SoftComputing6
 
International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)International Refereed Journal of Engineering and Science (IRJES)
International Refereed Journal of Engineering and Science (IRJES)
 
EssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfEssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdf
 

Recently uploaded

Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 

Recently uploaded (20)

Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 

machine learning for engineering students

  • 1. ECCDO601 Machine Learning Semester 6/III year/2024/4 What you vibrate, you Become Dr. Kiruthika. B M.E.,Ph.D., Assistant Professor-ECS-ACE
  • 2. Hebbian Learning rule • The simplest neural network (threshold neuron) lacks the capability of learning, which is its major drawback • In the book “The Organisation of Behaviour”, Donald O. Hebb proposed a mechanism to update weights between neurons in a neural network • This method of weight updation enabled neurons to learn and was named as Hebbian Learning. • Three major points were stated as a part of this learning mechanism :  Information is stored in the connections between neurons in neural networks, in the form of weights.  Weight change between neurons is proportional to the product of activation values for neurons. As learning takes place, simultaneous or repeated activation of weakly connected neurons incrementally changes the strength and pattern of weights, leading to stronger connections
  • 3. Cont… Neuron Assembly Theory • The repeated stimulus of weak connections between neurons leads to their incremental strengthening. • The new weights are calculated by the equation : Inhibitory Connections This is another kind of connection, that have an opposite response to a stimulus. Here, the connection strength decreases with repeated or simultaneous stimuli.
  • 4. Cont… • Implementation of Hebbian Learning in a Perceptron • Frank Rosenblatt in 1950, inferred that threshold neuron cannot be used for modeling cognition as it cannot learn or adopt from the environment or develop capabilities for classification, recognition or similar capabilities. A perceptron draws inspiration from a biological visual neural model with three layers illustrated as follows :  Input Layer is synonymous to sensory cells in the retina, with random connections to neurons of the succeeding layer.  Association layers have threshold neurons with bi-directional connections to the response layer.  Response layer has threshold neurons that are interconnected with each other for competitive inhibitory signaling.
  • 6. Cont… • Response layer neurons compete with each other by sending inhibitory signals to produce output. • Threshold functions are set at the origin for the association and response layers. This forms the basis of learning between these layers. • The goal of the perception is to activate correct response neurons for each input pattern. • Hebbian Learning is inspired by the biological neural weight adjustment mechanism. It describes the method to convert a neuron an inability to learn and enables it to develop cognition with response to external stimuli. These concepts are still the basis for neural learning today. • Hebbian Learning Rule, also known as Hebb Learning Rule, was proposed by Donald O Hebb. It is one of the first and also easiest learning rules in the neural network. It is used for pattern classification. It is a single layer neural network, i.e. it has one input layer and one output layer. The input layer can have many units, say n. The output layer only has one unit. Hebbian rule works by updating the weights between neurons in the neural network for each training sample.
  • 7. Cont… Hebbian Learning Rule Algorithm : 1. Set all weights to zero, wi = 0 for i=1 to n, and bias to zero. 2. For each input vector, S(input vector) : t(target output pair), repeat steps 3-5. 3. Set activations for input units with the input vector Xi = Si for i = 1 to n. 4. Set the corresponding output value to the output neuron, i.e. y = t. 5. Update weight and bias by applying Hebb rule for all i = 1 to n:
  • 8. Cont… Implementing AND Gate : Truth Table of AND Gate using bipolar sigmoidal function There are 4 training samples, so there will be 4 iterations. Also, the activation function used here is Bipolar Sigmoidal Function so the range is [-1,1].
  • 9. Cont… • Step 1 : Set weight and bias to zero, w = [ 0 0 0 ]T and b = 0. • Step 2 : Set input vector Xi = Si for i = 1 to 4. X1 = [ -1 -1 1 ]T X2 = [ -1 1 1 ]T X3 = [ 1 -1 1 ]T X4 = [ 1 1 1 ]T • Step 3 : Output value is set to y = t.
  • 10. Cont… • Step 4 : Modifying weights using Hebbian Rule: First iteration – w(new) = w(old) + x1y1 = [ 0 0 0 ]T + [ -1 -1 1 ]T . [ -1 ] = [ 1 1 -1 ]T For the second iteration, the final weight of the first one will be used and so on. Second iteration –w(new) = [ 1 1 -1 ]T + [ -1 1 1 ]T . [ -1 ] = [ 2 0 -2 ]T Third iteration – w(new) = [ 2 0 -2]T + [ 1 -1 1 ]T . [ -1 ] = [ 1 1 -3 ]T Fourth iteration – w(new) = [ 1 1 -3]T + [ 1 1 1 ]T . [ 1 ] = [ 2 2 -2 ]T So, the final weight matrix is [ 2 2 -2 ]T
• 11. Cont…
• Testing the network: the network is evaluated with the final weights.
• 12. Cont…
• For x1 = -1, x2 = -1, b = 1: Y = (-1)(2) + (-1)(2) + (1)(-2) = -6
• For x1 = -1, x2 = 1, b = 1: Y = (-1)(2) + (1)(2) + (1)(-2) = -2
• For x1 = 1, x2 = -1, b = 1: Y = (1)(2) + (-1)(2) + (1)(-2) = -2
• For x1 = 1, x2 = 1, b = 1: Y = (1)(2) + (1)(2) + (1)(-2) = 2
• The sign of each result matches the target in the original truth table.
• Decision boundary:
2x1 + 2x2 - 2b = y
Setting y to 0: 2x1 + 2x2 - 2b = 0
Since the bias input b = 1: 2x1 + 2x2 - 2(1) = 0, i.e. 2( x1 + x2 ) = 2
The final equation: x2 = -x1 + 1
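The test pass above can be reproduced mechanically. This snippet is illustrative (the names are choices made here, not from the slides): it evaluates Y = 2·x1 + 2·x2 - 2 for the four bipolar inputs and checks that the sign of each net input agrees with the AND target.

```python
# Re-checking the trained AND network: Y = 2*x1 + 2*x2 - 2, bias input = 1.
WEIGHTS = (2, 2, -2)  # final weights from the fourth iteration

def net_input(x1, x2):
    w1, w2, wb = WEIGHTS
    return w1 * x1 + w2 * x2 + wb * 1  # the bias input is always 1

# The four inputs and bipolar AND targets from the truth table.
cases = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
outputs = [net_input(x1, x2) for (x1, x2), _ in cases]
# outputs is [-6, -2, -2, 2]; each sign matches the corresponding target.
```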
• 14. EM Algorithm in Machine Learning Expectation-Maximization algorithm for clustering
• The EM algorithm is a latent-variable method for finding local maximum likelihood estimates of the parameters of a statistical model. It was proposed by Arthur Dempster, Nan Laird, and Donald Rubin in 1977.
• The EM (Expectation-Maximization) algorithm is one of the most commonly used methods in machine learning for obtaining maximum likelihood estimates of variables that are sometimes observable and sometimes not.
• It is also applicable to unobserved data, sometimes called latent data.
• It has various real-world applications in statistics, including obtaining the mode of the posterior marginal distribution of parameters in machine learning and data mining applications.
• In most real-life applications of machine learning, many relevant features are available, but only a few of them are observable while the rest are unobservable. For observable variables, values can be predicted directly from the instances.
• For variables that are latent, i.e. not directly observable, the Expectation-Maximization (EM) algorithm plays a vital role in predicting their values, provided the general form of the probability distribution governing those latent variables is known.
• 15. Cont…
What is an EM algorithm?
• The Expectation-Maximization (EM) algorithm combines various unsupervised machine learning techniques to determine local maximum likelihood estimates (MLE) or maximum a posteriori (MAP) estimates for unobservable variables in statistical models.
• It is a technique for finding maximum likelihood estimates when latent variables are present, and is also referred to as a latent variable model.
• A latent variable model consists of both observable and unobservable variables: observable variables can be predicted directly, while unobserved ones are inferred from the observed variables. These unobservable variables are known as latent variables.
Key Points:
o It is known as a latent variable model used to determine MLE and MAP estimates for latent variables.
o It is used to predict values of parameters in instances where data is missing or unobservable, and this is repeated until the values converge.
• 16. Cont…
• EM Algorithm
• The EM algorithm combines ideas from various unsupervised ML algorithms, such as the k-means clustering algorithm. Being an iterative approach, it alternates between two modes:
o Expectation step (E-step): Estimate (guess) all missing values in the dataset, so that after completing this step there are no missing values.
o Maximization step (M-step): Use the data estimated in the E-step to update the parameters.
o Repeat the E-step and M-step until the values converge.
• The primary goal of the EM algorithm is to use the available observed data to estimate the missing data of the latent variables, and then use that data to update the values of the parameters in the M-step.
• 17. Cont… What is Convergence in the EM algorithm? Convergence is defined intuitively in terms of probability: if two successive estimates of a random variable differ by only a very small amount, they are said to have converged. In other words, whenever the values of the given variables effectively stop changing between iterations, this is called convergence.
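A convergence check of this kind is often written as a small helper that compares successive parameter estimates against a tolerance. This is an illustrative sketch, not from the slides; the function name and tolerance are assumptions.

```python
# Hypothetical convergence test: two parameter vectors are "converged"
# when every component differs by less than a small tolerance.
def converged(old, new, tol=1e-6):
    return all(abs(a - b) < tol for a, b in zip(old, new))
```

An EM loop would typically call this on the parameter estimates from two consecutive iterations and stop when it returns True.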
• 18. Cont…
• Steps in the EM Algorithm
• Initialization Step, Expectation Step, Maximization Step, and Convergence Step
• 19. Cont…
o 1st Step: Initialize the parameter values. The system is provided with incomplete observed data, under the assumption that the data comes from a specific model.
o 2nd Step: The Expectation or E-step, which estimates (guesses) the values of the missing or incomplete data using the observed data. The E-step primarily updates the variables.
o 3rd Step: The Maximization or M-step, where the complete data obtained from the 2nd step is used to update the parameter values. The M-step primarily updates the hypothesis.
o 4th Step: Check whether the values of the latent variables are converging. If they are, stop the process; otherwise, repeat from step 2 until convergence occurs.
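The four steps above can be sketched as a minimal one-dimensional EM that separates data drawn from two Gaussians. This is an illustrative toy, not from the slides: the fixed unit variance, the data, and the starting means are all assumptions, and only the component means are re-estimated.

```python
import math

def pdf(x, mu, sigma):
    # Gaussian density N(x; mu, sigma^2)
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(data, mu1, mu2, sigma=1.0, iters=50):
    # Step 1 (initialization) is the caller supplying mu1, mu2.
    for _ in range(iters):                      # step 4: repeat until (near) convergence
        # Step 2 (E-step): responsibility of component 1 for each point
        r = [pdf(x, mu1, sigma) / (pdf(x, mu1, sigma) + pdf(x, mu2, sigma))
             for x in data]
        # Step 3 (M-step): re-estimate each mean from the responsibility-weighted data
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return mu1, mu2

# Points clustered near 0 and near 5; EM recovers means close to 0 and 5.
mu1, mu2 = em_two_gaussians([-0.2, 0.1, 0.0, 4.9, 5.1, 5.0], 1.0, 4.0)
```

For simplicity this sketch runs a fixed number of iterations rather than testing convergence explicitly; a fuller version would also re-estimate the variances and mixing weights.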
• 20. Cont…
Gaussian Mixture Model (GMM)
• The Gaussian Mixture Model (GMM) is a mixture model that combines several probability distributions whose parameters are unspecified.
• A GMM requires estimates of statistics such as the mean and standard deviation of each component. These parameters of the probability distributions are estimated to best fit the density of a given training dataset.
• Although there are plenty of techniques available to estimate the parameters of a Gaussian Mixture Model (GMM), Maximum Likelihood Estimation is one of the most popular among them.
• For example, when data points come from overlapping distributions, it is very difficult to tell which distribution a given point belongs to.
• In such cases, the Expectation-Maximization algorithm is one of the best techniques for estimating the parameters of the Gaussian distributions. In the EM algorithm, the E-step estimates the expected value of each latent variable, whereas the M-step optimizes the parameters using Maximum Likelihood Estimation (MLE). This process is repeated until a good set of latent values and a maximum likelihood fit to the data are achieved.
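The MLE building block used in a GMM's M-step can be shown for a single Gaussian: the maximum likelihood estimate of the mean is the sample mean, and of the variance the (biased, divide-by-n) sample variance. This is a minimal sketch, not from the slides; the function name is an assumption.

```python
import math

# MLE for one Gaussian component: these are exactly the estimates an
# M-step would produce for a component whose responsibilities are all 1.
def gaussian_mle(data):
    n = len(data)
    mean = sum(data) / n                              # MLE of the mean
    var = sum((x - mean) ** 2 for x in data) / n      # biased MLE of the variance
    return mean, math.sqrt(var)

mean, std = gaussian_mle([1.0, 2.0, 3.0])
```

In a full GMM M-step, the same formulas are applied with each point weighted by the responsibility the E-step assigned to that component.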
• 21. Cont…
Applications of the EM algorithm
o The EM algorithm is applicable to data clustering in machine learning.
o It is often used in computer vision and NLP (Natural Language Processing).
o It is used to estimate parameters in mixed models such as the Gaussian Mixture Model, and in quantitative genetics.
o It is also used in psychometrics for estimating item parameters and latent abilities of item response theory models.
o It is also applicable in the medical and healthcare industry, such as in image reconstruction, and in structural engineering.
o It is used to determine the Gaussian density of a function.
• 22. Cont…
Advantages of the EM algorithm
o The two basic steps of the EM algorithm, the E-step and M-step, are very easy to implement for many machine learning problems.
o The likelihood is guaranteed not to decrease after each iteration.
o The M-step often has a closed-form solution.
Disadvantages of the EM algorithm
o The convergence of the EM algorithm is very slow.
o It converges only to a local optimum.
o It takes both forward and backward probabilities into consideration, in contrast to numerical optimization, which takes only forward probabilities.
  • 23. Thank You More light, sound and love to you all