Adaline Neural Networks
Roll No: 711 to 720
Introduction
● In supervised learning, the network's output is compared with the target output.
● On the basis of the error signal, the weights are adjusted until the actual output matches the desired output.
● The Adaptive Linear Neuron (Adaline) is a single-layer computational unit that uses a bipolar activation function.
● Adaline neurons are trained using the Delta rule, also known as the Least Mean Square (LMS) rule.
Adaline versus Perceptron
The Adaline learns on every training example, while the perceptron learns only after errors; as a result, the Adaline will typically find a solution faster than the perceptron for the same problem.
ADALINE NEURAL NETWORK ARCHITECTURE
ADALINE NEURAL NETWORK ARCHITECTURE
COMPONENTS
● Input Layer:
○ Receives the input data and passes it to the next layer.
○ Each input feature is represented by a separate node.
● Weighted Sum Function:
○ Calculates the dot product of the input features and their corresponding weights.
○ Represented mathematically as ∑(x_i · w_i),
where x_i is the i-th input feature, w_i is the i-th weight, and ∑ denotes summation.
● Activation Function:
○ Takes the output of the weighted sum function and produces the network's output.
○ Adaline uses a linear (identity) activation function.
● Bias Node:
○ An additional input node that always has the value 1.
○ It is multiplied by a weight (the bias weight) and added to the weighted sum of the input features.
○ The bias allows the output to be shifted vertically, which is useful for classification tasks.
● Error Function:
○ Used to evaluate how well the network is performing.
○ The goal of training the network is to minimize the error function.
● Gradient Descent:
○ An optimization algorithm used to train the Adaline network.
○ Adjusts the weights and the bias weight based on the derivative of the error function with respect to the weights.
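Concretely, for one training pair with target t and net input y_in, the squared-error function and its gradient with respect to each weight are:

E = ½ (t − y_in)²
∂E/∂w_i = −(t − y_in) x_i

Moving each weight opposite the gradient, scaled by the learning rate α, gives the update w_i ← w_i + α (t − y_in) x_i, which is exactly the Delta rule introduced next.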
Delta Rule (A learning rule)
● A learning rule is a mathematical procedure by which a neural network learns from its current state and improves its performance.
● It is an iterative procedure.
● The Delta rule is derived from the gradient-descent method and can be generalized to more than one layer.
● The gradient-descent approach converges asymptotically toward the solution, in principle continuing forever, whereas the perceptron learning rule stops after a finite number of learning steps.
● The major aim is to minimize the error over all training patterns.
Delta Rule
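The original slide presented the rule as an image; in the notation of the training algorithm below, the update for each weight and the bias is:

w_i(new) = w_i(old) + α (t − y_in) x_i
b(new) = b(old) + α (t − y_in)

where α is the learning rate, t the target output, y_in the net input, and x_i the i-th input.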
Training Algorithm
Step 0: Set the weights and bias to small random values (not zero). Set the learning rate parameter α.
Step 1: Perform Steps 2-6 while the stopping condition is false.
Step 2: Perform Steps 3-5 for each bipolar training pair s:t.
Step 3: Set activations for the input units, i = 1 to n:
x_i = s_i
Step 4: Calculate the net input to the output unit:
y_in = b + ∑ x_i w_i
Step 5: Update the weights and bias for i = 1 to n:
w_i(new) = w_i(old) + α(t − y_in) x_i
b(new) = b(old) + α(t − y_in)
Step 6: Test the stopping condition: training can stop when the error is small enough, when it no longer changes much with further training, or when a maximum number of epochs has been reached.
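A minimal Python sketch of this training loop (the function and variable names, random seed, and tolerance-based stopping test are illustrative, not from the slides):

```python
import numpy as np

def train_adaline(X, t, alpha=0.1, max_epochs=100, tol=1e-4):
    """Train a single Adaline unit with the Delta (LMS) rule.

    X: (n_samples, n_features) bipolar inputs; t: (n_samples,) bipolar targets.
    """
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, size=X.shape[1])   # Step 0: small random weights
    b = rng.uniform(-0.5, 0.5)                    # ...and bias, not zero

    for _ in range(max_epochs):                   # Step 1
        max_change = 0.0
        for x_i, t_i in zip(X, t):                # Steps 2-3: each training pair
            y_in = b + np.dot(w, x_i)             # Step 4: net input
            err = t_i - y_in
            w = w + alpha * err * x_i             # Step 5: Delta-rule updates
            b = b + alpha * err
            max_change = max(max_change, abs(alpha * err))
        if max_change < tol:                      # Step 6: stopping condition
            break
    return w, b

# Example: the bipolar AND NOT function (x1 AND NOT x2), used again in testing
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([-1, 1, -1, -1])
w, b = train_adaline(X, t)
```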
Overall Workflow
1. Initialize the weights
2. Input data
3. Activation function
4. Prediction
5. Error calculation
6. Weight update
7. Repeat steps 2-6 for multiple epochs or until the error is below a certain threshold.
8. Evaluate performance
9. Deploy
Learning rate
● When we train neural networks, we use a weight update rule.
● At each iteration, we use back-propagation to calculate the derivative of the loss function with respect to each weight and subtract it, scaled by the learning rate, from that weight.
● The learning rate determines how quickly or how slowly the weight (parameter) values are updated.
● The learning rate should be high enough that convergence does not take too long, yet low enough that the minimum is actually found rather than overshot.
Testing Algorithm
● Used to classify input patterns.
● A bipolar step function is applied to the net input to test the performance of the network.
Implement AND NOT function using Adaline Network
Testing Procedure for Adaline network:
● Step 1: Initialize the weights (obtained from the training algorithm).
● Step 2: Perform Steps 3-5 for each bipolar input vector x.
● Step 3: Set the activations of the input units to x.
● Step 4: Calculate the net input to the output unit.
● Step 5: Apply the activation function over the net input calculated.
Bipolar Activation Function
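The slide showed the function as an image; the standard bipolar step used with Adaline is:

f(y_in) = +1 if y_in ≥ 0
f(y_in) = −1 if y_in < 0

A short sketch of the testing procedure applied to AND NOT (the weights below are illustrative values that realize x1 AND NOT x2; in practice they come from the training algorithm):

```python
import numpy as np

def adaline_test(X, w, b):
    """Steps 4-5 of the testing procedure: net input, then bipolar step."""
    y_in = b + X @ w                    # Step 4: net input to the output unit
    return np.where(y_in >= 0, 1, -1)   # Step 5: bipolar activation function

# Illustrative weights for x1 AND NOT x2 (assumed, not from the slides)
w = np.array([1.0, -1.0])
b = -1.0
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
print(adaline_test(X, w, b))  # [-1  1 -1 -1], the bipolar AND NOT targets
```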
Applications:
The main application of the ADALINE was in adaptive filtering and adaptive signal processing.
● Adaptive filters and adaptive signal processing:
○ The adaptive filter (shown as a figure in the original slide) takes an input signal and produces an output signal; the desired response is supplied during training.
○ The filtered output is a linear combination of the current and past input signal samples.
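In symbols, with x(k) the input sample at time k and w_0 … w_n the filter weights, this linear combination is:

y(k) = w_0 x(k) + w_1 x(k−1) + … + w_n x(k−n)

(the standard adaptive linear combiner form; the exact notation is assumed here, since the original figure is not reproduced).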
● Adaptive signal processing examples
○ System Modelling
○ Statistical prediction
○ Noise Cancelling
○ Inverse Modelling
○ Adaptive echo cancellation
○ Channel equalization
● Adaptive pattern recognition
○ The aim is to devise a neural net configuration that can be trained to classify a given set of training patterns as required.
○ The adaptive threshold element can be used for pattern recognition and as a trainable logic device.
Practical Implementation using Python - Overview
● Applying an Adaline neural network to the Breast Cancer dataset.
● We will be:
○ Creating fit() and predict() methods for training and testing the Adaline NN model
○ Training the model using label "0" as the negative class and label "1" as the positive class
○ Calculating the accuracy of the model
● This is a binary classification example: distinguishing between malignant and benign tumors.
Practical Implementation using Python
Adaline Class - fit() method
Adaline Class - predict() method
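The slides showed these two methods as code screenshots that did not survive extraction. A minimal sketch of what they may have looked like, assuming batch gradient descent and a 0.5 decision threshold (the class layout, hyperparameters, and seed are illustrative):

```python
import numpy as np

class Adaline:
    """Minimal Adaline classifier for labels 0 (negative) and 1 (positive)."""

    def __init__(self, alpha=0.001, epochs=50):
        self.alpha = alpha      # learning rate
        self.epochs = epochs    # passes over the training set

    def fit(self, X, y):
        rng = np.random.default_rng(1)
        # w_[0] is the bias weight; the rest pair with the input features.
        self.w_ = rng.normal(scale=0.01, size=X.shape[1] + 1)
        for _ in range(self.epochs):
            y_in = self.w_[0] + X @ self.w_[1:]        # net input (linear activation)
            errors = y - y_in                          # Delta/LMS error per sample
            self.w_[1:] += self.alpha * X.T @ errors   # batch gradient-descent step
            self.w_[0] += self.alpha * errors.sum()
        return self

    def predict(self, X):
        y_in = self.w_[0] + X @ self.w_[1:]
        # Threshold halfway between the 0/1 labels (an assumption of this sketch).
        return np.where(y_in >= 0.5, 1, 0)
```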
Practical Implementation using Python
Import necessary libraries
Load the dataset and split into training and testing sets
Practical Implementation using Python
Scaling the data
Training the model
Testing the model - Accuracy: 0.9649
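These steps were also shown as screenshots; a plausible reconstruction using scikit-learn and the Adaline class sketched above (the 80/20 split, random seed, and hyperparameters are assumptions; the original slides reported an accuracy of 0.9649):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Import necessary libraries, then load the dataset and split it
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Scale the data; Adaline's gradient steps are sensitive to feature scale
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train the model, then test it on the held-out set
model = Adaline(alpha=0.001, epochs=50).fit(X_train, y_train)
print("Accuracy: %.4f" % accuracy_score(y_test, model.predict(X_test)))
```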
That’s all folks!

Editor's Notes

1. Activation Function: Adaline uses the identity (linear) activation, which means that the output is simply the weighted sum of the input features.
2. Stopping criteria: The training can stop when the error is small enough or does not change much with further training, or when a maximum number of epochs has been reached.
3. Gradient descent is a commonly used optimization algorithm in the training of artificial neural networks, including the Adaline (Adaptive Linear Neuron) network. In Adaline, the predicted output is the dot product of the input vector and the weight vector, and the weights are adjusted based on the difference between the predicted output and the actual output. The objective of training is to minimize this error. In each iteration, gradient descent computes the gradient of the error function with respect to the weights and moves the weights in the direction of the negative gradient:
w := w − α * ∇E
where w is the weight vector, α is the learning rate (a hyperparameter that controls the step size of the weight update), and ∇E is the gradient of the error function with respect to the weights. The gradient can be computed using the chain rule of calculus: the derivative of the error with respect to the output times the derivative of the output with respect to the weights. Training continues until the error is minimized or a maximum number of iterations is reached; during training, the learning rate is often adjusted to improve the convergence rate and prevent oscillations.