Data mining
Assignment week 5




BARRY KOLLEE

10349863
Exercise 1: Perceptrons

1.1 What is the function of the learning rate in the perceptron training rule?

Within a perceptron, each input is multiplied by a weight when we calculate the output, and training adjusts these weights based on the difference between the target and the actual output value. The weighted sum is the value that we compare against a certain threshold (i.e. ‘do we play tennis; yes or no?’).

The learning rate defines the extent of each weight adjustment: in the training rule Δwi = n (t – o) * xi it scales how strongly the difference between the target and output value changes the weights. A small learning rate gives cautious, stable updates; a large learning rate gives faster updates that may overshoot.
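The rule can be sketched in a few lines of Python (the function name and example numbers below are my own, not part of the assignment):

```python
def perceptron_update(weights, inputs, target, output, learning_rate):
    """Perceptron training rule: w_i <- w_i + n * (t - o) * x_i.

    The learning rate scales the correction: a small value gives
    cautious updates, a large value gives big steps that may overshoot.
    """
    return [w + learning_rate * (target - output) * x
            for w, x in zip(weights, inputs)]

# inputs[0] is the constant bias input x0 = 1
updated = perceptron_update([0.4, 0.8], [1.0, 1.0],
                            target=1, output=-1, learning_rate=0.1)
# each weight grows by 0.1 * (1 - (-1)) * 1.0 = 0.2
```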

1.2 What kind of Boolean functions can be modeled with perceptrons and which
Boolean functions can not be modeled and why?

A single perceptron can model every Boolean function that is linearly separable, which includes the functions we regularly see in common programming languages:
    •    AND (‘&&’)
    •    OR (‘||’)
    •    NAND (‘! &&’)
    •    NOR (‘! ||’)

The Boolean function ‘XOR’ cannot be implemented with a single perceptron. The XOR function outputs 1 only if x1 is not equal to x2 (x1 != x2)1, and its positive and negative examples cannot be separated by a single line (hyperplane), which is exactly what one perceptron does. XOR can, however, be represented by a combination of perceptrons (more than one level), because the XOR statement can be expressed using the AND, OR and NAND conditions: XOR(x1, x2) = AND( OR(x1, x2), NAND(x1, x2) ).
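As an illustration, a two-level arrangement of threshold units computes XOR from OR and NAND (the weights below are hand-picked by me, not given in the assignment):

```python
def step(z):
    """Threshold activation: fire (1) when the weighted sum exceeds 0."""
    return 1 if z > 0 else 0

def perceptron(x1, x2, w0, w1, w2):
    """A single perceptron: threshold applied to w0 + w1*x1 + w2*x2."""
    return step(w0 + w1 * x1 + w2 * x2)

# Each of these separable functions needs only one perceptron
OR   = lambda a, b: perceptron(a, b, -0.5,  1.0,  1.0)
NAND = lambda a, b: perceptron(a, b,  1.5, -1.0, -1.0)
AND  = lambda a, b: perceptron(a, b, -1.5,  1.0,  1.0)

def XOR(a, b):
    # Second level combines the first-level outputs:
    # XOR(a, b) = AND(OR(a, b), NAND(a, b))
    return AND(OR(a, b), NAND(a, b))
```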




1 Objective-C representation of x1 not equal to x2




Exercise 2: Weight Updating in Perceptrons

Assume the following set of instances with the weights w0 = 0.4 and w1 = 0.8. The threshold is 0.

What are the output values for each instance before the threshold function is applied? What is the accuracy of the model when applying the threshold function?

For calculating the output values of the instances we use the following formulas:

       Instance value   = w0 + (x1 * w1)

       Output value:

       1 if (w0 + (x1 * w1) + … + (xn * wn)) > 0
       -1 otherwise

With these formulas we can find the output value of every instance in the table. If the instance value is higher than 0, the output value is 1; otherwise we set it to -1. All formula results and output values are given below.

Instance 1 :



       Instance 1 = 0.4 + (0.8 * 1.0)
       Instance 1 = 0.4 + 0.8
       Instance 1 = 1.2

       Instance 1 > threshold
       Output value for instance 1 = 1.0



Instance 2 :



       Instance 2 = 0.4 + (0.8 * 0.5)
       Instance 2 = 0.4 + 0.4
       Instance 2 = 0.8

       Instance 2 > threshold
       Output value for instance 2 = 1.0



Instance 3:



       Instance 3 = 0.4 + (0.8 * -0.8)
       Instance 3 = 0.4 - 0.64
       Instance 3 = -0.24

       Instance 3 < threshold
       Output value for instance 3 = -1.0





Instance 4:



       Instance 4 = 0.4 + (0.8 * -0.2)
       Instance 4 = 0.4 - 0.16
       Instance 4 = 0.24

       Instance 4 > threshold
       Output value for instance 4 = 1.0



If we compare these output values with each instance's target value, we can state that we have 75 % accuracy, because ¾ of the target classes are equal to their respective output values.

       Instance    Target class    Output value
       1            1               1
       2            1               1
       3           -1              -1
       4           -1               1
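The table above can be reproduced with a short Python sketch (the variable names are my own):

```python
# (x1, target) pairs read off the exercise table, plus the given weights
instances = [(1.0, 1), (0.5, 1), (-0.8, -1), (-0.2, -1)]
w0, w1 = 0.4, 0.8

outputs = [w0 + x1 * w1 for x1, _ in instances]       # values before thresholding
predicted = [1 if o > 0 else -1 for o in outputs]     # threshold at 0
accuracy = sum(p == t for p, (_, t) in zip(predicted, instances)) / len(instances)
# predicted -> [1, 1, -1, 1]; accuracy -> 0.75
```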





Exercise 3: Gradient Descent
Consider the data in Exercise 2. Apply the gradient descent algorithm and
compute the weight updates for one iteration. You can assume the same initial
weights, and threshold as in Exercise 2. Assume that the learning rate = 0.2.

To compute the weight updates for one iteration we use the following formula, where:
    •  ‘n’ represents the learning rate
    •  ‘t’ represents the target value
    •  ‘o’ represents the output value (from the previous exercise): < 1.2, 0.8, -0.24, 0.24 >
    •  ‘xi’ represents the input value

For every instance we accumulate the weight change, and only after the full pass do we update the weights:

       for each instance {
           Δwi = Δwi + n (t – o) * xi
       }
       wi = wi + Δwi

Instance 1 (output is 1.2)


       Δw0 = Δw0   + n   ( t1 – o1 ) * x0
       Δw0 = 0     + 0.2 ( 1 – 1.2 ) * 1.0
       Δw0 = -0.04

       Δw1 = Δw1   + n   ( t1 – o1 ) * x1
       Δw1 = 0     + 0.2 ( 1 – 1.2 ) * 1.0
       Δw1 = -0.04


Instance 2 (output is 0.8)


       Δw0 = Δw0   + n   ( t2 – o2 ) * x0
       Δw0 = -0.04 + 0.2 ( 1 – 0.8 ) * 1.0
       Δw0 = 0

       Δw1 = Δw1   + n   ( t2 – o2 ) * x1
       Δw1 = -0.04 + 0.2 ( 1 – 0.8 ) * 0.5
       Δw1 = -0.02


Instance 3 (output is -0.24)


       Δw0 = Δw0   + n   ( t3 – o3 ) * x0
       Δw0 = 0     + 0.2 ( -1 – (-0.24) ) * 1.0
       Δw0 = 0     + 0.2 ( -1 + 0.24 ) * 1.0
       Δw0 = -0.152

       Δw1 = Δw1   + n   ( t3 – o3 ) * x1
       Δw1 = -0.02 + 0.2 ( -1 + 0.24 ) * ( -0.8 )
       Δw1 = 0.1016


Instance 4 (output is 0.24)


       Δw0 = Δw0    + n   ( t4 – o4 ) * x0
       Δw0 = -0.152 + 0.2 ( -1 – 0.24 ) * 1.0
       Δw0 = -0.4

       Δw1 = Δw1    + n   ( t4 – o4 ) * x1
       Δw1 = 0.1016 + 0.2 ( -1 – 0.24 ) * ( -0.2 )
       Δw1 = 0.1512




Now we do our weight updating:


       W0 = W0 + ΔW0
       W0 = 0.4 + (-0.4)
       W0 = 0

       W1 = W1 + ΔW1
       W1 = 0.8 + 0.1512
       W1 = 0.9512



Now we could perform another iteration by starting all over again…
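The whole batch iteration can be checked with a short script (variable names are mine; the unthresholded output is used as ‘o’, as in the calculations above):

```python
# Batch gradient descent, one iteration, on the Exercise 2 data
instances = [(1.0, 1), (0.5, 1), (-0.8, -1), (-0.2, -1)]  # (x1, target)
w0, w1 = 0.4, 0.8
eta = 0.2  # learning rate 'n'

# Accumulate the weight changes over the whole training set
dw0 = dw1 = 0.0
for x1, t in instances:
    o = w0 + x1 * w1            # output with the ORIGINAL weights
    dw0 += eta * (t - o) * 1.0  # bias input x0 = 1
    dw1 += eta * (t - o) * x1

# Single update after the full pass
w0, w1 = w0 + dw0, w1 + dw1
# dw0 ~ -0.4 and dw1 ~ 0.1512, so w0 ~ 0 and w1 ~ 0.9512
```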






Exercise 4: Stochastic Gradient Descent
Consider the data in Exercise 2. Apply the stochastic gradient descent algorithm and compute the weight updates for one iteration. You can assume the same initial weights, and threshold as in Exercise 2. Assume that the learning rate = 0.2.

For applying the stochastic gradient descent algorithm we use the following update rule, where:
    •    threshold = 0
    •    learning rate (‘n’) = 0.2
    •    ‘t’ is the target value and ‘o’ the output value of the current instance


        wi = wi + n (t – o) * xi


The difference from the approach used before is that we now recalculate the output value of each instance with the newest/updated weights, and we update the weights after every single instance. In the previous exercise we updated the weights only after the entire iteration.

Instance 1


       o1 = w0 + ( x1 * w1 )
       o1 = 0.4 + ( 1.0 * 0.8 )
       o1 = 1.2

       w0 = w0   + n   ( t1 – o1 ) * x0
       w0 = 0.4  + 0.2 ( 1 – 1.2 ) * 1
       w0 = 0.36

       w1 = w1   + n   ( t1 – o1 ) * x1
       w1 = 0.8  + 0.2 ( 1 – 1.2 ) * 1.0
       w1 = 0.76


Instance 2


       o2 = w0 + ( x1 * w1 )
       o2 = 0.36 + ( 0.5 * 0.76 )
       o2 = 0.74

       w0 = w0   + n   ( t2 – o2 ) * x0
       w0 = 0.36 + 0.2 ( 1 – 0.74 ) * 1
       w0 = 0.412

       w1 = w1   + n   ( t2 – o2 ) * x1
       w1 = 0.76 + 0.2 ( 1 – 0.74 ) * 0.5
       w1 = 0.786


Instance 3


       o3 = w0 + ( x1 * w1 )
       o3 = 0.412 + ( (-0.8) * 0.786 )
       o3 = -0.217

       w0 = w0    + n   ( t3 – o3 ) * x0
       w0 = 0.412 + 0.2 ( -1 + 0.217 ) * 1
       w0 = 0.255

       w1 = w1    + n   ( t3 – o3 ) * x1
       w1 = 0.786 + 0.2 ( -1 + 0.217 ) * ( -0.8 )
       w1 = 0.911




Instance 4


       o4 = w0 + ( x1 * w1 )
       o4 = 0.255 + ( (-0.2) * 0.911 )
       o4 = 0.073

       w0 = w0    + n   ( t4 – o4 ) * x0
       w0 = 0.255 + 0.2 ( -1 – 0.073 ) * 1
       w0 = 0.041

       w1 = w1    + n   ( t4 – o4 ) * x1
       w1 = 0.911 + 0.2 ( -1 – 0.073 ) * ( -0.2 )
       w1 = 0.954
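The per-instance updates can be checked with a short script (names are mine; each output is computed with the newest weights, as described above):

```python
# Stochastic gradient descent: update the weights after every instance
instances = [(1.0, 1), (0.5, 1), (-0.8, -1), (-0.2, -1)]  # (x1, target)
w0, w1 = 0.4, 0.8
eta = 0.2  # learning rate 'n'

for x1, t in instances:
    o = w0 + x1 * w1           # output with the CURRENT (newest) weights
    w0 += eta * (t - o) * 1.0  # bias input x0 = 1
    w1 += eta * (t - o) * x1

# the weights end near (0.041, 0.954), matching the hand calculation
```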



