CSC 367 2.0 Mathematical Computing

Assignment 3
Radial Basis Functions

AS2010377
M.K.H.Gunasekara

Special Part 1
Department of Computer Science
UNIVERSITY OF SRI JAYEWARDENEPURA
M.K.H.Gunasekara - AS2010377


Table of Contents

Introduction
Methodology
Implementation
Results
Discussion
Appendices


Introduction
Neural Networks offer a powerful framework for representing nonlinear mappings from
several inputs to one or more outputs.
An important application of neural networks is regression: instead of mapping the inputs
to a discrete class label, the neural network maps the input variables to continuous
values. A major class of neural networks is the radial basis function (RBF) neural network.
We will look at the architecture of RBF neural networks, followed by their applications in
both regression and classification.
In this report the radial basis function network is discussed, with clustering used as the
unsupervised learning step for positioning the centers. The RBF network is simulated to
classify the three flower species in the iris data set, which is available at
http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.


Methodology
Radial Basis Function

Figure 01 : One hidden layer with Radial Basis Activation Functions
Radial basis function (RBF) networks typically have three layers:
1. Input Layer
2. A hidden layer with a non-linear RBF activation function
3. Output Layer
The output of the network is a weighted sum of the hidden-layer activations,

y(x) = Σ_{i=1..N} w_i φ( ‖x − c_i‖ )

where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and
w_i is the weight of neuron i in the linear output neuron. Functions that depend only on the
distance from a center vector are radially symmetric about that vector, hence the name radial
basis function. In the basic form all inputs are connected to each hidden neuron. The norm is
typically taken to be the Euclidean distance, and the radial basis function is commonly taken to
be the Gaussian function

φ( ‖x − c_i‖ ) = exp( −‖x − c_i‖² / (2σ²) )      ------ (1)
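As a concrete illustration of equation (1), here is a minimal NumPy sketch of the Gaussian activation (an illustrative stand-in for the MATLAB code in the appendix; the function name is hypothetical):

```python
import numpy as np

def gaussian_rbf(x, center, sigma):
    """Gaussian radial basis activation: exp(-||x - c||^2 / (2*sigma^2))."""
    r2 = np.sum((np.asarray(x, float) - np.asarray(center, float)) ** 2)
    return np.exp(-r2 / (2.0 * sigma ** 2))

# The activation peaks at 1 when the input sits exactly on the center
# and decays smoothly with distance from it.
at_center = gaussian_rbf([5.1, 3.5], [5.1, 3.5], 1.0)   # 1.0
far_away = gaussian_rbf([0.0, 0.0], [3.0, 4.0], 1.0)    # exp(-12.5), near 0
```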

There are some other radial basis functions.

Logistic basis function:

φ(r) = 1 / ( 1 + exp( r² / σ² ) )

Multiquadric:

φ(r) = √( r² + σ² )
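The exact parameterizations were lost in the original formatting, so the sketch below uses common textbook forms of these two alternatives (assumed, with σ as the width parameter):

```python
import numpy as np

def logistic_rbf(r, sigma=1.0):
    """Logistic basis function: 1 / (1 + exp(r^2 / sigma^2))."""
    return 1.0 / (1.0 + np.exp(r ** 2 / sigma ** 2))

def multiquadric_rbf(r, sigma=1.0):
    """Multiquadric basis function: sqrt(r^2 + sigma^2)."""
    return np.sqrt(r ** 2 + sigma ** 2)
```

Note that, unlike the Gaussian, the multiquadric grows with distance rather than decaying, which changes the character of the resulting interpolation.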


The input nodes are connected by weights to a set of RBF neurons, which fire in proportion to
the distance between the input and the neuron's center in weight space.

The activations of these nodes are used as inputs to the second layer. The second layer (output
layer) is treated as a simple perceptron network.
Training the RBF Network
This can be done by positioning the RBF nodes, and then using the activations of the RBF nodes
to train the linear outputs.
Positioning the RBF nodes can be done in two ways. The first method is to randomly pick some of
the data points to act as basis functions. The second method is to position the nodes so that
they are representative of typical inputs, for example by using the k-means clustering algorithm.
The activation function also has a standard deviation (width) parameter.
One option is to give all nodes the same width, and to test many different widths on a
validation set, selecting the one that works best. Alternatively, we can choose the width of the
RBF nodes so that the whole space is covered by the receptive fields. The width of the Gaussian
is then set according to the maximum distance between the locations of the hidden nodes (d) and
the number of hidden nodes (M):
σ = d / √(2M)      ------ (2)
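Equation (2) is what the appendix code computes with sigma = maxdist/sqrt(2*3); a general sketch (the helper name is hypothetical):

```python
import numpy as np

def rbf_width(centers):
    """Width sigma = d / sqrt(2*M), where d is the maximum pairwise
    distance between the M hidden-node centers."""
    centers = np.asarray(centers, dtype=float)
    M = len(centers)
    d = max(np.linalg.norm(a - b) for a in centers for b in centers)
    return d / np.sqrt(2 * M)

# Three collinear centers: the largest separation (10) sets the width.
sigma = rbf_width([[0, 0], [3, 4], [6, 8]])   # 10 / sqrt(6)
```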

We can also use the normalized Gaussian function:

u( ‖x − c_i‖ ) = exp( −‖x − c_i‖² / (2σ²) ) / Σ_{j=1..N} exp( −‖x − c_j‖² / (2σ²) )      ------ (3)
Outputs of the RBF network:

y(x) = Σ_{i=1..N} w_i exp( −‖x − c_i‖² / (2σ²) )

Training the Perceptron Network
We can train the perceptron network using a supervised learning method; the MLP network is
therefore trained against the targets.
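A minimal sketch of that supervised step using the classic perceptron update rule on the (fixed) RBF activations. The report itself uses MATLAB's newff/train for this layer, so this is an illustrative stand-in with hypothetical names:

```python
import numpy as np

def train_perceptron(H, t, lr=0.1, epochs=100):
    """H: (n, M) RBF activations; t: (n,) binary targets in {0, 1}."""
    Hb = np.hstack([H, np.ones((len(H), 1))])   # append a bias input
    w = np.zeros(Hb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Hb, t):
            y = 1.0 if x @ w > 0 else 0.0
            w += lr * (target - y) * x          # perceptron update rule
    return w

# Toy activations from two RBF nodes, one per class.
H = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
t = np.array([0, 0, 1, 1])
w = train_perceptron(H, t)
pred = (np.hstack([H, np.ones((4, 1))]) @ w > 0).astype(int)
```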


Implementation
The implementation was done in MATLAB 7.10 (R2010a), following these steps:

1. Locate the RBF nodes at the centers
2. Calculate σ for the Gaussian function
3. Calculate the outputs of the RBF layer – unsupervised training
4. Build a perceptron network for the second layer (I used an MLP network without a hidden layer)
5. Train the MLP network on the targets and inputs (the inputs being the outputs of the RBF
   layer) – supervised training
6. Simulate the network
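The six steps above can be sketched end-to-end on toy two-class data (a NumPy stand-in for the MATLAB implementation; a least-squares linear readout replaces the MLP of steps 4-5 for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# 1. centers = per-class means (as in the appendix code)
centers = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
# 2. width from the maximum distance between centers, equation (2)
d = max(np.linalg.norm(a - b) for a in centers for b in centers)
sigma = d / np.sqrt(2 * len(centers))
# 3. RBF-layer outputs, equation (1) - unsupervised
H = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(2) / (2 * sigma ** 2))
# 4-5. linear readout fitted to one-hot targets - supervised
T = np.eye(2)[y]
W, *_ = np.linalg.lstsq(H, T, rcond=None)
# 6. simulate the network
pred = (H @ W).argmax(axis=1)
accuracy = (pred == y).mean()
```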

I implemented the RBF network with different strategies to compare the results:

- Using randomly selected centers
- Using k-means cluster centers
- Using the non-normalized Gaussian function
- Using the normalized Gaussian function
- Using an SVM for the second layer


Results
Table 01 : Classification results for the 150 samples of the iris data set. Columns: sepal
length, sepal width, petal length, petal width, Expected Target, Actual Output. All 50
Iris-setosa samples are classified correctly; the Actual Output column marks misclassified
samples as FALSE (7 Iris-versicolor samples and 2 Iris-virginica samples, for 9 mismatches
in total).

The best RBF network result was obtained with the non-normalized Gaussian activation function,
giving 9 mismatches. For comparison, a plain MLP network achieved 4 mismatches.
MLP network as second layer (number of mismatches):

                               Random centers    K-means centers
Non-normalized Gaussian              9                  9
Normalized Gaussian                 11                 11

Support vector machine as second layer (number of mismatches):

                               Random centers    K-means centers
Non-normalized Gaussian             14                 10
Normalized Gaussian                 14                 17

Discussion
1. Unsupervised center selection in radial basis functions has some drawbacks: the chosen
centers may not be representative of the class structure, so the quality of the hidden layer
depends strongly on how the centers are placed.
2. An SVM can be used for the second layer instead of a perceptron, but it is not efficient for
classification with more than two classes, since a separate binary SVM has to be trained for
each class.
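The appendix code handles the multi-class case by training one binary SVM per class and picking the first accepting class, i.e. a one-vs-rest scheme. The scheme itself can be sketched with a stand-in binary scorer (nearest-centroid here, purely for illustration; the class and method names are hypothetical):

```python
import numpy as np

class OneVsRest:
    """One scorer per class; predict the class whose scorer responds most."""
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        # each "binary classifier" is summarized by its class centroid
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        # score of classifier k = negative squared distance to centroid k
        d = ((X[:, None, :] - self.centroids_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]

clf = OneVsRest().fit([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]],
                      [0, 0, 1, 1, 2, 2])
labels = clf.predict([[0, 0.4], [5, 5.4], [10, 0.4]])
```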


Appendices
MATLAB source code for the RBF network with an MLP second layer
clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function
[arr tx] = xlsread('data.xls');
Centers=zeros(3,4);

% I found centers as mean of the same cluster values
for i=1:50
Centers(1,1)=arr(i,1)+Centers(1,1);
Centers(1,2)=arr(i,2)+Centers(1,2);
Centers(1,3)=arr(i,3)+Centers(1,3);
Centers(1,4)=arr(i,4)+Centers(1,4);
end
for i=51:100
Centers(2,1)=arr(i,1)+Centers(2,1);
Centers(2,2)=arr(i,2)+Centers(2,2);
Centers(2,3)=arr(i,3)+Centers(2,3);
Centers(2,4)=arr(i,4)+Centers(2,4);
end
for i=101:150
Centers(3,1)=arr(i,1)+Centers(3,1);
Centers(3,2)=arr(i,2)+Centers(3,2);
Centers(3,3)=arr(i,3)+Centers(3,3);
Centers(3,4)=arr(i,4)+Centers(3,4);
end
for j= 1:3
Centers(j,1)=Centers(j,1)/50;
Centers(j,2)=Centers(j,2)/50;
Centers(j,3)=Centers(j,3)/50;
Centers(j,4)=Centers(j,4)/50;
end
Centers

% OR we can use the k-means algorithm to calculate the cluster centers
k=3; %number of clusters
[IDX,C]=kmeans(arr,k);
C %RBF centres
%Uncomment following line to use k means
%Centers=C;


% distance between hidden nodes
%distance between hidden nodes 1 & 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
    (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);
%distance between hidden nodes 1 & 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
    (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);
%distance between hidden nodes 3 & 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
    (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% finding the maximum distance between centers
maxdist = max([dist1, dist2, dist3]);

% calculating the width: sigma = d/sqrt(2*M), with M = 3 hidden nodes
sigma = maxdist/sqrt(2*3);
% Gaussian
%calculating outputs of RBF networks
RBFoutput=zeros(150,3);
d1=zeros(1,4);
Centers;
d=zeros(1,3);
%Unnormalized method: calculate the output of the Gaussian function
%(comment this loop out when using the normalized activation below)
for i=1:150
    for j=1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
            (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))));
    end
end


%Normalized method
%Uncomment the following lines (and comment out the loop above) to use the
%normalized Gaussian activation function
% RBFNormSum=zeros(150,1);
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*(sigma^2)))) + RBFNormSum(i,1);
%     end
% end
%
% % calculate the normalized output of the Gaussian function
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))))/RBFNormSum(i,1);
%     end
% end

RBFoutput
RBFo = RBFoutput.';
% making the MLP network
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels: 1 = setosa, 2 = versicolor, 3 = virginica
S = [3 1];

% used a feedforward neural network as the MLP [3 1]
MLPnet = newff(RBFo,S);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;


MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail=4;

MLPnet = train(MLPnet,RBFo,T);
%simulating the neural network
y = sim(MLPnet,RBFo);
output = round(y.');
Target = T.';
compare = [Target output]
count = 0;
for i=1:150
    if(output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count

MATLAB source code for the RBF network with an SVM second layer
clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function with Support Vector Machine
[arr tx] = xlsread('data.xls');
Centers=zeros(3,4);

% I found centers as mean of the same cluster values
for i=1:50
Centers(1,1)=arr(i,1)+Centers(1,1);
Centers(1,2)=arr(i,2)+Centers(1,2);
Centers(1,3)=arr(i,3)+Centers(1,3);
Centers(1,4)=arr(i,4)+Centers(1,4);
end
for i=51:100
Centers(2,1)=arr(i,1)+Centers(2,1);
Centers(2,2)=arr(i,2)+Centers(2,2);
Centers(2,3)=arr(i,3)+Centers(2,3);
Centers(2,4)=arr(i,4)+Centers(2,4);
end
for i=101:150
Centers(3,1)=arr(i,1)+Centers(3,1);
Centers(3,2)=arr(i,2)+Centers(3,2);
Centers(3,3)=arr(i,3)+Centers(3,3);


Centers(3,4)=arr(i,4)+Centers(3,4);
end
for j= 1:3
Centers(j,1)=Centers(j,1)/50;
Centers(j,2)=Centers(j,2)/50;
Centers(j,3)=Centers(j,3)/50;
Centers(j,4)=Centers(j,4)/50;
end
Centers

% OR we can use the k-means algorithm to calculate the cluster centers
k=3; %number of clusters
[IDX,C]=kmeans(arr,k);
C %RBF centres
% k-means centers are used here; comment out the next line to keep the mean centers
Centers=C;

% distance between hidden nodes
%distance between hidden nodes 1 & 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
    (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);
%distance between hidden nodes 1 & 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
    (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);
%distance between hidden nodes 3 & 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
    (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% finding the maximum distance between centers
maxdist = max([dist1, dist2, dist3]);

% calculating the width: sigma = d/sqrt(2*M), with M = 3 hidden nodes
sigma = maxdist/sqrt(2*3);


% Gaussian
%calculating outputs of RBF networks
RBFoutput=zeros(150,3);
d1=zeros(1,4);
Centers;
%Unnormalized method: calculate the output of the Gaussian function
%(comment this loop out when using the normalized activation below)
d=zeros(1,3);
for i=1:150
    for j=1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
            (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))));
    end
end
%Normalized method
%Uncomment the following lines (and comment out the loop above) to use the
%normalized Gaussian activation function
% RBFNormSum=zeros(150,1);
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*(sigma^2)))) + RBFNormSum(i,1);
%     end
% end
%
% % calculate the normalized output of the Gaussian function
% for i=1:150
%     for j=1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%             (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*(sigma^2))))/RBFNormSum(i,1);
%     end
% end

RBFoutput
RBFo=RBFoutput.'
% making SVM network


group = cell(3,1);
for n=1:150
    tclass(n,1) = tx(n,5);
end
group{1,1} = ismember(tclass,'Iris-setosa');
group{2,1} = ismember(tclass,'Iris-versicolor');
group{3,1} = ismember(tclass,'Iris-virginica');

[train, test] = crossvalind('holdOut',group{1,1});
cp = classperf(group{1,1});
% train one binary SVM per class (one-vs-rest); the commented call would
% train on the hold-out training subset only
for i=1:3
    %svmStruct(i) = svmtrain(RBFoutput(train,:),group{i,1}(train),'showplot',true);
    svmStruct(i) = svmtrain(RBFoutput,group{i,1},'showplot',true);
end
% classify each sample: take the first class whose binary SVM accepts it
for j=1:size(RBFoutput,1)
    for k=1:3
        if(svmclassify(svmStruct(k),RBFoutput(j,:)))
            break;
        end
    end
    result(j) = k;
end
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels: 1 = setosa, 2 = versicolor, 3 = virginica
compare = [T.' result.']
Target = T.';
output = result.';
count = 0;
for i=1:150
    if(output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count

MATLAB source code for the MLP network
clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% MLP Network
[arr tx] = xlsread('data.xls');


inputs = arr.';
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];   % class labels: 1 = setosa, 2 = versicolor, 3 = virginica
%Multilayer network with hidden layers of 4 and 3 nodes and one output node
MLPnet = newff(inputs,[4 3 1]);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;
MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail=4;

MLPnet = train(MLPnet,inputs,T);
%simulating the neural network
y = sim(MLPnet,inputs);
output = round(y.');
Target = T.';
compare = [Target output]
count = 0;
for i=1:150
    if(output(i)~=Target(i))
        count = count+1;
    end
end
Unmatched = count

Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image...
cscpconf
 
Localization for wireless sensor
Localization for wireless sensorLocalization for wireless sensor
Localization for wireless sensor
IJCNCJournal
 
Poster_Reseau_Neurones_Journees_2013
Poster_Reseau_Neurones_Journees_2013Poster_Reseau_Neurones_Journees_2013
Poster_Reseau_Neurones_Journees_2013
Pedro Lopes
 
CSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and MaintenanceCSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and Maintenance
Sumaiya Ismail
 
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor NetworksRedundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
Editor IJCATR
 
Localization based range map stitching in wireless sensor network under non l...
Localization based range map stitching in wireless sensor network under non l...Localization based range map stitching in wireless sensor network under non l...
Localization based range map stitching in wireless sensor network under non l...
eSAT Publishing House
 
Single to multiple kernel learning with four popular svm kernels (survey)
Single to multiple kernel learning with four popular svm kernels (survey)Single to multiple kernel learning with four popular svm kernels (survey)
Single to multiple kernel learning with four popular svm kernels (survey)
eSAT Journals
 

Similar to Radial Basis Function (20)

Recognition of handwritten digits using rbf neural network
Recognition of handwritten digits using rbf neural networkRecognition of handwritten digits using rbf neural network
Recognition of handwritten digits using rbf neural network
 
Recognition of handwritten digits using rbf neural network
Recognition of handwritten digits using rbf neural networkRecognition of handwritten digits using rbf neural network
Recognition of handwritten digits using rbf neural network
 
Wireless Positioning using Ellipsoidal Constraints
Wireless Positioning using Ellipsoidal ConstraintsWireless Positioning using Ellipsoidal Constraints
Wireless Positioning using Ellipsoidal Constraints
 
L14.pdf
L14.pdfL14.pdf
L14.pdf
 
An efficient technique for color image classification based on lower feature ...
An efficient technique for color image classification based on lower feature ...An efficient technique for color image classification based on lower feature ...
An efficient technique for color image classification based on lower feature ...
 
Ml srhwt-machine-learning-based-superlative-rapid-haar-wavelet-transformation...
Ml srhwt-machine-learning-based-superlative-rapid-haar-wavelet-transformation...Ml srhwt-machine-learning-based-superlative-rapid-haar-wavelet-transformation...
Ml srhwt-machine-learning-based-superlative-rapid-haar-wavelet-transformation...
 
IRJET- Clustering the Real Time Moving Object Adjacent Tracking
IRJET-  	  Clustering the Real Time Moving Object Adjacent TrackingIRJET-  	  Clustering the Real Time Moving Object Adjacent Tracking
IRJET- Clustering the Real Time Moving Object Adjacent Tracking
 
E035425030
E035425030E035425030
E035425030
 
Hybrid nearest neighbour and feed forward neural networks algorithm for indoo...
Hybrid nearest neighbour and feed forward neural networks algorithm for indoo...Hybrid nearest neighbour and feed forward neural networks algorithm for indoo...
Hybrid nearest neighbour and feed forward neural networks algorithm for indoo...
 
Support Vector Machine Optimal Kernel Selection
Support Vector Machine Optimal Kernel SelectionSupport Vector Machine Optimal Kernel Selection
Support Vector Machine Optimal Kernel Selection
 
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor NetworksParticle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
Particle Swarm Optimization Based QoS Aware Routing for Wireless Sensor Networks
 
PSO-based Training, Pruning, and Ensembling of Extreme Learning Machine RBF N...
PSO-based Training, Pruning, and Ensembling of Extreme Learning Machine RBF N...PSO-based Training, Pruning, and Ensembling of Extreme Learning Machine RBF N...
PSO-based Training, Pruning, and Ensembling of Extreme Learning Machine RBF N...
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNING
 
Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image...
Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image...Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image...
Robust Adaptive Threshold Algorithm based on Kernel Fuzzy Clustering on Image...
 
Localization for wireless sensor
Localization for wireless sensorLocalization for wireless sensor
Localization for wireless sensor
 
Poster_Reseau_Neurones_Journees_2013
Poster_Reseau_Neurones_Journees_2013Poster_Reseau_Neurones_Journees_2013
Poster_Reseau_Neurones_Journees_2013
 
CSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and MaintenanceCSC 347 – Computer Hardware and Maintenance
CSC 347 – Computer Hardware and Maintenance
 
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor NetworksRedundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
Redundant Actor Based Multi-Hole Healing System for Mobile Sensor Networks
 
Localization based range map stitching in wireless sensor network under non l...
Localization based range map stitching in wireless sensor network under non l...Localization based range map stitching in wireless sensor network under non l...
Localization based range map stitching in wireless sensor network under non l...
 
Single to multiple kernel learning with four popular svm kernels (survey)
Single to multiple kernel learning with four popular svm kernels (survey)Single to multiple kernel learning with four popular svm kernels (survey)
Single to multiple kernel learning with four popular svm kernels (survey)
 

More from Madhawa Gunasekara

Evolutionary Computing
Evolutionary ComputingEvolutionary Computing
Evolutionary Computing
Madhawa Gunasekara
 
Customer interface - Business Ontology Model
Customer interface - Business Ontology ModelCustomer interface - Business Ontology Model
Customer interface - Business Ontology Model
Madhawa Gunasekara
 
Semiotics final
Semiotics finalSemiotics final
Semiotics final
Madhawa Gunasekara
 
Research: Automatic Diabetic Retinopathy Detection
Research: Automatic Diabetic Retinopathy DetectionResearch: Automatic Diabetic Retinopathy Detection
Research: Automatic Diabetic Retinopathy Detection
Madhawa Gunasekara
 
How to prepare Title and Abstract for Research Articles
How to prepare Title and Abstract for Research ArticlesHow to prepare Title and Abstract for Research Articles
How to prepare Title and Abstract for Research Articles
Madhawa Gunasekara
 
Radix sorting
Radix sortingRadix sorting
Radix sorting
Madhawa Gunasekara
 
Low cost self driven car system
Low cost self driven car systemLow cost self driven car system
Low cost self driven car system
Madhawa Gunasekara
 
Audio compression
Audio compressionAudio compression
Audio compression
Madhawa Gunasekara
 

More from Madhawa Gunasekara (8)

Evolutionary Computing
Evolutionary ComputingEvolutionary Computing
Evolutionary Computing
 
Customer interface - Business Ontology Model
Customer interface - Business Ontology ModelCustomer interface - Business Ontology Model
Customer interface - Business Ontology Model
 
Semiotics final
Semiotics finalSemiotics final
Semiotics final
 
Research: Automatic Diabetic Retinopathy Detection
Research: Automatic Diabetic Retinopathy DetectionResearch: Automatic Diabetic Retinopathy Detection
Research: Automatic Diabetic Retinopathy Detection
 
How to prepare Title and Abstract for Research Articles
How to prepare Title and Abstract for Research ArticlesHow to prepare Title and Abstract for Research Articles
How to prepare Title and Abstract for Research Articles
 
Radix sorting
Radix sortingRadix sorting
Radix sorting
 
Low cost self driven car system
Low cost self driven car systemLow cost self driven car system
Low cost self driven car system
 
Audio compression
Audio compressionAudio compression
Audio compression
 

Recently uploaded

UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
DianaGray10
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
Matthew Sinclair
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
tolgahangng
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Speck&Tech
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
Tomaz Bratanic
 
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with SlackLet's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
shyamraj55
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Paige Cruz
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
Matthew Sinclair
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
innovationoecd
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
Alpen-Adria-Universität
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
Zilliz
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
Neo4j
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Safe Software
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
danishmna97
 

Recently uploaded (20)

UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6UiPath Test Automation using UiPath Test Suite series, part 6
UiPath Test Automation using UiPath Test Suite series, part 6
 
20240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 202420240609 QFM020 Irresponsible AI Reading List May 2024
20240609 QFM020 Irresponsible AI Reading List May 2024
 
TrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc Webinar - 2024 Global Privacy Survey
TrustArc Webinar - 2024 Global Privacy Survey
 
Serial Arm Control in Real Time Presentation
Serial Arm Control in Real Time PresentationSerial Arm Control in Real Time Presentation
Serial Arm Control in Real Time Presentation
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
 
GraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracyGraphRAG for Life Science to increase LLM accuracy
GraphRAG for Life Science to increase LLM accuracy
 
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with SlackLet's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
 
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfObservability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
Presentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of GermanyPresentation of the OECD Artificial Intelligence Review of Germany
Presentation of the OECD Artificial Intelligence Review of Germany
 
Video Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the FutureVideo Streaming: Then, Now, and in the Future
Video Streaming: Then, Now, and in the Future
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Programming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup SlidesProgramming Foundation Models with DSPy - Meetup Slides
Programming Foundation Models with DSPy - Meetup Slides
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
GraphSummit Singapore | Graphing Success: Revolutionising Organisational Stru...
 
Driving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success StoryDriving Business Innovation: Latest Generative AI Advancements & Success Story
Driving Business Innovation: Latest Generative AI Advancements & Success Story
 
How to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptxHow to Get CNIC Information System with Paksim Ga.pptx
How to Get CNIC Information System with Paksim Ga.pptx
 

Radial Basis Function

  • 1. CSC 367 2.0 Mathematical Computing
       Assignment 3: Radial Basis Functions
       M.K.H.Gunasekara (AS2010377)
       Special Part 1
       Department of Computer Science, University of Sri Jayewardenepura
  • 2. M.K.H.Gunasekara - AS2010377, CSC 367 2.0 Mathematical Computing
       Table of Contents
       - Introduction
       - Methodology
       - Implementation
       - Results
       - Discussion
       - Appendices
  • 3. Introduction
       Neural networks offer a powerful framework for representing nonlinear mappings from several inputs to one or more outputs. An important application of neural networks is regression: instead of mapping the inputs to a discrete class label, the network maps the input variables to continuous values. A major class of neural networks is the radial basis function (RBF) neural network. We will look at the architecture of RBF neural networks, followed by their applications in both regression and classification.
       In this report the radial basis function is discussed for clustering as an unsupervised learning algorithm. An RBF network is simulated to cluster the three flower species in the data set available at http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data.
  • 4. Methodology
       Radial Basis Function
       Figure 01: One hidden layer with radial basis activation functions.

       Radial basis function (RBF) networks typically have three layers:
       1. an input layer,
       2. a hidden layer with a non-linear RBF activation function, and
       3. an output layer.

       The network output is a weighted sum of the hidden-node activations,

           y(x) = Σ_{i=1..N} w_i φ(‖x − c_i‖)

       where N is the number of neurons in the hidden layer, c_i is the center vector for neuron i, and w_i is the weight of neuron i in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance and the radial basis function is commonly taken to be the Gaussian function

           φ(x) = exp(−‖x − c_i‖² / (2σ²))    ------ (1)

       There are some other radial basis functions:

           Logistic basis function:  φ(r) = 1 / (1 + exp(r² / σ²))
           Multiquadric:             φ(r) = √(r² + σ²)
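As a concrete illustration of equation (1), here is a small Python sketch of the Gaussian basis function (this is not the report's MATLAB code, and the sample vectors are hypothetical Iris-style measurements):

```python
import math

def gaussian_rbf(x, center, sigma):
    # Equation (1): phi(x) = exp(-||x - c||^2 / (2 * sigma^2))
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

# The activation is exactly 1 on the center and decays radially
# with the Euclidean distance from it.
on_center = gaussian_rbf([5.1, 3.5, 1.4, 0.2], [5.1, 3.5, 1.4, 0.2], 1.0)
off_center = gaussian_rbf([6.1, 3.5, 1.4, 0.2], [5.1, 3.5, 1.4, 0.2], 1.0)
print(on_center)   # 1.0
print(off_center)  # exp(-0.5) ~ 0.6065
```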
  • 5. Input nodes are connected by weights to a set of RBF neurons, which fire in proportion to the distance between the input and the neuron in weight space. The activations of these nodes are used as inputs to the second layer. The second (output) layer is treated as a simple perceptron network.

       Training the RBF Network
       This is done by positioning the RBF nodes and then using the activations of the RBF nodes to train the linear outputs. Positioning the RBF nodes can be done in two ways. The first method is to randomly pick some of the data points to act as basis-function centers. The second method is to position the nodes so that they are representative of typical inputs, for example by using the k-means clustering algorithm.

       The activation function has a standard-deviation (width) parameter. One option is to give all nodes the same width and test many different widths on a validation set to select one that works. Alternatively, we can choose the width of the RBF nodes so that the whole input space is covered by the receptive fields. The width of the Gaussian is then set from the maximum distance d between the locations of the hidden nodes and the number of hidden nodes M:

           σ = d / √(2M)    ------ (2)

       We can also use the normalized Gaussian function:

           u_i(x) = exp(−‖x − c_i‖² / (2σ²)) / Σ_j exp(−‖x − c_j‖² / (2σ²))    ------ (3)

       Outputs of the RBF network:

           y(x) = Σ_i w_i exp(−‖x − c_i‖² / (2σ²))

       Training the Perceptron Network
       The perceptron network is trained with a supervised learning method: the MLP network is trained against the targets.
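The width heuristic of equation (2) and the normalized activations of equation (3) can be sketched in Python as follows (an illustrative sketch with made-up centers, not the report's MATLAB implementation):

```python
import math

def rbf_width(max_center_dist, n_hidden):
    # Equation (2): sigma = d / sqrt(2 * M)
    return max_center_dist / math.sqrt(2.0 * n_hidden)

def normalized_activations(x, centers, sigma):
    # Equation (3): each Gaussian activation divided by the sum over all
    # hidden nodes, so the activations always sum to 1.
    raw = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                    / (2.0 * sigma ** 2)) for c in centers]
    total = sum(raw)
    return [a / total for a in raw]

sigma = rbf_width(3.0, 2)   # two hidden nodes at most 3.0 apart
acts = normalized_activations([0.5], [[0.0], [3.0]], sigma)
print(sigma)                # 1.5
print(sum(acts))            # 1.0 (up to rounding)
```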
  • 6. Implementation
       The implementation was done in MATLAB 7.10 (2010), following these steps:
       1. Locate the RBF nodes at the centers.
       2. Calculate σ for the Gaussian function.
       3. Calculate the outputs of the RBF layer (unsupervised training).
       4. Build a perceptron network for the second layer (I used an MLP network without a hidden layer).
       5. Train the MLP network on the targets and inputs, where the inputs are the outputs of the RBF layer (supervised training).
       6. Simulate the network.

       I implemented the RBF network with different strategies to compare the results:
       - randomly selected centers
       - k-means cluster centers
       - non-normalized Gaussian function
       - normalized Gaussian function
       - an SVM as the second layer
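The six steps above can be sketched end to end in Python on a toy one-dimensional data set (hypothetical data standing in for the Iris samples; the second layer is a single linear neuron trained with the delta rule, mirroring the "MLP without a hidden layer" choice):

```python
import math

def rbf_features(x, centers, sigma):
    # Step 3: outputs of the (unsupervised) RBF layer.
    return [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                     / (2.0 * sigma ** 2)) for c in centers]

# Toy data: two well-separated clusters with class labels 0 and 1.
data = [([0.0], 0), ([0.2], 0), ([-0.2], 0),
        ([3.0], 1), ([3.2], 1), ([2.8], 1)]

# Steps 1-2: centers as per-class means; width from the center spacing,
# sigma = d / sqrt(2 * M) with d = 3.0 and M = 2 hidden nodes.
centers = [[0.0], [3.0]]
sigma = 3.0 / math.sqrt(2 * 2)

# Steps 4-5: one linear output neuron trained with the delta rule.
w, b, lr = [0.0] * len(centers), 0.0, 0.2
for _ in range(500):
    for x, t in data:
        phi = rbf_features(x, centers, sigma)
        err = t - (sum(wi * pi for wi, pi in zip(w, phi)) + b)
        w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
        b += lr * err

# Step 6: simulate - threshold the linear output at 0.5.
def predict(x):
    phi = rbf_features(x, centers, sigma)
    return int(sum(wi * pi for wi, pi in zip(w, phi)) + b > 0.5)
```

With these two clusters the trained network assigns new points near 0 to class 0 and points near 3 to class 1.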
  • 7.-10. Results
       [Table: per-sample results on the 150-row Iris data set, listing sepal length, sepal width, petal length, petal width, the expected target, and the actual network output for each sample (Iris-setosa, Iris-versicolor, Iris-virginica). Misclassified samples are marked FALSE.]

       I found the best results using the RBF network with the non-normalized Gaussian activation function, with 9 mismatches. And I found the best results overall using an MLP network, with 4 mismatches.

       Mismatches with an MLP network as the second layer:

                                         Random centers   K-means centers
       Non-normalized Gaussian function        9                 9
       Normalized Gaussian function           11                11

       Mismatches with a support vector machine as the second layer:

                                         Random centers   K-means centers
       Non-normalized Gaussian function       14                10
       Normalized Gaussian function           14                17
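The mismatch counts in the summary tables come from comparing the expected-target column against the actual-output column; a minimal Python sketch with hypothetical rows:

```python
# Hypothetical rows standing in for the Expected Target / Actual Output
# columns; a FALSE entry marks a sample the network misclassified.
expected = ["Iris-setosa", "Iris-versicolor", "Iris-versicolor", "Iris-virginica"]
actual   = ["Iris-setosa", "FALSE",           "Iris-versicolor", "Iris-virginica"]

mismatches = sum(e != a for e, a in zip(expected, actual))
print(mismatches)  # 1
```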
  • 11. Discussion
1. Unsupervised center selection has drawbacks in radial basis function networks: the centers are chosen without using the class labels, so they may not be well placed for classification.
2. An SVM can be used for the second layer instead of a perceptron, but it is not efficient for classification with more than two classes, since a separate binary SVM must be trained for each class.
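The second discussion point refers to the one-vs-rest scheme used in the SVM listing: one binary classifier per class, and the first classifier that accepts a sample decides its class. A Python sketch of that decision loop, with made-up threshold rules standing in for trained binary SVMs (petal length is feature index 2):

```python
# Illustrative one-vs-rest decision loop. The lambda "classifiers" are
# hypothetical stand-ins for trained binary SVMs, not real models.
classifiers = [
    lambda x: x[2] < 2.5,            # "is it Iris-setosa?"
    lambda x: 2.5 <= x[2] < 4.9,     # "is it Iris-versicolor?"
    lambda x: x[2] >= 4.9,           # "is it Iris-virginica?"
]

def predict(x):
    # The first classifier that accepts the sample determines the class.
    for k, is_class in enumerate(classifiers, start=1):
        if is_class(x):
            return k
    return len(classifiers)  # fall through to the last class

print(predict([5.1, 3.5, 1.4, 0.2]))  # 1 (setosa-like sample)
```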
  • 12. Appendices

MATLAB source code for the RBF network with an MLP second layer:

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function

[arr, tx] = xlsread('data.xls');

Centers = zeros(3,4);
% The centers are taken as the mean of the samples in each cluster
for i = 1:50
    Centers(1,1) = arr(i,1) + Centers(1,1);
    Centers(1,2) = arr(i,2) + Centers(1,2);
    Centers(1,3) = arr(i,3) + Centers(1,3);
    Centers(1,4) = arr(i,4) + Centers(1,4);
end
for i = 51:100
    Centers(2,1) = arr(i,1) + Centers(2,1);
    Centers(2,2) = arr(i,2) + Centers(2,2);
    Centers(2,3) = arr(i,3) + Centers(2,3);
    Centers(2,4) = arr(i,4) + Centers(2,4);
end
for i = 101:150
    Centers(3,1) = arr(i,1) + Centers(3,1);
    Centers(3,2) = arr(i,2) + Centers(3,2);
    Centers(3,3) = arr(i,3) + Centers(3,3);
    Centers(3,4) = arr(i,4) + Centers(3,4);
end
for j = 1:3
    Centers(j,1) = Centers(j,1)/50;
    Centers(j,2) = Centers(j,2)/50;
    Centers(j,3) = Centers(j,3)/50;
    Centers(j,4) = Centers(j,4)/50;
end
Centers

% Alternatively, the k-means algorithm can be used to compute the cluster centers
k = 3;                % number of clusters
[IDX, C] = kmeans(arr, k);
C                     % RBF centres
% Uncomment the following line to use the k-means centers
%Centers = C;
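The center-selection step in the listing above simply averages the 4-D feature vectors of the 50 samples known to belong to each cluster. A Python sketch of the same computation, on assumed toy values rather than the Iris data:

```python
# Illustrative sketch of mean-based center selection (toy values).
def cluster_mean(samples):
    """Componentwise mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[d] for s in samples) / n for d in range(len(samples[0]))]

toy_cluster = [
    [5.0, 3.4, 1.5, 0.2],
    [5.2, 3.6, 1.3, 0.4],
]
center = cluster_mean(toy_cluster)
print(center)  # approximately [5.1, 3.5, 1.4, 0.3]
```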
  • 13.
% Distances between the hidden nodes
% distance between hidden nodes 1 and 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
             (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);
% distance between hidden nodes 1 and 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
             (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);
% distance between hidden nodes 3 and 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
             (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% Find the maximum distance between centers
maxdist = 0;
if (dist1 > dist2) && (dist1 > dist3)
    maxdist = dist1;
end
if (dist2 > dist1) && (dist2 > dist3)
    maxdist = dist2;
end
if (dist3 > dist1) && (dist3 > dist2)
    maxdist = dist3;
end

% Width of the Gaussian: sigma = maxdist/sqrt(2*M), with M = 3 centers
sigma = maxdist/sqrt(2*3);

% Compute the outputs of the RBF layer
RBFoutput = zeros(150,3);
d = zeros(1,3);

% Non-normalized Gaussian activation function
% (comment this block out to use the normalized version below)
for i = 1:150
    for j = 1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
                 (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*sigma^2)));
    end
end
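The width heuristic and the Gaussian hidden layer above can be sketched in Python. The centers below are assumed toy values (roughly Iris-like class means, not values computed by the report); the width is sigma = d_max / sqrt(2·M), where d_max is the largest pairwise distance between the M centers, and each hidden node outputs exp(-||x - c_j||² / (2·sigma²)):

```python
import math

# Illustrative sketch of the RBF hidden layer with assumed centers.
centers = [
    [5.0, 3.4, 1.5, 0.2],
    [5.9, 2.8, 4.3, 1.3],
    [6.6, 3.0, 5.6, 2.0],
]
M = len(centers)

def sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

# Largest pairwise distance between centers, then the width heuristic
d_max = max(math.sqrt(sq_dist(a, b))
            for i, a in enumerate(centers) for b in centers[i + 1:])
sigma = d_max / math.sqrt(2 * M)

def rbf_outputs(x):
    """Non-normalized Gaussian activations of the hidden layer."""
    return [math.exp(-sq_dist(x, c) / (2 * sigma ** 2)) for c in centers]

phi = rbf_outputs([5.1, 3.5, 1.4, 0.3])  # a setosa-like sample
print(phi)  # the nearest center produces the largest activation
```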
  • 14.
% Normalized Gaussian activation function
% (uncomment this block, and comment out the non-normalized block above, to use it)
% RBFNormSum = zeros(150,1);
% for i = 1:150
%     for j = 1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%                  (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*sigma^2))) + RBFNormSum(i,1);
%     end
%     d = [0 0 0];
% end
% for i = 1:150
%     for j = 1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%                  (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*sigma^2)))/RBFNormSum(i,1);
%     end
%     d = [0 0 0];
% end

RBFoutput
RBFo = RBFoutput.';

% MLP network as the second layer
% Target classes: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];
S = [3 1];
R = [0 1; 0 1; 0 1];
% feedforward neural network used as the MLP [3 1]
MLPnet = newff(RBFo, S);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;
  • 15.
MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail = 4;
MLPnet = train(MLPnet, RBFo, T);

% Simulate the trained network
y = sim(MLPnet, RBFo);
output = round(y.');
Target = T.';
compare = [T.' output]

% Count the misclassified samples
count = 0;
for i = 1:150
    if (output(i) ~= Target(i))
        count = count + 1;
    end
end
Unmatched = count

MATLAB source code for the RBF network with an SVM second layer:

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% Radial Basis Function with Support Vector Machine

[arr, tx] = xlsread('data.xls');

Centers = zeros(3,4);
% The centers are taken as the mean of the samples in each cluster
for i = 1:50
    Centers(1,1) = arr(i,1) + Centers(1,1);
    Centers(1,2) = arr(i,2) + Centers(1,2);
    Centers(1,3) = arr(i,3) + Centers(1,3);
    Centers(1,4) = arr(i,4) + Centers(1,4);
end
for i = 51:100
    Centers(2,1) = arr(i,1) + Centers(2,1);
    Centers(2,2) = arr(i,2) + Centers(2,2);
    Centers(2,3) = arr(i,3) + Centers(2,3);
    Centers(2,4) = arr(i,4) + Centers(2,4);
end
for i = 101:150
    Centers(3,1) = arr(i,1) + Centers(3,1);
    Centers(3,2) = arr(i,2) + Centers(3,2);
    Centers(3,3) = arr(i,3) + Centers(3,3);
  • 16.
    Centers(3,4) = arr(i,4) + Centers(3,4);
end
for j = 1:3
    Centers(j,1) = Centers(j,1)/50;
    Centers(j,2) = Centers(j,2)/50;
    Centers(j,3) = Centers(j,3)/50;
    Centers(j,4) = Centers(j,4)/50;
end
Centers

% Alternatively, the k-means algorithm can be used to compute the cluster centers
k = 3;                % number of clusters
[IDX, C] = kmeans(arr, k);
C                     % RBF centres
% The k-means centers are used here
Centers = C;

% Distances between the hidden nodes
% distance between hidden nodes 1 and 2
dist1 = sqrt((Centers(1,1)-Centers(2,1))^2 + (Centers(1,2)-Centers(2,2))^2 + ...
             (Centers(1,3)-Centers(2,3))^2 + (Centers(1,4)-Centers(2,4))^2);
% distance between hidden nodes 1 and 3
dist2 = sqrt((Centers(1,1)-Centers(3,1))^2 + (Centers(1,2)-Centers(3,2))^2 + ...
             (Centers(1,3)-Centers(3,3))^2 + (Centers(1,4)-Centers(3,4))^2);
% distance between hidden nodes 3 and 2
dist3 = sqrt((Centers(3,1)-Centers(2,1))^2 + (Centers(3,2)-Centers(2,2))^2 + ...
             (Centers(3,3)-Centers(2,3))^2 + (Centers(3,4)-Centers(2,4))^2);

% Find the maximum distance between centers
maxdist = 0;
if (dist1 > dist2) && (dist1 > dist3)
    maxdist = dist1;
end
if (dist2 > dist1) && (dist2 > dist3)
    maxdist = dist2;
end
if (dist3 > dist1) && (dist3 > dist2)
    maxdist = dist3;
end

% Width of the Gaussian
sigma = maxdist/sqrt(2*3);
  • 17.
% Compute the outputs of the RBF layer
RBFoutput = zeros(150,3);
d = zeros(1,3);

% Non-normalized Gaussian activation function
% (comment this block out to use the normalized version below)
for i = 1:150
    for j = 1:3
        d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
                 (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
        RBFoutput(i,j) = exp(-(d(1,j)/(2*sigma^2)));
    end
end

% Normalized Gaussian activation function
% (uncomment this block, and comment out the non-normalized block above, to use it)
% RBFNormSum = zeros(150,1);
% for i = 1:150
%     for j = 1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%                  (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFNormSum(i,1) = exp(-(d(1,j)/(2*sigma^2))) + RBFNormSum(i,1);
%     end
%     d = [0 0 0];
% end
% for i = 1:150
%     for j = 1:3
%         d(1,j) = (arr(i,1)-Centers(j,1))^2 + (arr(i,2)-Centers(j,2))^2 + ...
%                  (arr(i,3)-Centers(j,3))^2 + (arr(i,4)-Centers(j,4))^2;
%         RBFoutput(i,j) = exp(-(d(1,j)/(2*sigma^2)))/RBFNormSum(i,1);
%     end
%     d = [0 0 0];
% end

RBFoutput
RBFo = RBFoutput.';

% SVM as the second layer
  • 18.
% One binary SVM per class (one-vs-rest)
group = cell(3,1);
for n = 1:150
    tclass(n,1) = tx(n,5);
end
group{1,1} = ismember(tclass, 'Iris-setosa');
group{2,1} = ismember(tclass, 'Iris-versicolor');
group{3,1} = ismember(tclass, 'Iris-virginica');

[train, test] = crossvalind('holdOut', group{1,1});
cp = classperf(group{1,1});

for i = 1:3
    %svmStruct(i) = svmtrain(RBFoutput(train,:), group{i,1}(train), 'showplot', true);
    svmStruct(i) = svmtrain(RBFoutput, group{i,1}, 'showplot', true);
end

% Classify each sample: the first SVM that accepts it decides the class
for j = 1:size(RBFoutput,1)
    for k = 1:3
        if (svmclassify(svmStruct(k), RBFoutput(j,:)))
            break;
        end
    end
    result(j) = k;
end

% Target classes: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];
compare = [T.' result.']
Target = T.';
output = result.';

% Count the misclassified samples
count = 0;
for i = 1:150
    if (output(i) ~= Target(i))
        count = count + 1;
    end
end
Unmatched = count

MATLAB source code for the MLP network:

clc
clear all
% M.K.H. Gunasekara
% AS2010377
% Machine Learning
% MLP Network

[arr, tx] = xlsread('data.xls');
  • 19.
inputs = arr.';
% Target classes: 1 = Iris-setosa, 2 = Iris-versicolor, 3 = Iris-virginica
T = [ones(1,50) 2*ones(1,50) 3*ones(1,50)];

% Multilayer network with a hidden layer of 3 nodes
MLPnet = newff(inputs, [4 3 1]);
MLPnet.trainParam.epochs = 500;
MLPnet.trainParam.lr = 0.1;
MLPnet.trainParam.mc = 0.9;
MLPnet.trainParam.show = 40;
MLPnet.trainParam.perf = 'mse';
MLPnet.trainParam.goal = 0.001;
MLPnet.trainParam.min_grad = 0.00001;
MLPnet.trainParam.max_fail = 4;
MLPnet = train(MLPnet, inputs, T);

% Simulate the trained network
y = sim(MLPnet, inputs);
output = round(y.');
Target = T.';
compare = [T.' output]

% Count the misclassified samples
count = 0;
for i = 1:150
    if (output(i) ~= Target(i))
        count = count + 1;
    end
end
Unmatched = count
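The normalized Gaussian activation that appears, commented out, in the listings above divides each Gaussian response by the sum of responses over all centers, so the hidden-layer activations for any sample add up to 1. A Python sketch of that normalization, using assumed toy centers and an assumed width:

```python
import math

# Illustrative sketch of the normalized Gaussian activation.
# Centers and sigma are assumed toy values, not from the report.
centers = [
    [5.0, 3.4, 1.5, 0.2],
    [5.9, 2.8, 4.3, 1.3],
    [6.6, 3.0, 5.6, 2.0],
]
sigma = 1.5

def sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def normalized_rbf(x):
    """Gaussian activations divided by their sum, so they total 1."""
    phi = [math.exp(-sq_dist(x, c) / (2 * sigma ** 2)) for c in centers]
    total = sum(phi)
    return [p / total for p in phi]

out = normalized_rbf([5.1, 3.5, 1.4, 0.3])
print(sum(out))  # approximately 1.0
```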