
# Bio Inspired Computing Cw1

## by Manoj P, Artificial Intelligence Student at University of Leeds on Jan 11, 2013


Understanding how a perceptron only does linear classification. Training neural networks and backpropagation.


## Document Transcript

### Introduction

This coursework is about algorithms that learn to classify data. Linear classification is done by a perceptron, which is then adapted into the Pocket algorithm. A multi-layered network of neurons trained with backpropagation is used to classify data that is not linearly separable. Their implementation and results are discussed below.

### Answers

#### Question 1

Make a scatter plot of petal length against petal width. This means that for each entry in the data set you plot a point in the 2D plane with petal length as the y-coordinate and petal width as the x-coordinate. Use three different markers (or different colours), one for setosa, one for versicolor and one for virginica. Plot the other sepal/petal length/width combinations as well.
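A plot of this kind can be sketched as follows. This is a minimal illustration assuming matplotlib is available; the three data rows are made-up illustrative values, not the coursework's Iris data file, which holds 150 entries.

```python
import matplotlib
matplotlib.use("Agg")            # render off-screen, no display required
import matplotlib.pyplot as plt

# Illustrative (petal length, petal width, species) triples; the real
# coursework loads the full Iris data set from a file instead.
sample = [
    (1.4, 0.2, "setosa"),
    (4.5, 1.5, "versicolor"),
    (6.0, 2.5, "virginica"),
]

markers = {"setosa": "o", "versicolor": "^", "virginica": "s"}
for species in markers:
    # petal width on x, petal length on y, as the question asks
    pts = [(w, l) for l, w, s in sample if s == species]
    if pts:
        xs, ys = zip(*pts)
        plt.scatter(xs, ys, marker=markers[species], label=species)
plt.xlabel("petal width")
plt.ylabel("petal length")
plt.legend()
plt.savefig("petal_scatter.png")
```

Repeating the loop over the other sepal/petal column pairs gives the remaining combinations the question asks for.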
#### Question 2

Based on the plots, do you believe that setosa vs. non-setosa can be learnt by a perceptron? That is, given a perceptron with 4 inputs, for petal length, petal width, sepal length, and sepal width, can you find four weights and a bias which can classify setosa vs. non-setosa? Explain your answer.

Apart from the Sepal Length vs. Sepal Width graph, the other graphs above, from Sepal Length vs. Petal Length through Petal Length vs. Petal Width, clearly indicate that Iris setosa and non-setosa are linearly separable. Thus, with just two inputs, say petal length and petal width, a perceptron can differentiate setosa from non-setosa. With 4 inputs, we can assume that the other two inputs can be given very low weights, or even ignored, so that the perceptron will still be able to differentiate setosa vs. non-setosa.

Example:
The blue dots represent setosa, the red dots non-setosa, and the black line represents a possible way to linearly separate them. Thus, in the above perceptron, w1 and w2 can be computed, and w3 and w4 made less significant, for the output to classify our problem.

#### Question 3

Implement the perceptron algorithm and train a perceptron for the setosa vs. non-setosa classification problem. Do you expect the algorithm to converge? Why? Does the algorithm converge? Is the output correct? If it does not converge, can you come up with a sensible stopping criterion for the algorithm? The same questions (3 and 4) about virginica vs. non-virginica? versicolor vs. non-versicolor? There is a major difference between versicolor and the other two. Explain the difference using the plots you made.

- The perceptron algorithm converges for setosa vs. non-setosa, as the classes are completely linearly separable. The output is correct and is verifiable by testing each input from the dataset against the perceptron.
- The algorithm does not converge for virginica vs. non-virginica in the same way, as those classes are clearly not linearly separable, so a few points of virginica or non-virginica are always wrongly classified. Thus, a threshold which tolerates such misclassifications is needed as a stopping criterion for the perceptron algorithm.
- With an increasing threshold, the algorithm converges for both virginica and versicolor, whose sample test results are below:
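The training loop described above can be sketched as follows. This is a minimal sketch with toy 2D data standing in for the Iris features; the function name, learning rate, and `max_epochs` stopping criterion are our own choices, not taken from the coursework code.

```python
# Minimal perceptron sketch: weights are updated whenever a sample is
# misclassified, and training stops after an error-free pass or after
# max_epochs (the "sensible stopping criterion" for non-separable data).
def train_perceptron(samples, labels, lr=0.1, max_epochs=100):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, t in zip(samples, labels):
            # step activation on the weighted sum plus bias
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if y != t:
                # perceptron learning rule: nudge weights towards target
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
                errors += 1
        if errors == 0:          # converged: a full pass with no mistakes
            break
    return w, b

# Toy 2D data standing in for petal length/width: class +1 vs. class -1
X = [(1.0, 0.2), (1.5, 0.3), (4.5, 1.5), (5.0, 1.8)]
T = [-1, -1, 1, 1]
w, b = train_perceptron(X, T)
```

On linearly separable data like this toy set, the loop reaches an error-free pass and stops; on non-separable data it falls back to the epoch cap.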
The Code: The program takes rows of training data from a comma-separated input file; the output class is defined in the program or passed as a command-line argument. The data is loaded into Python lists, the perceptron algorithm is run, and a set of weights which classify the data is printed. As the program runs more loops, it also accepts weights which classify the data with some error.

The Results:

#### Question 4

Run the script backprop.py (make sure that the libraries libmiidconnectionism.so, Connectionism.so and libmiidsparseimplementation.so, and the Python module connectionism.py (see section 1.6) are present in the directory where you run this script), which trains a neural network to solve the XOR-classification problem. Adapt the script so that it trains a network to solve the iris-classification problem. How many hidden neurons do you need?

The Code: The backpropagation XOR implementation is adapted to take a training sample data file and output class as inputs and provide the results.

The Results: Hidden neurons needed: 5.

- The algorithm produces many errors with 1 to 2 hidden neurons and takes a long time.
- Hidden neuron counts of 3 and 4 give fewer errors than 1 and 2, but training still takes a long time.
- With 5 hidden neurons, training finishes with 0 errors and in less time.
- Hidden neuron counts above 5 also produce no errors.
- Hidden neuron counts of 10 and 15 show interesting results in terms of training time.
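A self-contained sketch of backpropagation on the XOR problem follows. This stands in for the backprop.py script and connectionism library, which are not reproduced here; the class name, layer sizes, learning rate and epoch count are all our own illustrative choices.

```python
import math, random

# One hidden layer of sigmoid units trained by plain gradient descent
# on squared error: the textbook backpropagation recipe, not the
# coursework's actual connectionism implementation.
random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    def __init__(self, n_in, n_hidden):
        # each row holds input weights plus one trailing bias weight
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hidden)]
        self.w2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]

    def forward(self, x):
        xb = list(x) + [1.0]                 # append constant bias input
        self.h = [sigmoid(sum(w * v for w, v in zip(row, xb)))
                  for row in self.w1]
        hb = self.h + [1.0]
        self.y = sigmoid(sum(w * v for w, v in zip(self.w2, hb)))
        return self.y

    def backward(self, x, target, lr=0.5):
        # output delta: dE/dnet for squared error with a sigmoid output
        d_out = (self.y - target) * self.y * (1 - self.y)
        # hidden deltas, chained back through the output weights
        d_hid = [d_out * self.w2[j] * h * (1 - h)
                 for j, h in enumerate(self.h)]
        hb = self.h + [1.0]
        xb = list(x) + [1.0]
        for j in range(len(self.w2)):
            self.w2[j] -= lr * d_out * hb[j]
        for j, row in enumerate(self.w1):
            for i in range(len(row)):
                row[i] -= lr * d_hid[j] * xb[i]

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = TinyNet(2, 3)

def total_loss():
    return sum((net.forward(x) - t) ** 2 for x, t in XOR)

before = total_loss()
for _ in range(5000):
    for x, t in XOR:
        net.forward(x)
        net.backward(x, t)
after = total_loss()
```

Swapping the XOR table for Iris rows (4 inputs, one output per class) and varying `n_hidden` reproduces the hidden-neuron experiment described above.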
#### Question 5

Results:

- Linearly classifiable data required very little training data, or the amount did not matter.
- Looking at the training samples from the graph above, 50% of the data or more for training seems good, with anything less producing errors.

#### Question 6

The Results:

- The Pocket algorithm enables the perceptron to converge.
- It progresses towards an optimal solution.
- Unlike the plain perceptron algorithm, it does not lose the optimal weights it has seen.
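The "does not lose the optimal weights" behaviour can be sketched as below. This is a minimal illustration with our own function names and toy data, not the coursework's program: the best weight vector seen so far is kept "in the pocket", so noisy or non-separable data cannot destroy a good solution.

```python
import random

def count_errors(w, b, samples, labels):
    # number of misclassified samples under a step-activation perceptron
    errs = 0
    for x, t in zip(samples, labels):
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if y != t:
            errs += 1
    return errs

def pocket(samples, labels, lr=0.1, max_iters=500):
    w, b = [0.0] * len(samples[0]), 0.0
    best_w, best_b = list(w), b
    best_errs = count_errors(w, b, samples, labels)
    data = list(zip(samples, labels))
    for _ in range(max_iters):
        # pick a random sample; update only on a misclassification
        x, t = random.choice(data)
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if y != t:
            w = [wi + lr * t * xi for wi, xi in zip(w, x)]
            b += lr * t
            errs = count_errors(w, b, samples, labels)
            if errs < best_errs:          # pocket the improvement
                best_w, best_b, best_errs = list(w), b, errs
    return best_w, best_b, best_errs

# Toy data standing in for the Iris features
X = [(1.0, 0.2), (1.5, 0.3), (4.5, 1.5), (5.0, 1.8)]
T = [-1, -1, 1, 1]
random.seed(1)
best_w, best_b, best_errs = pocket(X, T)
```

Because the pocketed weights are only ever replaced by strictly better ones, the returned error count can never be worse than the starting point, which is what lets the algorithm "converge" on noisy data.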
#### Question 7

The Results: Backpropagation, the perceptron with steepest gradient, and the Pocket algorithm have all been given large data files with different noise levels as input, and the results are included below:

- Though backpropagation takes a bit longer to complete, it produces fewer errors.

Backpropagation

Perceptron Steepest Gradient
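The noise-level experiment implied above can be sketched as follows. The actual coursework data files and noise levels are not reproduced here; this only illustrates one common way of generating them, by flipping each label with a given probability.

```python
import random

# Flip each +1/-1 label with probability `noise`, producing the
# progressively harder data sets fed to the three algorithms.
def add_label_noise(labels, noise, seed=0):
    rng = random.Random(seed)     # fixed seed keeps runs reproducible
    return [-t if rng.random() < noise else t for t in labels]

clean = [1, 1, 1, -1, -1, -1]
noisy = add_label_noise(clean, 0.3)
```

Sweeping `noise` from 0 upwards and re-running each classifier on the result gives one error curve per algorithm, which is how the comparison above would be produced.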
The Pocket Algorithm

### Summary

The programs (perceptron, Pocket algorithm and backpropagation) have been implemented in that order and are attached along with this report. It is observed that the algorithms, in that order, are progressively better at classifying data from linear to non-linear and with increasing noise. Backpropagation handles noise by repeatedly retraining itself via the evolve function, which makes it the most adaptive of the three.