Autocorrect and Neural Networks
Adam Blevins
Dr. Kasper Peeters
Basic Idea: A Neural Network is similar to Artificial Intelligence: like autocorrect, it can teach itself.
Input = miktex → Network Calculations → Output = Molten
Target = MikTeX → Network Learning Occurs → Trained Output = MikTeX
You type "miktex" into your phone and it autocorrects to "molten". You delete "molten" and type "MikTeX" as
required; the phone then learns "MikTeX" as the future correction for "miktex".
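This correction-learning loop can be sketched in a few lines of Python; the class and method names below are purely illustrative, not how any real keyboard implements autocorrect:

```python
class Autocorrect:
    def __init__(self):
        # The phone's initial model maps "miktex" to the wrong word.
        self.corrections = {"miktex": "Molten"}

    def suggest(self, word):
        # Return the learned correction, or the word unchanged if none is known.
        return self.corrections.get(word.lower(), word)

    def learn(self, word, target):
        # The user rejected our suggestion and typed `target`:
        # store it as the trained output for next time.
        self.corrections[word.lower()] = target

phone = Autocorrect()
print(phone.suggest("miktex"))   # Molten
phone.learn("miktex", "MikTeX")  # learning occurs
print(phone.suggest("miktex"))   # MikTeX
```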
What is a Neural Network?
An Artificial Neural Network (ANN) is effectively a computer
program that learns to interpret large amounts of data. We
present the network with a training set, a learning algorithm
makes corrections to the calculations within and this repeats
until the network is suitably trained. One of the simplest
examples of an ANN takes the following form:
Inputs: x, b → Hidden Units: σ, σ, b → Output Unit: f(x), with a weight w on each connection
Figure 1: A small ANN architecture example
The circles are called nodes and the text within them defines
their outputs. x is the input with f(x) the respective network
output, b is a constant, and σ is a sigmoid function, for example
tanh(y), where y is the input to the node. w represents the
weight of a connection between nodes where each arrow
represents a weighted connection. A network may have more
units in each layer and may have many more layers to allow for
more complex calculations.
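The forward pass of the small network in Figure 1 can be written out directly. The weight and bias values below are arbitrary illustrations, since the poster does not specify them:

```python
import math

def forward(x, w_in=(0.5, -1.2), b_hidden=(0.1, 0.4),
            w_out=(0.8, 0.3), b_out=-0.2):
    # Hidden layer: each sigma node applies tanh to its weighted input plus a bias.
    hidden = [math.tanh(w * x + b) for w, b in zip(w_in, b_hidden)]
    # Output unit: weighted sum of the hidden outputs plus the constant b.
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

print(forward(1.0))
```

More units per layer, or more layers, would simply mean longer weight lists and more nested applications of the same pattern.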
Uses of Neural Networks
In addition to their use in autocorrect and their ability to discover
patterns in large data sets, Neural Networks are commonly
used for:
1. Image recognition, particularly facial recognition, for
example identifying a criminal using a database of
mugshots.
2. Character recognition, which is increasingly common in
devices such as the Galaxy Note, where writing with a
stylus is commonplace.
3. Function interpolation
The Network Calculations
Each node has a set of connections to the nodes of the previous
layer. Defining the output of node i as y_i, the input to
node j is the sum of those outputs multiplied by their respective
connection weights w_ji:
input_j = Σ_i y_i w_ji    (1)
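Equation (1) and a tanh sigmoid node translate to code as, for example:

```python
import math

def node_input(outputs, weights):
    # Equation (1): input_j is the sum of previous-layer outputs y_i
    # times their connection weights w_ji.
    return sum(y * w for y, w in zip(outputs, weights))

def node_output(outputs, weights):
    # A sigmoid node then applies tanh to that input.
    return math.tanh(node_input(outputs, weights))

print(node_output([0.2, -0.5, 1.0], [0.4, 0.1, -0.3]))
```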
An example output y_i of node i, for the sigmoid nodes in Figure
1, is tanh(input_i). The next important ingredient is the
learning algorithm. The most common is the
Backpropagation algorithm [1], which changes each weight
according to:
Δw_ji = −η ∂E/∂w_ji    (2)
where η is a constant controlling the size of the change
and E is the error as a function of the weights. The error
function E typically has many local minima, and for an accurate
system we want to converge on the global minimum, where the error is smallest.
The Error Function
The error depends upon the weights of the network. If we
consider a very simple network in which the error only depends
on one weight, we can imagine it looks something like this:
Figure 2: An error function dependent on one weight
To maximise the accuracy of the trained network, the
Backpropagation algorithm needs to converge on the global
minimum. This depends on a number of factors, not least the
learning rate η from Equation (2). If the starting weight lies in
the basin of a local minimum, then for η too small the weight
changes may be too small to escape it. If η is too large, the
updates can jump over the global minimum entirely and even
diverge. We therefore want a method of initialising the weights
that gives the greatest chance of reaching the global minimum.
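A toy one-weight error curve makes the effect of η concrete. The curve below is an invented example shaped like Figure 2, with a local minimum near w ≈ 0.96 and the global minimum near w ≈ −1.04:

```python
def E(w):
    # Toy one-weight error curve: local minimum near w ≈ 0.96,
    # global minimum near w ≈ -1.04.
    return w**4 - 2 * w**2 + 0.3 * w

def dE(w):
    return 4 * w**3 - 4 * w + 0.3

def descend(w, eta, steps):
    # Repeated weight update in the spirit of Equation (2): w ← w − η·dE/dw.
    for _ in range(steps):
        w -= eta * dE(w)
    return w

print(descend(1.2, eta=0.01, steps=500))  # stays in the local minimum, w ≈ 0.96
print(descend(1.2, eta=1.0, steps=4))     # overshoots: |w| is already > 10
```

With η = 0.01 the descent from w = 1.2 never escapes the local basin, while η = 1.0 overshoots both minima and diverges, exactly the two failure modes described above.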
Pre-training a Neural Network
Pre-training is a method of finding initial weights for the
Neural Network before normal training. The common
technique uses autoencoders. An autoencoder takes two consecutive
layers, beginning with the input and first hidden layer, and uses
the Backpropagation algorithm to train this subnetwork. The
autoencoder mirrors the leftmost layer (represented by the
magenta) to function as shown:
Figure 3: An example autoencoder with two input nodes and
one hidden node
The original training data X is used, and the smaller number of
nodes in the hidden layer provides a dimension reduction, giving a
simplified representation of X, say a set Y, which holds the key
characteristics of X. The next two layers are trained
similarly using set Y and so on until the entire network is
pre-trained. The network is then rebuilt with the pre-trained
weights. This gives a starting set of weights closer to the
global minimum, improving both the chance and the rate of
convergence.
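A sketch of pre-training one such autoencoder, assuming tanh hidden units, a linear decoder and squared reconstruction error (details the poster leaves open; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden, eta=0.2, epochs=5000):
    # Train a two-layer autoencoder to reproduce its own input X, and
    # return the encoder weights W1, the mirrored decoder weights W2,
    # and the reduced representation Y of the data.
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))  # encoder (kept for pre-training)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))  # mirrored decoder
    for _ in range(epochs):
        H = np.tanh(X @ W1)      # reduced representation of X
        X_hat = H @ W2           # attempted reconstruction of X
        err = X_hat - X
        # Backpropagation of the squared reconstruction error:
        gW2 = (H.T @ err) / len(X)
        gW1 = (X.T @ ((err @ W2.T) * (1.0 - H ** 2))) / len(X)
        W2 -= eta * gW2
        W1 -= eta * gW1
    return W1, W2, np.tanh(X @ W1)

# Four 2-D points lying on a line: one hidden node suffices to describe them.
X = np.array([[0.1, 0.1], [0.5, 0.5], [-0.3, -0.3], [0.8, 0.8]])
W1, W2, Y = train_autoencoder(X, n_hidden=1)
print(Y)  # simplified 1-D description of X, used to train the next pair of layers
```

In full layer-wise pre-training, `Y` would play the role of the training data for the next autoencoder, and the collected encoder weights would initialise the rebuilt network.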
Recommended Further Reading
The work of Geoffrey Hinton, a leading researcher in Neural
Nets, who made significant contributions to the understanding
of the Backpropagation algorithm alongside David Rumelhart
and Ronald Williams in 1985 [2], is recommended.
References
[1] Michael Nielsen, Neural Networks and Deep Learning, 2014,
http://neuralnetworksanddeeplearning.com/chap2.html
[2] Rumelhart et al., 1985, Learning Internal Representations
by Error Propagation.