Mobile Game Programming:
Artificial Neural Network
jintaeks@gmail.com
Division of Digital Contents, DongSeo University.
December 2016
Standard Deviation
 The standard deviation (SD, also represented by sigma σ or s)
is a measure used to quantify the amount of variation in a set
of data values.
2
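For reference (not on the slide), the population standard deviation of N data values x1, …, xN with mean μ is:
σ = sqrt( ( (x1 − μ)² + (x2 − μ)² + … + (xN − μ)² ) / N )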
Root Mean Square
 The root mean square (abbreviated RMS) is defined as
the square root of the mean square (the arithmetic mean of
the squares of a set of numbers).
3
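For reference (not on the slide), the RMS of N values x1, …, xN is:
x_RMS = sqrt( ( x1² + x2² + … + xN² ) / N )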
Neural Network
 Neuron
– The human brain contains on the order of 100 billion neurons
 Dendrites
– receive input signals from other neurons; based on those inputs, the
neuron fires an output signal via its axon
 Axon
– carries that output signal on to other neurons
4
Learning
 Supervised Learning
 Unsupervised Learning
 Reinforcement Learning
5
 Supervised Learning —Essentially, a strategy that involves a
teacher that is smarter than the network itself. Take facial
recognition as an example. The teacher shows
the network a bunch of faces, and the teacher already knows
the name associated with each face. The network makes its
guesses, then the teacher provides the network with the
answers. The network can then compare its answers to the
known “correct” ones and make adjustments according to its
errors.
6
 Unsupervised Learning —Required when there isn’t an
example data set with known answers. Imagine searching for a
hidden pattern in a data set. An application of this is
clustering, i.e. dividing a set of elements into groups
according to some unknown pattern.
 Reinforcement Learning —A strategy built on observation.
Think of a little mouse running through a maze. If it turns left,
it gets a piece of cheese; if it turns right, it receives a little
shock. Presumably, the mouse will learn over time to turn left.
Its neural network makes a decision with an outcome (turn
left or right) and observes its environment (yum or ouch). If
the observation is negative, the network can adjust its weights
in order to make a different decision the next time.
Reinforcement learning is common in robotics.
7
Standard Uses of Neural Networks
 Pattern Recognition
– it’s probably the most common application. Examples are facial
recognition, optical character recognition, etc.
 Control
– You may have read about recent research advances in self-driving
cars. Neural networks are often used to manage steering decisions of
physical vehicles (or simulated ones).
 Anomaly Detection
– Because neural networks are so good at recognizing patterns, they
can also be trained to generate an output when something occurs
that doesn’t fit the pattern.
8
Perceptron
 Invented in 1957 by Frank Rosenblatt at the Cornell
Aeronautical Laboratory, a perceptron is the simplest neural
network possible: a computational model of a single neuron.
Step 1: Receive inputs
Input 0: 12
Input 1: 4
9
Step 2: Weight inputs
Weight 0: 0.5
Weight 1: -1
 We take each input and multiply it by its weight.
Input 0 * Weight 0 ⇒ 12 * 0.5 = 6
Input 1 * Weight 1 ⇒ 4 * -1 = -4
Step 3: Sum inputs
 The weighted inputs are then summed.
Sum = 6 + (-4) = 2
Step 4: Generate output
Output = sign(sum) ⇒ sign(2) ⇒ +1
10
The Perceptron Algorithm:
1. For every input, multiply that input by its weight.
2. Sum all of the weighted inputs.
3. Compute the output of the perceptron based on that sum
passed through an activation function (the sign of the sum).
float inputs[] = {12, 4};
float weights[] = {0.5, -1};
const int numInputs = 2; // inputs.length is not valid C++, so use an explicit count
float sum = 0;
for (int i = 0; i < numInputs; i++) {
sum += inputs[i]*weights[i];
}
11
The Perceptron Algorithm:
1. For every input, multiply that input by its weight.
2. Sum all of the weighted inputs.
3. Compute the output of the perceptron based on that sum
passed through an activation function (the sign of the sum).
float output = activate(sum);
int activate(float sum) { // Return a 1 if positive, -1 if negative.
if (sum > 0) return 1;
else return -1;
}
12
Simple Pattern Recognition using a Perceptron
 Consider a line in two-dimensional space.
 Points in that space can be classified as living on either one
side of the line or the other.
13
 Let’s say a perceptron has 2 inputs (the x- and y-coordinates
of a point).
 Using a sign activation function, the output will either be -1 or
+1.
 In the previous diagram, we can see how each point is either
below the line (-1) or above (+1).
14
Bias
 Let's consider the point (0,0).
– No matter what the weights are, the weighted sum will always be 0!
0 * weight for x = 0
0 * weight for y = 0
 The fix is to add a third input, called the bias, whose value is always 1.
1 * weight for bias = weight for bias
15
Coding the Perceptron
class Perceptron
{
private:
//The Perceptron stores its weights and learning constants.
float* spWeights;
int mWeightsSize = 0;
float c = 0.01; // learning constant
16
Initialize
Perceptron(int n)
{
mWeightsSize = n;
spWeights = new float[ n ];
//Weights start off random.
for (int i = 0; i < mWeightsSize; i++) {
spWeights[ i ] = random( -1, 1 );
}
}
17
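Note that random( -1, 1 ) is not a standard C++ function; the slides appear to assume a Processing-style helper. A minimal sketch of such a helper, assuming a simple rand()-based implementation is acceptable:
#include <cstdlib>
// Hypothetical helper: returns a uniform random float in [min, max].
float random( float min, float max )
{
return min + ( max - min ) * ( rand() / (float)RAND_MAX );
}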
Feed forward
int feedforward(float inputs[])
{
float sum = 0;
for (int i = 0; i < mWeightsSize; i++) {
sum += inputs[ i ] * spWeights[ i ];
}
return activate(sum);
}
//Output is a +1 or -1.
int activate(float sum)
{
if (sum > 0) return 1;
else return -1;
}
18
Use the Perceptron
Perceptron p( 3 );
float point[] = { 50, -12, 1 }; // The input is 3 values: x, y and bias.
int result = p.feedforward( point );
19
Supervised Learning
① Provide the perceptron with inputs for which there is a
known answer.
② Ask the perceptron to guess an answer.
③ Compute the error. (Did it get the answer right or wrong?)
④ Adjust all the weights according to the error.
⑤ Return to Step 1 and repeat!
20
The Perceptron's error
ERROR = DESIRED OUTPUT - GUESS OUTPUT
 The error is the determining factor in how the perceptron’s
weights should be adjusted
21
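Since the desired output and the guess are each either +1 or -1, the error can only take one of three values:
DESIRED   GUESS   ERROR
-1        -1       0
-1        +1      -2
+1        -1      +2
+1        +1       0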
 For any given weight, what we are looking to calculate is the
change in weight, often called Δweight.
NEW WEIGHT = WEIGHT + ΔWEIGHT
 Δweight is calculated as the error multiplied by the input.
ΔWEIGHT = ERROR × INPUT
 Therefore:
NEW WEIGHT = WEIGHT + ERROR × INPUT
22
Learning Constant
NEW WEIGHT = WEIGHT + ERROR * INPUT * LEARNING CONSTANT
 With a small learning constant, the weights will be adjusted
slowly, requiring more training time but allowing the network
to make very small adjustments that could improve the
network’s overall accuracy.
float c = 0.01; // learning constant
//Train the network against known data.
void train(float inputs[], int desired) {
int guess = feedforward(inputs);
float error = desired - guess;
for (int i = 0; i < mWeightsSize; i++) {
spWeights[ i ] += c * error * inputs[ i ];
}
}
23
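For example, a single training step for one point with a known answer might look like this (the point and answer are made up for illustration; with the line y = 2x + 1 used on the later slides, the point (50, -12) lies below the line, so its known answer is -1):
float point[] = { 50, -12, 1 }; // x, y and bias
p.train( point, -1 ); // nudge the weights toward the known answer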
Trainer
 To train the perceptron, we need a set of inputs with a
known answer.
class Trainer
{
public:
//A "Trainer" object stores the inputs and the correct answer.
float mInputs[ 3 ];
int mAnswer;
void SetData( float x, float y, int a )
{
mInputs[ 0 ] = x;
mInputs[ 1 ] = y;
//Note that the Trainer has the bias input built into its array.
mInputs[ 2 ] = 1;
mAnswer = a;
}
};
24
//The formula for a line
float f( float x )
{
return 2 * x + 1;
}
25
void Setup()
{
srand( time( 0 ) );
//size( 640, 360 );
spPerceptron.reset( new Perceptron( 3 ) );
// Make 2,000 training points.
for( int i = 0; i < gTrainerSize; i++ ) {
float x = random( -gWidth / 2, gWidth / 2 );
float y = random( -gHeight / 2, gHeight / 2 );
//Is the correct answer 1 or - 1 ?
int answer = 1;
if( y < f( x ) ) answer = -1;
gTraining[ i ].SetData( x, y, answer );
}
}
26
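Setup() and Training() use several globals and headers that never appear on the slides. A minimal sketch of what they might look like (the names follow the code above, but the exact values and types are assumptions):
#include <cstdlib> // rand, srand
#include <ctime> // time
#include <memory> // std::unique_ptr
const int gTrainerSize = 2000; // "Make 2,000 training points."
const float gWidth = 640, gHeight = 360; // matches the commented-out size( 640, 360 )
Trainer gTraining[ gTrainerSize ];
std::unique_ptr<Perceptron> spPerceptron; // reset() in Setup() creates the perceptron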
void Training()
{
for( int i = 0; i < gTrainerSize; i++ ) {
spPerceptron->train( gTraining[ i ].mInputs, gTraining[ i ].mAnswer );
}
}
int main()
{
Setup();
Training();
return 0;
}
27
Second Example: Computer Go
28
(Slides 29–34 are figure-only slides for the Computer Go example.)
Practice
 Calculate the weights of the perceptron for Go by hand, using paper
and pencil.
35
Limitations of the Perceptron
 Perceptrons are limited in their abilities: they can only solve
linearly separable problems.
– If you can classify the data with a straight line, then it is linearly
separable. On the right, however, is nonlinearly separable data.
36
 One of the simplest examples of a nonlinearly separable
problem is XOR(exclusive or).
37
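For reference, the XOR truth table; when the two inputs are plotted as points, no single straight line separates the 0 outputs from the 1 outputs:
A  B  A XOR B
0  0  0
0  1  1
1  0  1
1  1  0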
Multi-Layered Perceptron
 So perceptrons can't even solve something as simple as XOR.
But what if we made a network out of two perceptrons? If
one perceptron can solve OR and one perceptron can solve
NOT AND (NAND), then the two combined can solve XOR.
38
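A minimal sketch of that idea in C++ (the 0/1 step activation and the hand-picked weights are illustrative assumptions, not values from the slides; a small output perceptron combines, i.e. ANDs, the OR and NAND perceptrons):
int step( float sum ) { return sum > 0 ? 1 : 0; } // 0/1 step activation
// One perceptron with two inputs plus a bias input fixed at 1.
int perceptron( int a, int b, float wa, float wb, float wbias )
{
return step( a * wa + b * wb + 1 * wbias );
}
int xorNet( int a, int b )
{
int orOut = perceptron( a, b, 1.0f, 1.0f, -0.5f ); // fires for (0,1), (1,0), (1,1)
int nandOut = perceptron( a, b, -1.0f, -1.0f, 1.5f ); // fires for everything except (1,1)
return perceptron( orOut, nandOut, 1.0f, 1.0f, -1.5f ); // ANDs the two layers
}
// xorNet(0,0)=0, xorNet(0,1)=1, xorNet(1,0)=1, xorNet(1,1)=0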
Backpropagation
 The output of the network is generated in the same manner
as a perceptron. The inputs multiplied by the weights are
summed and fed forward through the network.
 The difference here is that they pass through additional layers
of neurons before reaching the output. Training the network
(i.e. adjusting the weights) still involves taking the
error (desired result − guess).
 The error, however, must be fed backwards through the
network. The final error ultimately adjusts the weights of all
the connections.
39
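The slides that follow illustrate this with figures. As a concrete reference, here is a minimal, self-contained sketch of backpropagation for a tiny 2-2-1 network learning XOR, using the sigmoid activation introduced later in the deck. The layer sizes, learning constant, and epoch count are arbitrary choices for illustration; a network this small can occasionally get stuck, in which case reinitialising the weights or adding hidden neurons helps.
#include <cmath>
#include <cstdio>
#include <cstdlib>

float sigmoid( float x ) { return 1.0f / ( 1.0f + expf( -x ) ); }
float randomWeight() { return -1.0f + 2.0f * ( rand() / (float)RAND_MAX ); }

int main()
{
    // 2 inputs (+ bias) -> 2 hidden sigmoid neurons (+ bias) -> 1 sigmoid output.
    float wHidden[ 2 ][ 3 ], wOut[ 3 ];
    for( int j = 0; j < 2; j++ )
        for( int i = 0; i < 3; i++ ) wHidden[ j ][ i ] = randomWeight();
    for( int i = 0; i < 3; i++ ) wOut[ i ] = randomWeight();

    const float in[ 4 ][ 2 ] = { {0,0}, {0,1}, {1,0}, {1,1} };
    const float target[ 4 ] = { 0, 1, 1, 0 };
    const float c = 0.5f; // learning constant

    for( int epoch = 0; epoch < 20000; epoch++ ) {
        for( int n = 0; n < 4; n++ ) {
            // Feed forward: weighted sums passed through the sigmoid.
            float x[ 3 ] = { in[ n ][ 0 ], in[ n ][ 1 ], 1.0f }; // third input is the bias
            float h[ 3 ];
            for( int j = 0; j < 2; j++ )
                h[ j ] = sigmoid( x[0]*wHidden[j][0] + x[1]*wHidden[j][1] + x[2]*wHidden[j][2] );
            h[ 2 ] = 1.0f; // hidden-layer bias
            float out = sigmoid( h[0]*wOut[0] + h[1]*wOut[1] + h[2]*wOut[2] );

            // Back propagation: the output error, scaled by the sigmoid's slope,
            // is fed backwards so that every weight in the network gets adjusted.
            float deltaOut = ( target[ n ] - out ) * out * ( 1 - out );
            for( int j = 0; j < 2; j++ ) {
                float deltaHidden = deltaOut * wOut[ j ] * h[ j ] * ( 1 - h[ j ] );
                for( int i = 0; i < 3; i++ ) wHidden[ j ][ i ] += c * deltaHidden * x[ i ];
            }
            for( int j = 0; j < 3; j++ ) wOut[ j ] += c * deltaOut * h[ j ];
        }
    }

    // Show what the trained network answers for the four XOR cases.
    for( int n = 0; n < 4; n++ ) {
        float x[ 3 ] = { in[ n ][ 0 ], in[ n ][ 1 ], 1.0f };
        float h[ 3 ];
        for( int j = 0; j < 2; j++ )
            h[ j ] = sigmoid( x[0]*wHidden[j][0] + x[1]*wHidden[j][1] + x[2]*wHidden[j][2] );
        h[ 2 ] = 1.0f;
        printf( "%g XOR %g -> %.2f\n", x[ 0 ], x[ 1 ], sigmoid( h[0]*wOut[0] + h[1]*wOut[1] + h[2]*wOut[2] ) );
    }
    return 0;
}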
Neural Network
 Single Layer
40
 Two Layers
41
Neural Network
42
(Slides 43–50 are figure-only slides.)
51
0.56 ≈ 1 / (1 + e^(−0.25))
Activation Functions
52
Sigmoid Function
 A sigmoid function is a mathematical function having an "S"
shape (sigmoid curve).
 A wide variety of sigmoid functions have been used as
the activation function of artificial neurons.
53
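A sketch of the logistic sigmoid and its derivative in C++; the derivative form σ′(x) = σ(x)(1 − σ(x)) is a standard property and is what makes the sigmoid convenient for backpropagation (the helper names are illustrative):
#include <cmath>
// Logistic sigmoid: squashes any real number into the range (0, 1).
float sigmoid( float x )
{
return 1.0f / ( 1.0f + expf( -x ) );
}
// Derivative of the sigmoid, written in terms of its output y = sigmoid(x).
float sigmoidDerivative( float y )
{
return y * ( 1.0f - y );
}
// Example: sigmoid( 0.25f ) is approximately 0.562, matching the 0.56 on the slide.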
0.56 ≈ 1 / (1 + e^(−0.25))
54
Practice
 Write a neural network program which recognizes a digit in an
8×8 image.
55
References
 http://natureofcode.com/book/chapter-10-neural-networks/
 http://rimstar.org/science_electronics_projects/backpropagatio
n_neural_network_software_3_layer.htm
 https://en.wikipedia.org/wiki/Sigmoid_function
 https://www.youtube.com/watch?v=WZDMNM36PsM
56