Artificial Neural Networks (ANNs)
Step-By-Step Training & Testing Example
MENOUFIA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATION
ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
Neural Networks & Classification
Linear Classifiers
Complex Data: Not Solved Linearly
Nonlinear Classifiers: Training
[Figure-only slides: simple data separated by a linear classifier, then complex data that cannot be solved linearly and requires a nonlinear classifier obtained by training]
Classification Example

R (RED)   G (GREEN)   B (BLUE)   Class
255       0           0          RED
248       80          68         RED
0         0           255        BLUE
67        15          210        BLUE
Neural Networks
[Network diagram for the color data: Input, Hidden, and Output layers]
Input Layer
[The three color components R, G, and B form the input layer]
Output Layer
[The output layer is a single neuron whose output Yj is the class label RED/BLUE]
Weights
[Each input is connected to the output neuron by a weight Wi: W1, W2, W3]
Activation Function
[The output neuron applies an activation function to produce Yj]

Activation Function Components
The activation function has two components: its inputs and its outputs.

Activation Function Inputs
The input to the activation function is s, the sum of products (SOP) of the inputs and their weights:
s = SOP(Xi, Wi), where Xi = Inputs and Wi = Weights
s = Σ (i = 1 to m) XiWi
s = (X1W1 + X2W2 + X3W3)

Activation Function Outputs
The output of the activation function, Yj = F(s), is the class label.
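As a concrete illustration of the sum of products, here is a minimal Python sketch; the function name sop and the example values are illustrative (the weights reuse W1-W3 from the training example later in the deck):

```python
def sop(inputs, weights):
    """Sum of products: s = X1*W1 + X2*W2 + ... + Xm*Wm."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))

# First sample from the color table: R=255, G=0, B=0
print(sop([255, 0, 0], [-2, 1, 6.2]))  # -510.0 (bias not yet included)
```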
Activation Functions
Candidate activation functions: Piecewise, Linear, Sigmoid, Signum.

Which activation function to use?
The number of outputs of the activation function should match the number of class labels. Here there are TWO class labels (RED and BLUE), so we need an activation function that gives two outputs: the Signum function.
Activation Function
The signum function is used as the activation function: Yj = sgn(s), where
sgn(s) = +1 if s ≥ 0
         -1 if s < 0
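A minimal Python version of the signum activation defined above (the name sgn is ours; the threshold at zero follows the slides' definition, and the test values come from the training example below):

```python
def sgn(s):
    """Signum activation: +1 if s >= 0, -1 otherwise."""
    return 1 if s >= 0 else -1

print(sgn(4.6), sgn(-511))  # 1 -1
```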
Bias
A bias is added through an extra input X0 = +1 with its own weight W0:
s = (X1W1 + X2W2 + X3W3)                (without bias)
s = (X0W0 + X1W1 + X2W2 + X3W3)         (with bias input X0)
s = (+1*W0 + X1W1 + X2W2 + X3W3)        (substituting X0 = +1)
s = (W0 + X1W1 + X2W2 + X3W3)
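A small sketch of folding the bias into the sum of products by prepending X0 = +1 (the helper name is ours; the values reuse X(0) and W(0) from the training example below):

```python
def sop_with_bias(rgb, w):
    """s = W0 + X1*W1 + X2*W2 + X3*W3, implemented by prepending X0 = +1."""
    x = [+1] + list(rgb)
    return sum(xi * wi for xi, wi in zip(x, w))

print(sop_with_bias([255, 0, 0], [-1, -2, 1, 6.2]))  # -511.0
```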
Bias Importance
Think of the straight line y = x + b, where b is the y-intercept:
• b = 0:  the line passes through the origin.
• b = +v: the line is shifted up by v.
• b = -v: the line is shifted down by v.
The same concept applies to the bias of a neuron: it shifts the decision boundary away from the origin.
s = Σ (i = 1 to m) XiWi + BIAS
Learning Rate
The learning rate η controls how much the weights change in each adaptation step:
0 ≤ η ≤ 1
Summary of Parameters
• Inputs Xm:                X(n) = (X0, X1, X2, X3)
• Weights Wm:               W(n) = (W0, W1, W2, W3)
• Bias b:                   X0 = +1 with weight W0
• Sum Of Products (SOP) s:  s = (X0W0 + X1W1 + X2W2 + X3W3)
• Activation Function:      sgn
• Outputs:                  Yj
• Learning Rate:            0 ≤ η ≤ 1
Other Parameters
• Step n: n = 0, 1, 2, …
• Desired Output dj:
  d(n) = -1 if x(n) belongs to C1 (RED)
  d(n) = +1 if x(n) belongs to C2 (BLUE)

R (RED)   G (GREEN)   B (BLUE)   Class
255       0           0          RED  = -1
248       80          68         RED  = -1
0         0           255        BLUE = +1
67        15          210        BLUE = +1
Neural Networks Training Steps
1. Weights Initialization
2. Inputs Application
3. Sum of Inputs-Weights Products
4. Activation Function Response Calculation
5. Weights Adaptation
6. Back to Step 2
Regarding 5th Step: Weights Adaptation
• If the predicted output Y is not the same as the desired output d,
  then the weights are adapted according to the following equation:
  W(n+1) = W(n) + η[d(n) - Y(n)]X(n)
  where
  W(n) = [b(n), W1(n), W2(n), W3(n), …, Wm(n)]
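A minimal Python sketch of this adaptation rule (function and variable names are ours); it leaves the weights untouched when the prediction already matches the desired output, exactly as described above:

```python
def adapt_weights(w, x, d, y, eta):
    """Perceptron rule: W(n+1) = W(n) + eta * (d - y) * X(n)."""
    if d == y:                     # prediction correct -> no adaptation
        return list(w)
    return [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
```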
Neural Networks Training Example
Step n=0
• In each step of the solution, the parameters of the neural network must be known.
• Parameters of step n=0:
  η = .001
  X(n) = X(0) = (+1, 255, 0, 0)
  W(n) = W(0) = (-1, -2, 1, 6.2)
  d(n) = d(0) = -1

Step n=0 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1 + 255*-2 + 0*1 + 0*6.2
  = -511

Step n=0 - Output
Y(n) = Y(0) = SGN(s) = SGN(-511) = -1 → RED

Step n=0 - Predicted Vs. Desired
Y(n) = Y(0) = -1
d(n) = d(0) = -1
∵ Y(n) = d(n)
∴ Weights are Correct. No Adaptation.
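This step can be checked numerically with the sop and sgn sketches shown earlier (illustrative helpers, not part of the slides):

```python
x0 = [+1, 255, 0, 0]     # bias input followed by R, G, B
w0 = [-1, -2, 1, 6.2]
s = sop(x0, w0)          # -511.0
y = sgn(s)               # -1 -> RED, matches d(0) = -1
print(s, y)
```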
Step n=1
• Parameters of step n=1:
  η = .001
  X(n) = X(1) = (+1, 248, 80, 68)
  W(n) = W(1) = W(0) = (-1, -2, 1, 6.2)
  d(n) = d(1) = -1

Step n=1 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1 + 248*-2 + 80*1 + 68*6.2
  = 4.6

Step n=1 - Output
Y(n) = Y(1) = SGN(s) = SGN(4.6) = +1 → BLUE

Step n=1 - Predicted Vs. Desired
Y(n) = Y(1) = +1
d(n) = d(1) = -1
∵ Y(n) ≠ d(n)
∴ Weights are Incorrect. Adaptation Required.
Weights Adaptation
• According to
  W(n+1) = W(n) + η[d(n) - Y(n)]X(n)
• Where n = 1
  W(1+1) = W(1) + η[d(1) - Y(1)]X(1)
  W(2) = (-1, -2, 1, 6.2) + .001[-1 - (+1)](+1, 248, 80, 68)
  W(2) = (-1, -2, 1, 6.2) + .001(-2)(+1, 248, 80, 68)
  W(2) = (-1, -2, 1, 6.2) + (-.002)(+1, 248, 80, 68)
  W(2) = (-1, -2, 1, 6.2) + (-.002, -.496, -.16, -.136)
  W(2) = (-1.002, -2.496, .84, 6.064)
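The same update can be reproduced with the adapt_weights sketch given earlier (an illustrative helper; printed values are subject to ordinary floating-point rounding):

```python
w1 = [-1, -2, 1, 6.2]
x1 = [+1, 248, 80, 68]
w2 = adapt_weights(w1, x1, d=-1, y=+1, eta=0.001)
print(w2)  # approximately [-1.002, -2.496, 0.84, 6.064]
```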
Step n=2
• Parameters of step n=2:
  η = .001
  X(n) = X(2) = (+1, 0, 0, 255)
  W(n) = W(2) = (-1.002, -2.496, .84, 6.064)
  d(n) = d(2) = +1

Step n=2 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 0*-2.496 + 0*.84 + 255*6.064
  = 1545.318

Step n=2 - Output
Y(n) = Y(2) = SGN(s) = SGN(1545.318) = +1 → BLUE

Step n=2 - Predicted Vs. Desired
Y(n) = Y(2) = +1
d(n) = d(2) = +1
∵ Y(n) = d(n)
∴ Weights are Correct. No Adaptation.
Step n=3
• Parameters of step n=3:
  η = .001
  X(n) = X(3) = (+1, 67, 15, 210)
  W(n) = W(3) = W(2) = (-1.002, -2.496, .84, 6.064)
  d(n) = d(3) = +1

Step n=3 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 67*-2.496 + 15*.84 + 210*6.064
  = 1117.806

Step n=3 - Output
Y(n) = Y(3) = SGN(s) = SGN(1117.806) = +1 → BLUE

Step n=3 - Predicted Vs. Desired
Y(n) = Y(3) = +1
d(n) = d(3) = +1
∵ Y(n) = d(n)
∴ Weights are Correct. No Adaptation.
Step n=4
• Parameters of step n=4:
  η = .001
  X(n) = X(4) = (+1, 255, 0, 0)
  W(n) = W(4) = W(3) = (-1.002, -2.496, .84, 6.064)
  d(n) = d(4) = -1

Step n=4 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 255*-2.496 + 0*.84 + 0*6.064
  = -637.482

Step n=4 - Output
Y(n) = Y(4) = SGN(s) = SGN(-637.482) = -1 → RED

Step n=4 - Predicted Vs. Desired
Y(n) = Y(4) = -1
d(n) = d(4) = -1
∵ Y(n) = d(n)
∴ Weights are Correct. No Adaptation.
Step n=5
• Parameters of step n=5:
  η = .001
  X(n) = X(5) = (+1, 248, 80, 68)
  W(n) = W(5) = W(4) = (-1.002, -2.496, .84, 6.064)
  d(n) = d(5) = -1

Step n=5 - SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 248*-2.496 + 80*.84 + 68*6.064
  = -140.458

Step n=5 - Output
Y(n) = Y(5) = SGN(s) = SGN(-140.458) = -1 → RED

Step n=5 - Predicted Vs. Desired
Y(n) = Y(5) = -1
d(n) = d(5) = -1
∵ Y(n) = d(n)
∴ Weights are Correct. No Adaptation.
Correct Weights
• After testing the weights across all samples and getting correct results for every one of them, we can conclude that the current weights are the correct ones for the trained neural network.
• After the training phase we move on to testing the neural network.
• What is the class of an unknown color with values R=150, G=100, B=180?
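For reference, here is a compact Python sketch of the whole training procedure above: a perceptron loop over the four color samples that keeps cycling until a full pass makes no change, matching the "Back to Step 2" loop. The initial weights, learning rate, and ±1 encoding follow the slides; function and variable names are ours.

```python
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

def sgn(s):
    return 1 if s >= 0 else -1

# (R, G, B) samples with desired outputs: RED = -1, BLUE = +1
samples = [([255, 0, 0], -1), ([248, 80, 68], -1),
           ([0, 0, 255], +1), ([67, 15, 210], +1)]
w = [-1, -2, 1, 6.2]   # (W0, W1, W2, W3); W0 acts as the bias
eta = 0.001

changed = True
while changed:                      # repeat passes until no adaptation occurs
    changed = False
    for rgb, d in samples:
        x = [+1] + rgb              # prepend the bias input X0 = +1
        y = sgn(sop(x, w))
        if y != d:                  # step 5: adapt only on a wrong prediction
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            changed = True

print(w)  # approximately [-1.002, -2.496, 0.84, 6.064]
```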
Testing Trained Neural Network
(R, G, B) = (150, 100, 180)
Trained Neural Network Parameters
η = .001
W = (-1.002, -2.496, .84, 6.064)

SOP
s = (X0W0 + X1W1 + X2W2 + X3W3)
  = +1*-1.002 + 150*-2.496 + 100*.84 + 180*6.064
  = 800.118

Output
Y = SGN(s) = SGN(800.118) = +1 → BLUE
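The same test can be reproduced with the trained weights (a sketch reusing the illustrative sop and sgn helpers defined above):

```python
w = [-1.002, -2.496, 0.84, 6.064]
rgb = [150, 100, 180]
s = sop([+1] + rgb, w)
print(s, sgn(s))  # approximately 800.118 and +1 -> BLUE
```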
