Artificial Neural Networks (ANNs)
Step-By-Step Training & Testing
Example 2
MENOUFIA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATION
ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
Classification Example

F1    F2    Class
121   16.8  C1
114   15.2  C1
210   9.4   C2
195   8.1   C2
Neural Networks

A neural network that solves this classification problem is organized into Input, Hidden, and Output layers.

[Scatter plot of the four training samples, F1 against F2, showing the two classes C1 and C2.]
Input Layer

Each sample has two features, F1 and F2, so the input layer holds two inputs, one per feature.
Output Layer

A single output Yj carries the predicted class label (C1 or C2), so the output layer holds one neuron.
Weights

Each input is connected to the output through a weight Wi: W1 on F1 and W2 on F2.
Activation Function

Between the weighted inputs and the final output sits the activation function. It has two components to specify: its inputs and its outputs.
Activation Function Inputs

The activation function receives a single value s, the sum of products (SOP) of the inputs Xi and the weights Wi:

s = SOP(Xi, Wi) = Σ_{i=1..m} Xi Wi = (X1 W1 + X2 W2)

where Xi = inputs and Wi = weights.
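As a quick concrete check, here is a minimal Python sketch of the SOP; the function name sop and the sample weights are illustrative choices, not values from the slides.

# Minimal sketch: sum of products (SOP) of inputs and weights.
def sop(X, W):
    """Return s = sum(X[i] * W[i]) over all inputs and weights."""
    return sum(x * w for x, w in zip(X, W))

# Hypothetical call, just to show the shapes involved:
X = [121, 16.8]   # (X1, X2) = (F1, F2) of the first sample
W = [0.5, -0.2]   # (W1, W2), arbitrary example weights
s = sop(X, W)     # 0.5*121 + (-0.2)*16.8 = 57.14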
Activation Function Outputs

The output of the activation function, F(s), is the predicted class label.
Activation Functions

Common choices include the Piecewise Linear, Sigmoid, and Signum functions.

Which activation function to use? The outputs must represent the class labels. There are TWO class labels, so we need an activation function that gives TWO outputs: the Signum function.
Activation Function

The signum function sgn is therefore applied to s, so the output is Yj = sgn(s).
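A one-line Python sketch of the signum activation, using the s ≥ 0 convention that the training example formalizes later:

# Signum activation: maps the SOP s to a class label (+1 or -1).
def sgn(s):
    return +1 if s >= 0 else -1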
Bias

A bias is treated as an extra input X0 = +1 with its own weight W0. The SOP grows from

s = (X1 W1 + X2 W2)

to

s = (X0 W0 + X1 W1 + X2 W2) = (+1 * W0 + X1 W1 + X2 W2) = (W0 + X1 W1 + X2 W2)
Bias Importance

Why add a bias at all? Consider the straight line y = ax + b, or simply y = x + b. The term b is the Y-intercept: with b = 0 the line is forced through the origin; with b = +v the line shifts up; with b = -v the line shifts down. Without b, the line could never fit points that do not pass through the origin.
Same Concept Applies to Bias

The bias plays the same role for the neuron, letting its decision boundary shift away from the origin:

S = Σ_{i=1..m} Xi Wi + BIAS
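To make the analogy concrete, here is a small sketch with made-up weights (W0 and W1 are illustrative values, not the example's): without a bias the boundary is forced through the origin, and W0 shifts it exactly as b shifts y = x + b.

# Made-up 1-input example: the boundary is where W0 + W1*x crosses zero.
W1 = 2.0
for W0 in (0.0, +4.0, -4.0):      # bias = 0, +v, -v
    x_cross = -W0 / W1            # solve W0 + W1*x = 0
    print(f"bias {W0:+}: boundary at x = {x_cross:+}")
# bias +0.0 -> boundary at x = -0.0 (forced through the origin)
# bias +4.0 -> boundary at x = -2.0 (shifted one way)
# bias -4.0 -> boundary at x = +2.0 (shifted the other way)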
Learning Rate

The learning rate η scales each weight update, where 0 ≤ η ≤ 1.
Summary of Parameters

Inputs: X(n) = (X0, X1, X2)
Weights: W(n) = (W0, W1, W2)
Bias: b
Sum Of Products (SOP): s = (X0 W0 + X1 W1 + X2 W2)
Activation Function: sgn
Outputs: Yj
Learning Rate: η, with 0 ≤ η ≤ 1
Other Parameters

Step: n = 0, 1, 2, ...
Desired Output: d(n) = +1 if x(n) belongs to C1, and d(n) = -1 if x(n) belongs to C2 (a small encoding sketch follows).
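Written in Python, the desired outputs of the four training samples might look like this (the dict layout is our choice, not the slides'):

# Desired outputs: +1 for class C1, -1 for class C2.
desired = {
    (121, 16.8): +1,   # C1
    (114, 15.2): +1,   # C1
    (210, 9.4):  -1,   # C2
    (195, 8.1):  -1,   # C2
}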
Neural Networks Training Steps

1. Weights Initialization
2. Inputs Application
3. Sum of Inputs-Weights Products
4. Activation Function Response Calculation
5. Weights Adaptation
6. Back to Step 2
Regarding 5th Step: Weights Adaptation

• If the predicted output Y is not the same as the desired output d, then the weights are adapted according to the following equation:

W(n+1) = W(n) + η [d(n) - Y(n)] X(n)

where W(n) = [b(n), W1(n), W2(n), W3(n), ..., Wm(n)]
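The six steps and the adaptation rule fit in a short loop. Below is a minimal Python sketch, assuming the samples are presented repeatedly until a full pass needs no adaptation; the function names and data layout are our assumptions, while the update line is exactly the rule above.

# Sketch of the perceptron training procedure described above.
def sgn(s):
    return +1 if s >= 0 else -1

def train(samples, W, eta):
    """samples: list of (X, d) with X = (X0, X1, X2) and X0 = +1 (bias input).
    Repeats full passes until every sample is predicted correctly."""
    while True:
        all_correct = True
        for X, d in samples:                       # step 2: apply inputs
            s = sum(x * w for x, w in zip(X, W))   # step 3: SOP
            Y = sgn(s)                             # step 4: activation response
            if Y != d:                             # step 5: adapt weights
                W = [w + eta * (d - Y) * x for w, x in zip(W, X)]
                all_correct = False
        if all_correct:                            # step 6: back to step 2 until clean
            return W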
Neural Networks Training Example
Step n=0

• In each step of the solution, the parameters of the neural network must be known.
• The class labels are encoded as C1 = +1 and C2 = -1.
• Parameters of step n=0:
η = 0.01
X(n) = X(0) = (+1, 121, 16.8)
W(n) = W(0) = (-1230, -30, 300)
d(n) = d(0) = +1
Step n=0 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1230) + (121)(-30) + (16.8)(300)
  = 180
Step n=0 - Output

Y(n) = Y(0) = SGN(s) = SGN(180) = +1, so the predicted class is C1.

where sgn(s) = +1 if s ≥ 0, and -1 if s < 0
Step n=0 - Predicted vs. Desired

Y(n) = Y(0) = +1
d(n) = d(0) = +1
∵ Y(n) = d(n) ∴ the weights are correct. No adaptation.
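The forward pass of step n=0 can be verified in a few lines of Python, with the numbers taken directly from this step:

X = (+1, 121, 16.8)       # (X0, X1, X2) for step n=0
W = (-1230, -30, 300)     # (W0, W1, W2)
s = sum(x * w for x, w in zip(X, W))
print(s)                  # 180.0, so Y = sgn(180) = +1 = d(0)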
Step n=1

• Parameters of step n=1:
η = 0.01
X(n) = X(1) = (+1, 114, 15.2)
W(n) = W(1) = W(0) = (-1230, -30, 300)
d(n) = d(1) = +1
Step n=1 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1230) + (114)(-30) + (15.2)(300)
  = -90
Step n=1 - Output

Y(n) = Y(1) = SGN(s) = SGN(-90) = -1, so the predicted class is C2.
Step n=1 - Predicted vs. Desired

Y(n) = Y(1) = -1
d(n) = d(1) = +1
∵ Y(n) ≠ d(n) ∴ the weights are incorrect. Adaptation required.
Weights Adaptation

• According to
W(n+1) = W(n) + η [d(n) - Y(n)] X(n)
• Where n = 1:
W(2) = W(1) + η [d(1) - Y(1)] X(1)
W(2) = (-1230, -30, 300) + 0.01 [+1 - (-1)] (+1, 114, 15.2)
W(2) = (-1230, -30, 300) + 0.01 (+2) (+1, 114, 15.2)
W(2) = (-1230, -30, 300) + 0.02 (+1, 114, 15.2)
W(2) = (-1230, -30, 300) + (+0.02, 2.28, 0.304)
W(2) = (-1229.98, -27.72, 300.304)
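The same update written out in Python, confirming the arithmetic above (the printed values may show tiny float rounding):

eta, d, Y = 0.01, +1, -1
X = (+1, 114, 15.2)
W1 = (-1230, -30, 300)
W2 = tuple(w + eta * (d - Y) * x for w, x in zip(W1, X))
print(W2)   # approximately (-1229.98, -27.72, 300.304)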
Step n=2

• Parameters of step n=2:
η = 0.01
X(n) = X(2) = (+1, 210, 9.4)
W(n) = W(2) = (-1229.98, -27.72, 300.304)
d(n) = d(2) = -1
Step n=2 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1229.98) + (210)(-27.72) + (9.4)(300.304)
  = -4228.3224
Step n=2 - Output

Y(n) = Y(2) = SGN(s) = SGN(-4228.3224) = -1, so the predicted class is C2.
Step n=2 - Predicted vs. Desired

Y(n) = Y(2) = -1
d(n) = d(2) = -1
∵ Y(n) = d(n) ∴ the weights are correct. No adaptation.
Step n=3

• Parameters of step n=3:
η = 0.01
X(n) = X(3) = (+1, 195, 8.1)
W(n) = W(3) = W(2) = (-1229.98, -27.72, 300.304)
d(n) = d(3) = -1
Step n=3 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1229.98) + (195)(-27.72) + (8.1)(300.304)
  = -4202.9176
Step n=3 - Output

Y(n) = Y(3) = SGN(s) = SGN(-4202.9176) = -1, so the predicted class is C2.
Step n=3 - Predicted vs. Desired

Y(n) = Y(3) = -1
d(n) = d(3) = -1
∵ Y(n) = d(n) ∴ the weights are correct. No adaptation.
Step n=4

• Parameters of step n=4:
η = 0.01
X(n) = X(4) = (+1, 121, 16.8)
W(n) = W(4) = W(3) = (-1229.98, -27.72, 300.304)
d(n) = d(4) = +1
Step n=4 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1229.98) + (121)(-27.72) + (16.8)(300.304)
  = 461.0072
Step n=4 - Output

Y(n) = Y(4) = SGN(s) = SGN(461.0072) = +1, so the predicted class is C1.
Step n=4 - Predicted vs. Desired

Y(n) = Y(4) = +1
d(n) = d(4) = +1
∵ Y(n) = d(n) ∴ the weights are correct. No adaptation.
Step n=5

• Parameters of step n=5:
η = 0.01
X(n) = X(5) = (+1, 114, 15.2)
W(n) = W(5) = W(4) = (-1229.98, -27.72, 300.304)
d(n) = d(5) = +1
Step n=5 - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1229.98) + (114)(-27.72) + (15.2)(300.304)
  = 174.5608
Step n=5 - Output

Y(n) = Y(5) = SGN(s) = SGN(174.5608) = +1, so the predicted class is C1.
Step n=5 - Predicted vs. Desired

Y(n) = Y(5) = +1
d(n) = d(5) = +1
∵ Y(n) = d(n) ∴ the weights are correct. No adaptation.
Correct Weights

• After testing the weights across all samples with correct results, we can conclude that the current weights are the correct ones for this neural network.
• After the training phase comes testing the neural network.
• What is the class of the unknown sample with values F1=140 and F2=17.9?
Testing the Trained Neural Network
(F1, F2) = (140, 17.9)

Trained neural network parameters:
W = (-1229.98, -27.72, 300.304)
Testing - SOP

s = (X0 W0 + X1 W1 + X2 W2)
  = (+1)(-1229.98) + (140)(-27.72) + (17.9)(300.304)
  = 264.6616
Testing - Output

Y = SGN(s) = SGN(264.6616) = +1

The unknown sample therefore belongs to class C1.
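Putting the whole example together, the following self-contained Python sketch reproduces the training walk-through (steps n=0 through n=5) and the final test. The loop structure and stopping rule are our assumptions; the samples, initial weights, learning rate, and update rule are the example's.

# End-to-end sketch: train on the four samples, then classify (140, 17.9).
def sgn(s):
    return +1 if s >= 0 else -1

samples = [                    # ((X0=+1, F1, F2), desired output)
    ((+1, 121, 16.8), +1),     # C1
    ((+1, 114, 15.2), +1),     # C1
    ((+1, 210, 9.4), -1),      # C2
    ((+1, 195, 8.1), -1),      # C2
]
W = [-1230, -30, 300]          # initial weights (W0, W1, W2)
eta = 0.01

correct = 0                    # consecutive correct predictions
n = 0                          # step counter, as in the slides
while correct < len(samples):  # stop once all samples pass in a row
    X, d = samples[n % len(samples)]
    s = sum(x * w for x, w in zip(X, W))
    Y = sgn(s)
    if Y == d:
        correct += 1
    else:                      # weights adaptation (step 5)
        W = [w + eta * (d - Y) * x for w, x in zip(W, X)]
        correct = 0
    n += 1

print(W)                       # approximately [-1229.98, -27.72, 300.304]
X_test = (+1, 140, 17.9)
s = sum(x * w for x, w in zip(X_test, W))
print("C1" if sgn(s) == +1 else "C2")   # C1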
More Related Content

What's hot

오토인코더의 모든 것
오토인코더의 모든 것오토인코더의 모든 것
오토인코더의 모든 것
NAVER Engineering
 
Strassen's matrix multiplication
Strassen's matrix multiplicationStrassen's matrix multiplication
Strassen's matrix multiplication
Megha V
 
Variational Autoencoder Tutorial
Variational Autoencoder Tutorial Variational Autoencoder Tutorial
Variational Autoencoder Tutorial
Hojin Yang
 
Perceptron & Neural Networks
Perceptron & Neural NetworksPerceptron & Neural Networks
Perceptron & Neural Networks
NAGUR SHAREEF SHAIK
 
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-StepBackpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Ahmed Gad
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descent
Muhammad Rasel
 
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | EdurekaSVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
Edureka!
 
Variational Autoencoder
Variational AutoencoderVariational Autoencoder
Variational Autoencoder
Mark Chang
 
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
Taehoon Kim
 
Classification Based Machine Learning Algorithms
Classification Based Machine Learning AlgorithmsClassification Based Machine Learning Algorithms
Classification Based Machine Learning Algorithms
Md. Main Uddin Rony
 
Generative adversarial networks
Generative adversarial networksGenerative adversarial networks
Generative adversarial networks
Yunjey Choi
 
Hill climbing algorithm
Hill climbing algorithmHill climbing algorithm
Hill climbing algorithm
Dr. C.V. Suresh Babu
 
Genetic algorithm
Genetic algorithmGenetic algorithm
Genetic algorithm
manalishipra
 
GoogLeNet Insights
GoogLeNet InsightsGoogLeNet Insights
GoogLeNet Insights
Auro Tripathy
 
Naive Bayes Presentation
Naive Bayes PresentationNaive Bayes Presentation
Naive Bayes Presentation
Md. Enamul Haque Chowdhury
 
Greedy Algorithm - Knapsack Problem
Greedy Algorithm - Knapsack ProblemGreedy Algorithm - Knapsack Problem
Greedy Algorithm - Knapsack Problem
Madhu Bala
 
Neural network (perceptron)
Neural network (perceptron)Neural network (perceptron)
Neural network (perceptron)
Jeonghun Yoon
 
Variational Autoencoders For Image Generation
Variational Autoencoders For Image GenerationVariational Autoencoders For Image Generation
Variational Autoencoders For Image Generation
Jason Anderson
 
Regularization in deep learning
Regularization in deep learningRegularization in deep learning
Regularization in deep learning
Kien Le
 

What's hot (20)

오토인코더의 모든 것
오토인코더의 모든 것오토인코더의 모든 것
오토인코더의 모든 것
 
Strassen's matrix multiplication
Strassen's matrix multiplicationStrassen's matrix multiplication
Strassen's matrix multiplication
 
Variational Autoencoder Tutorial
Variational Autoencoder Tutorial Variational Autoencoder Tutorial
Variational Autoencoder Tutorial
 
Perceptron & Neural Networks
Perceptron & Neural NetworksPerceptron & Neural Networks
Perceptron & Neural Networks
 
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-StepBackpropagation: Understanding How to Update ANNs Weights Step-by-Step
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
 
Feed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descentFeed forward ,back propagation,gradient descent
Feed forward ,back propagation,gradient descent
 
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | EdurekaSVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
SVM Algorithm Explained | Support Vector Machine Tutorial Using R | Edureka
 
Variational Autoencoder
Variational AutoencoderVariational Autoencoder
Variational Autoencoder
 
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
딥러닝과 강화 학습으로 나보다 잘하는 쿠키런 AI 구현하기 DEVIEW 2016
 
Classification Based Machine Learning Algorithms
Classification Based Machine Learning AlgorithmsClassification Based Machine Learning Algorithms
Classification Based Machine Learning Algorithms
 
Generative adversarial networks
Generative adversarial networksGenerative adversarial networks
Generative adversarial networks
 
Hill climbing algorithm
Hill climbing algorithmHill climbing algorithm
Hill climbing algorithm
 
Genetic algorithm
Genetic algorithmGenetic algorithm
Genetic algorithm
 
GoogLeNet Insights
GoogLeNet InsightsGoogLeNet Insights
GoogLeNet Insights
 
Naive Bayes Presentation
Naive Bayes PresentationNaive Bayes Presentation
Naive Bayes Presentation
 
Greedy Algorithm - Knapsack Problem
Greedy Algorithm - Knapsack ProblemGreedy Algorithm - Knapsack Problem
Greedy Algorithm - Knapsack Problem
 
Neural network (perceptron)
Neural network (perceptron)Neural network (perceptron)
Neural network (perceptron)
 
Link Analysis
Link AnalysisLink Analysis
Link Analysis
 
Variational Autoencoders For Image Generation
Variational Autoencoders For Image GenerationVariational Autoencoders For Image Generation
Variational Autoencoders For Image Generation
 
Regularization in deep learning
Regularization in deep learningRegularization in deep learning
Regularization in deep learning
 

Similar to Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & Testing Example 2

Brief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNsBrief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNs
Ahmed Gad
 
On the Step Explosion Problem
On the Step Explosion ProblemOn the Step Explosion Problem
On the Step Explosion Problem
Universität Rostock
 
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
Hsien-Hsin Sean Lee, Ph.D.
 
9920Lec12 FSM.ppt
9920Lec12 FSM.ppt9920Lec12 FSM.ppt
9920Lec12 FSM.ppt
SHASHISHARMA850123
 
20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)
HassanShah124
 
04 comb ex
04 comb ex04 comb ex
04 comb ex
Aravindharamanan S
 
Proyecto final curso – Electrónica Digital I
Proyecto final curso – Electrónica Digital IProyecto final curso – Electrónica Digital I
Proyecto final curso – Electrónica Digital I
Daniel A. Lopez Ch.
 
Cash flow
Cash flowCash flow
Cash flow
Ahmed Gamal
 
Analysis sequential circuits
Analysis sequential circuitsAnalysis sequential circuits
Analysis sequential circuits
G Subramaniamg
 
Lec17-Registers.ppt
Lec17-Registers.pptLec17-Registers.ppt
Lec17-Registers.ppt
priyadarsini47
 
To excel or not?
To excel or not?To excel or not?
To excel or not?
Filippo Selden
 
DC MACHINE WINDINGS
DC MACHINE WINDINGSDC MACHINE WINDINGS
DC MACHINE WINDINGS
AYAN ADHIKARY
 
How much time will be used for driving
How much time will be used for drivingHow much time will be used for driving
How much time will be used for drivingRuo Yang
 
Mecanismos
MecanismosMecanismos
Electrocraft drive selection_guide_catalog
Electrocraft drive selection_guide_catalogElectrocraft drive selection_guide_catalog
Electrocraft drive selection_guide_catalog
Electromate
 
Ee2365 nol part 2
Ee2365 nol part 2Ee2365 nol part 2
Ee2365 nol part 2
Arun Kumaar
 
Filter design and simulation
Filter design and simulationFilter design and simulation
Filter design and simulation
Sandesh Agrawal
 
Measuring the Precision of Multi-perspective Process Models
Measuring the Precision of Multi-perspective Process ModelsMeasuring the Precision of Multi-perspective Process Models
Measuring the Precision of Multi-perspective Process Models
Felix Mannhardt
 
Decision Table Based Testing
Decision Table Based TestingDecision Table Based Testing
Decision Table Based Testing
Himani Solanki
 

Similar to Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & Testing Example 2 (20)

Brief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNsBrief Introduction to Deep Learning + Solving XOR using ANNs
Brief Introduction to Deep Learning + Solving XOR using ANNs
 
On the Step Explosion Problem
On the Step Explosion ProblemOn the Step Explosion Problem
On the Step Explosion Problem
 
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
Lec16 Intro to Computer Engineering by Hsien-Hsin Sean Lee Georgia Tech -- Fi...
 
9920Lec12 FSM.ppt
9920Lec12 FSM.ppt9920Lec12 FSM.ppt
9920Lec12 FSM.ppt
 
Final Project
Final ProjectFinal Project
Final Project
 
20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)20 ms-me-amd-06 (simple linear regression)
20 ms-me-amd-06 (simple linear regression)
 
04 comb ex
04 comb ex04 comb ex
04 comb ex
 
Proyecto final curso – Electrónica Digital I
Proyecto final curso – Electrónica Digital IProyecto final curso – Electrónica Digital I
Proyecto final curso – Electrónica Digital I
 
Cash flow
Cash flowCash flow
Cash flow
 
Analysis sequential circuits
Analysis sequential circuitsAnalysis sequential circuits
Analysis sequential circuits
 
Lec17-Registers.ppt
Lec17-Registers.pptLec17-Registers.ppt
Lec17-Registers.ppt
 
To excel or not?
To excel or not?To excel or not?
To excel or not?
 
DC MACHINE WINDINGS
DC MACHINE WINDINGSDC MACHINE WINDINGS
DC MACHINE WINDINGS
 
How much time will be used for driving
How much time will be used for drivingHow much time will be used for driving
How much time will be used for driving
 
Mecanismos
MecanismosMecanismos
Mecanismos
 
Electrocraft drive selection_guide_catalog
Electrocraft drive selection_guide_catalogElectrocraft drive selection_guide_catalog
Electrocraft drive selection_guide_catalog
 
Ee2365 nol part 2
Ee2365 nol part 2Ee2365 nol part 2
Ee2365 nol part 2
 
Filter design and simulation
Filter design and simulationFilter design and simulation
Filter design and simulation
 
Measuring the Precision of Multi-perspective Process Models
Measuring the Precision of Multi-perspective Process ModelsMeasuring the Precision of Multi-perspective Process Models
Measuring the Precision of Multi-perspective Process Models
 
Decision Table Based Testing
Decision Table Based TestingDecision Table Based Testing
Decision Table Based Testing
 

More from Ahmed Gad

ICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic AlgorithmICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic Algorithm
Ahmed Gad
 
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
Ahmed Gad
 
Python for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd EditionPython for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd Edition
Ahmed Gad
 
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Ahmed Gad
 
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded ScenesM.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
Ahmed Gad
 
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Ahmed Gad
 
Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)
Ahmed Gad
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Ahmed Gad
 
Avoid Overfitting with Regularization
Avoid Overfitting with RegularizationAvoid Overfitting with Regularization
Avoid Overfitting with Regularization
Ahmed Gad
 
Genetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step ExampleGenetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step Example
Ahmed Gad
 
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression AnalysisICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
Ahmed Gad
 
Computer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and GradientComputer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and Gradient
Ahmed Gad
 
Python for Computer Vision - Revision
Python for Computer Vision - RevisionPython for Computer Vision - Revision
Python for Computer Vision - Revision
Ahmed Gad
 
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia CourseAnime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
Ahmed Gad
 
Operations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by ExampleOperations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by Example
Ahmed Gad
 
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingMATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
Ahmed Gad
 
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
Ahmed Gad
 
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Ahmed Gad
 
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Ahmed Gad
 
Introduction to Digital Signal Processing (DSP) - Course Notes
Introduction to Digital Signal Processing (DSP) - Course NotesIntroduction to Digital Signal Processing (DSP) - Course Notes
Introduction to Digital Signal Processing (DSP) - Course Notes
Ahmed Gad
 

More from Ahmed Gad (20)

ICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic AlgorithmICEIT'20 Cython for Speeding-up Genetic Algorithm
ICEIT'20 Cython for Speeding-up Genetic Algorithm
 
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
 
Python for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd EditionPython for Computer Vision - Revision 2nd Edition
Python for Computer Vision - Revision 2nd Edition
 
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
 
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded ScenesM.Sc. Thesis - Automatic People Counting in Crowded Scenes
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
 
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
 
Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)Introduction to Optimization with Genetic Algorithm (GA)
Introduction to Optimization with Genetic Algorithm (GA)
 
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
 
Avoid Overfitting with Regularization
Avoid Overfitting with RegularizationAvoid Overfitting with Regularization
Avoid Overfitting with Regularization
 
Genetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step ExampleGenetic Algorithm (GA) Optimization - Step-by-Step Example
Genetic Algorithm (GA) Optimization - Step-by-Step Example
 
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression AnalysisICCES 2017 - Crowd Density Estimation Method using Regression Analysis
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
 
Computer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and GradientComputer Vision: Correlation, Convolution, and Gradient
Computer Vision: Correlation, Convolution, and Gradient
 
Python for Computer Vision - Revision
Python for Computer Vision - RevisionPython for Computer Vision - Revision
Python for Computer Vision - Revision
 
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia CourseAnime Studio Pro 10 Tutorial as Part of Multimedia Course
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
 
Operations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by ExampleOperations in Digital Image Processing + Convolution by Example
Operations in Digital Image Processing + Convolution by Example
 
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and TrackingMATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
 
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...MATLAB Code + Description : Very Simple Automatic English Optical Character R...
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
 
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...Graduation Project - Face Login : A Robust Face Identification System for Sec...
Graduation Project - Face Login : A Robust Face Identification System for Sec...
 
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
 
Introduction to Digital Signal Processing (DSP) - Course Notes
Introduction to Digital Signal Processing (DSP) - Course NotesIntroduction to Digital Signal Processing (DSP) - Course Notes
Introduction to Digital Signal Processing (DSP) - Course Notes
 

Recently uploaded

Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
Anna Sz.
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
Special education needs
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumers
PedroFerreira53928
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
siemaillard
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
joachimlavalley1
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
Jheel Barad
 
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
Nguyen Thanh Tu Collection
 
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
ESC Beyond Borders _From EU to You_ InfoPack general.pdfESC Beyond Borders _From EU to You_ InfoPack general.pdf
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
Fundacja Rozwoju Społeczeństwa Przedsiębiorczego
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
siemaillard
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
MIRIAMSALINAS13
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
Celine George
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
JosvitaDsouza2
 
Template Jadual Bertugas Kelas (Boleh Edit)
Template Jadual Bertugas Kelas (Boleh Edit)Template Jadual Bertugas Kelas (Boleh Edit)
Template Jadual Bertugas Kelas (Boleh Edit)
rosedainty
 
The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
kaushalkr1407
 

Recently uploaded (20)

Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
Basic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumersBasic phrases for greeting and assisting costumers
Basic phrases for greeting and assisting costumers
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
 
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
GIÁO ÁN DẠY THÊM (KẾ HOẠCH BÀI BUỔI 2) - TIẾNG ANH 8 GLOBAL SUCCESS (2 CỘT) N...
 
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
ESC Beyond Borders _From EU to You_ InfoPack general.pdfESC Beyond Borders _From EU to You_ InfoPack general.pdf
ESC Beyond Borders _From EU to You_ InfoPack general.pdf
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXXPhrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
Phrasal Verbs.XXXXXXXXXXXXXXXXXXXXXXXXXX
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
 
Template Jadual Bertugas Kelas (Boleh Edit)
Template Jadual Bertugas Kelas (Boleh Edit)Template Jadual Bertugas Kelas (Boleh Edit)
Template Jadual Bertugas Kelas (Boleh Edit)
 
The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
 

Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & Testing Example 2

  • 1. Artificial Neural Networks (ANNs) Step-By-Step Training & Testing Example 2 MENOUFIA UNIVERSITY FACULTY OF COMPUTERS AND INFORMATION ALL DEPARTMENTS ARTIFICIAL INTELLIGENCE ‫المنوفية‬ ‫جامعة‬ ‫والمعلومات‬ ‫الحاسبات‬ ‫كلية‬ ‫األقسام‬ ‫جميع‬ ‫الذكاء‬‫اإلصطناعي‬ ‫المنوفية‬ ‫جامعة‬ Ahmed Fawzy Gad ahmed.fawzy@ci.menofia.edu.eg
  • 2. Classification Example 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195
  • 3. Neural Networks 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 Input Hidden Output
  • 4. Neural Networks 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20
  • 5. Input Layer Input Output 𝑭 𝟏 𝑭 𝟐 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20
  • 6. Output Layer Input Output C1/C2 𝒀𝒋 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 𝑭 𝟏 𝑭 𝟐
  • 7. Weights Input Output Weights=𝑾𝒊 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 8. Activation Function Input Output 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 9. Activation Function Input Output 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 10. Activation Function Input Output 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 11. Activation Function Components Input Output 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 12. Activation Function Inputs Input Output s 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 13. Activation Function Inputs Input Output ss=SOP(𝑿𝒊, 𝑾𝒊) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 14. Activation Function Inputs Input Output ss=SOP(𝑿𝒊, 𝑾𝒊) 𝑿𝒊=Inputs 𝑾𝒊=Weights 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 15. Activation Function Inputs Input Output ss=SOP(𝑿𝒊, 𝑾𝒊) 𝑿𝒊=Inputs 𝑾𝒊=Weights 𝑿 𝟏 𝑿 𝟐 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 16. Activation Function Inputs Input Output ss=SOP(𝑿𝒊, 𝑾𝒊) 𝑿𝒊=Inputs 𝑾𝒊=Weights S= 𝟏 𝒎 𝑿𝒊 𝑾𝒊 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 17. Activation Function Inputs Input Output ss=SOP(𝑿𝒊, 𝑾𝒊) 𝑿𝒊=Inputs 𝑾𝒊=Weights s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 18. Activation Function Outputs Input Output F(s)s Class Label 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 20. Activation Functions Which activation function to use? Outputs Class Labels Activation Function TWO Class Labels TWO Outputs One that gives two outputs. Which activation function to use? 𝑪𝒋𝒀𝒋 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195
  • 22. Activation Function Input Output F(s)s sgn 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 0 50 100 150 200 250 0 5 10 15 20 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 23. Bias Input Output F(s)s sgn =+1𝑿 𝟎 W0 s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 24. Bias Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑿 𝟎 𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 25. Bias Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑿 𝟎 𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑿 𝟎 = +𝟏 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 26. Bias Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑿 𝟎 𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(+𝟏 ∗ 𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 27. Bias Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑿 𝟎 𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 28. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 29. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 30. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=ax+b 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 31. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 32. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 33. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=0 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 34. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=0 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 35. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=+v 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 36. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=+v 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 37. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=-v 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 38. Bias Importance Input Output F(s)s sgn s=(𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) s=(𝑾 𝟎+𝑿 𝟏 𝑾 𝟏+𝑿 𝟐 𝑾 𝟐) X Y y=x+b Y-Intercept b=-v 𝑭 𝟐𝑭 𝟏 16.8121 C1 15.2114 9.4210 C2 8.1195 =+1𝑿 𝟎 W0 𝑿 𝟏 𝑿 𝟐 C1/C2 𝒀𝒋 𝑭 𝟏 𝑭 𝟐 𝑾 𝟏 𝑾 𝟐
  • 39-43. Bias Importance: The same concept applies to the neuron's bias: it shifts the decision boundary without changing its orientation. The sum of products becomes S = sum(i=1..m) XiWi + BIAS.
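To make the forward pass concrete, here is a minimal Python sketch (not part of the slides) of the SOP-plus-signum neuron with the bias folded in as X0 = +1; the sample and weight values are taken from the training example that follows.

```python
def sop(x, w):
    # Sum of products s = sum(Xi * Wi); x[0] = +1 is the bias input,
    # so w[0] plays the role of the bias term W0.
    return sum(xi * wi for xi, wi in zip(x, w))

def sgn(s):
    # Signum activation: +1 if s >= 0, otherwise -1.
    return 1 if s >= 0 else -1

x = [1, 121, 16.8]      # (X0, F1, F2): the first C1 sample
w = [-1230, -30, 300]   # (W0, W1, W2): the initial weights used below
print(sgn(sop(x, w)))   # 1, i.e. class C1
```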
  • 44. Learning Rate: The learning rate η controls the size of each weight update, with 0 ≤ η ≤ 1.
  • 45-51. Summary of Parameters: inputs X(n) = (X0, X1, X2); weights W(n) = (W0, W1, W2); bias b (carried as weight W0 on the fixed input X0 = +1); sum of products (SOP) s = X0W0 + X1W1 + X2W2; activation function sgn; outputs Yj; learning rate η with 0 ≤ η ≤ 1.
  • 52. Other Parameters: step index n = 0, 1, 2, …
  • 53. Other Parameters: desired output d(n) = +1 if x(n) belongs to C1, and d(n) = -1 if x(n) belongs to C2.
  • 54. Neural Networks Training Steps: (1) weights initialization, (2) inputs application, (3) sum of inputs-weights products, (4) activation function response calculation, (5) weights adaptation, (6) back to step 2.
  • 55. Regarding the 5th Step, Weights Adaptation: if the predicted output Y(n) is not the same as the desired output d(n), the weights are adapted according to W(n+1) = W(n) + η(d(n) - Y(n))X(n), where W(n) = [b(n), W1(n), W2(n), …, Wm(n)].
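The six training steps and the adaptation rule fit naturally into a short program. Below is a minimal Python sketch, not taken from the slides: it cycles through the four training samples in order ("back to step 2") until a full pass produces no weight change, which is the stopping criterion used at the end of the example. The sample values, initial weights, and η = 0.01 are the ones used in the walkthrough that follows.

```python
# Perceptron training for the color-classification example.
# Samples are (X0=+1, F1, F2) with desired output d: +1 for C1, -1 for C2.
samples = [
    ([1, 121, 16.8], +1),   # C1
    ([1, 114, 15.2], +1),   # C1
    ([1, 210, 9.4],  -1),   # C2
    ([1, 195, 8.1],  -1),   # C2
]
w = [-1230, -30, 300]       # step 1: initial weights (W0, W1, W2)
eta = 0.01                  # learning rate

converged = False
while not converged:
    converged = True
    for x, d in samples:                               # step 2: apply inputs
        s = sum(xi * wi for xi, wi in zip(x, w))       # step 3: SOP
        y = 1 if s >= 0 else -1                        # step 4: sgn response
        if y != d:                                     # step 5: adapt weights
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            converged = False

print(w)   # ~ [-1229.98, -27.72, 300.304], matching the walkthrough below
```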
  • 56. Neural Networks Training Example, Step n=0: in each step of the solution, the parameters of the neural network must be known. For n = 0: η = 0.01, X(0) = (+1, 121, 16.8), W(0) = (-1230, -30, 300), d(0) = +1. [Data table: (F1, F2) = (121, 16.8) and (114, 15.2) belong to C1 = +1; (210, 9.4) and (195, 8.1) belong to C2 = -1.]
  • 57. Step n=0: [Network diagram with X1 = 121, X2 = 16.8 and weights W0 = -1230, W1 = -30, W2 = 300.]
  • 58. Step n=0, SOP: s = X0W0 + X1W1 + X2W2 = (+1)(-1230) + (121)(-30) + (16.8)(300) = 180.
  • 59. Step n=0, Output: Y(n) = Y(0) = sgn(s) = sgn(180) = +1, where sgn(s) = +1 if s ≥ 0 and -1 if s < 0.
  • 60. Step n=0, Output: since Y(0) = +1, the sample is classified as C1.
  • 61. Step n=0, Predicted vs. Desired: Y(0) = +1 and d(0) = +1. Since Y(n) = d(n), the weights are correct; no adaptation.
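As a quick check, the step n=0 numbers can be reproduced directly; the values below are the ones from the slides.

```python
x = [1, 121, 16.8]          # X(0) = (X0, F1, F2)
w = [-1230, -30, 300]       # W(0)
s = sum(xi * wi for xi, wi in zip(x, w))
print(s)                    # ~ 180
print(1 if s >= 0 else -1)  # 1, i.e. C1, matching d(0) = +1
```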
  • 62. Step n=1: parameters: η = 0.01, X(1) = (+1, 114, 15.2), W(1) = W(0) = (-1230, -30, 300), d(1) = +1.
  • 63. Step n=1: [Network diagram with X1 = 114, X2 = 15.2 and weights W = (-1230, -30, 300).]
  • 64. Step n=1, SOP: s = (+1)(-1230) + (114)(-30) + (15.2)(300) = -90.
  • 65. Step n=1, Output: Y(n) = Y(1) = sgn(s) = sgn(-90) = -1.
  • 66. Step n=1, Output: since Y(1) = -1, the sample is classified as C2.
  • 67. Step n=1, Predicted vs. Desired: Y(1) = -1 but d(1) = +1. Since Y(n) ≠ d(n), the weights are incorrect; adaptation is required.
  • 68. Weights Adaptation: according to W(n+1) = W(n) + η(d(n) - Y(n))X(n), with n = 1:
    W(2) = W(1) + η(d(1) - Y(1))X(1)
    W(2) = (-1230, -30, 300) + 0.01(+1 - (-1))(+1, 114, 15.2)
    W(2) = (-1230, -30, 300) + 0.01(+2)(+1, 114, 15.2)
    W(2) = (-1230, -30, 300) + 0.02(+1, 114, 15.2)
    W(2) = (-1230, -30, 300) + (+0.02, 2.28, 0.304)
    W(2) = (-1229.98, -27.72, 300.304)
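The same adaptation is a one-liner to verify in Python, using the values from the slide above.

```python
w1 = [-1230, -30, 300]      # W(1)
x1 = [1, 114, 15.2]         # X(1)
eta, d, y = 0.01, 1, -1     # learning rate, desired, predicted
w2 = [wi + eta * (d - y) * xi for wi, xi in zip(w1, x1)]
print(w2)                   # ~ [-1229.98, -27.72, 300.304]
```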
  • 69. Step n=2: parameters: η = 0.01, X(2) = (+1, 210, 9.4), W(2) = (-1229.98, -27.72, 300.304), d(2) = -1.
  • 70. Step n=2: [Network diagram with X1 = 210, X2 = 9.4 and weights W = (-1229.98, -27.72, 300.304).]
  • 71. Step n=2, SOP: s = (+1)(-1229.98) + (210)(-27.72) + (9.4)(300.304) = -4228.3224.
  • 72. Step n=2, Output: Y(n) = Y(2) = sgn(s) = sgn(-4228.3224) = -1.
  • 73. Step n=2, Output: since Y(2) = -1, the sample is classified as C2.
  • 74. Step n=2, Predicted vs. Desired: Y(2) = -1 and d(2) = -1. Since Y(n) = d(n), the weights are correct; no adaptation.
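Checking that the adapted weights now handle the first C2 sample (a verification snippet, not part of the slides):

```python
w = [-1229.98, -27.72, 300.304]   # W(2)
x = [1, 210, 9.4]                 # X(2)
s = sum(xi * wi for xi, wi in zip(x, w))
print(s)                          # ~ -4228.3224
print(1 if s >= 0 else -1)        # -1, i.e. C2, matching d(2) = -1
```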
  • 75. Step n=3: parameters: η = 0.01, X(3) = (+1, 195, 8.1), W(3) = W(2) = (-1229.98, -27.72, 300.304), d(3) = -1.
  • 76. Step n=3: [Network diagram with X1 = 195, X2 = 8.1 and weights W = (-1229.98, -27.72, 300.304).]
  • 77. Step n=3, SOP: s = (+1)(-1229.98) + (195)(-27.72) + (8.1)(300.304) = -4202.9176.
  • 78. Step n=3, Output: Y(n) = Y(3) = sgn(s) = sgn(-4202.9176) = -1.
  • 79. Step n=3, Output: since Y(3) = -1, the sample is classified as C2.
  • 80. Step n=3, Predicted vs. Desired: Y(3) = -1 and d(3) = -1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 81. Step n=4: parameters: η = 0.01, X(4) = (+1, 121, 16.8), W(4) = W(3) = (-1229.98, -27.72, 300.304), d(4) = +1.
  • 82. Step n=4: [Network diagram with X1 = 121, X2 = 16.8 and weights W = (-1229.98, -27.72, 300.304).]
  • 83. Step n=4, SOP: s = (+1)(-1229.98) + (121)(-27.72) + (16.8)(300.304) = 461.0072.
  • 84. Step n=4, Output: Y(n) = Y(4) = sgn(s) = sgn(461.0072) = +1.
  • 85. Step n=4, Output: since Y(4) = +1, the sample is classified as C1.
  • 86. Step n=4, Predicted vs. Desired: Y(4) = +1 and d(4) = +1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 87. Step n=5: parameters: η = 0.01, X(5) = (+1, 114, 15.2), W(5) = W(4) = (-1229.98, -27.72, 300.304), d(5) = +1.
  • 88. Step n=5: [Network diagram with X1 = 114, X2 = 15.2 and weights W = (-1229.98, -27.72, 300.304).]
  • 89. Step n=5, SOP: s = (+1)(-1229.98) + (114)(-27.72) + (15.2)(300.304) = 174.5608.
  • 90. Step n=5, Output: Y(n) = Y(5) = sgn(s) = sgn(174.5608) = +1.
  • 91. Step n=5, Output: since Y(5) = +1, the sample is classified as C1.
  • 92. Step n=5, Predicted vs. Desired: Y(5) = +1 and d(5) = +1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 93. Correct Weights: Because the weights now classify every training sample correctly, we conclude that the current weights are the correct, trained weights of the neural network. After the training phase comes the testing phase: what is the class of an unknown color with feature values F1 = 140 and F2 = 17.9?
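That claim is easy to verify with the final weights; the loop below (a check, not part of the slides) shows that every training sample is predicted correctly.

```python
w = [-1229.98, -27.72, 300.304]   # final trained weights
samples = [([1, 121, 16.8], +1), ([1, 114, 15.2], +1),
           ([1, 210, 9.4],  -1), ([1, 195, 8.1],  -1)]
for x, d in samples:
    s = sum(xi * wi for xi, wi in zip(x, w))
    y = 1 if s >= 0 else -1
    print((x[1], x[2]), "predicted", y, "desired", d)   # all match
```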
  • 94. Testing the Trained Neural Network, (F1, F2) = (140, 17.9): the trained network's parameters are W = (-1229.98, -27.72, 300.304).
  • 95. Testing, SOP: s = X0W0 + X1W1 + X2W2 = (+1)(-1229.98) + (140)(-27.72) + (17.9)(300.304) = 264.6616.
  • 96. Testing, Output: Y = sgn(s) = sgn(264.6616) = +1, where sgn(s) = +1 if s ≥ 0 and -1 if s < 0.
  • 97. Testing, Output: since Y = +1, the unknown color belongs to class C1.
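The same test in code, using the trained weights from the slides:

```python
w = [-1229.98, -27.72, 300.304]   # trained weights
x = [1, 140, 17.9]                # unknown sample (X0, F1, F2)
s = sum(xi * wi for xi, wi in zip(x, w))
print(s)                          # ~ 264.66
print("C1" if s >= 0 else "C2")   # C1
```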